Sep 10 00:12:37.842844 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Sep 10 00:12:37.842864 kernel: Linux version 6.6.104-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Sep 9 22:41:53 -00 2025 Sep 10 00:12:37.842873 kernel: KASLR enabled Sep 10 00:12:37.842879 kernel: efi: EFI v2.7 by EDK II Sep 10 00:12:37.842885 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18 Sep 10 00:12:37.842891 kernel: random: crng init done Sep 10 00:12:37.842898 kernel: ACPI: Early table checksum verification disabled Sep 10 00:12:37.842904 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS ) Sep 10 00:12:37.842910 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013) Sep 10 00:12:37.842917 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Sep 10 00:12:37.842923 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 10 00:12:37.842929 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Sep 10 00:12:37.842935 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 10 00:12:37.842941 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 10 00:12:37.842949 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 10 00:12:37.842957 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 10 00:12:37.842963 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Sep 10 00:12:37.842970 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 10 00:12:37.842976 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Sep 10 00:12:37.842982 kernel: NUMA: Failed to 
initialise from firmware Sep 10 00:12:37.842989 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Sep 10 00:12:37.842995 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff] Sep 10 00:12:37.843001 kernel: Zone ranges: Sep 10 00:12:37.843008 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Sep 10 00:12:37.843014 kernel: DMA32 empty Sep 10 00:12:37.843021 kernel: Normal empty Sep 10 00:12:37.843027 kernel: Movable zone start for each node Sep 10 00:12:37.843034 kernel: Early memory node ranges Sep 10 00:12:37.843040 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff] Sep 10 00:12:37.843047 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] Sep 10 00:12:37.843053 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] Sep 10 00:12:37.843059 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Sep 10 00:12:37.843066 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Sep 10 00:12:37.843072 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Sep 10 00:12:37.843078 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Sep 10 00:12:37.843085 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Sep 10 00:12:37.843091 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Sep 10 00:12:37.843099 kernel: psci: probing for conduit method from ACPI. Sep 10 00:12:37.843105 kernel: psci: PSCIv1.1 detected in firmware. 
Sep 10 00:12:37.843111 kernel: psci: Using standard PSCI v0.2 function IDs Sep 10 00:12:37.843120 kernel: psci: Trusted OS migration not required Sep 10 00:12:37.843127 kernel: psci: SMC Calling Convention v1.1 Sep 10 00:12:37.843134 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Sep 10 00:12:37.843142 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 Sep 10 00:12:37.843149 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 Sep 10 00:12:37.843156 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Sep 10 00:12:37.843163 kernel: Detected PIPT I-cache on CPU0 Sep 10 00:12:37.843169 kernel: CPU features: detected: GIC system register CPU interface Sep 10 00:12:37.843176 kernel: CPU features: detected: Hardware dirty bit management Sep 10 00:12:37.843183 kernel: CPU features: detected: Spectre-v4 Sep 10 00:12:37.843190 kernel: CPU features: detected: Spectre-BHB Sep 10 00:12:37.843197 kernel: CPU features: kernel page table isolation forced ON by KASLR Sep 10 00:12:37.843204 kernel: CPU features: detected: Kernel page table isolation (KPTI) Sep 10 00:12:37.843212 kernel: CPU features: detected: ARM erratum 1418040 Sep 10 00:12:37.843231 kernel: CPU features: detected: SSBS not fully self-synchronizing Sep 10 00:12:37.843238 kernel: alternatives: applying boot alternatives Sep 10 00:12:37.843247 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=9519a2b52292e68cf8bced92b7c71fffa7243efe8697174d43c360b4308144c8 Sep 10 00:12:37.843254 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Sep 10 00:12:37.843261 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 10 00:12:37.843268 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 10 00:12:37.843274 kernel: Fallback order for Node 0: 0 Sep 10 00:12:37.843281 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Sep 10 00:12:37.843288 kernel: Policy zone: DMA Sep 10 00:12:37.843295 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 10 00:12:37.843303 kernel: software IO TLB: area num 4. Sep 10 00:12:37.843310 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) Sep 10 00:12:37.843317 kernel: Memory: 2386404K/2572288K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 185884K reserved, 0K cma-reserved) Sep 10 00:12:37.843324 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Sep 10 00:12:37.843332 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 10 00:12:37.843339 kernel: rcu: RCU event tracing is enabled. Sep 10 00:12:37.843346 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Sep 10 00:12:37.843353 kernel: Trampoline variant of Tasks RCU enabled. Sep 10 00:12:37.843360 kernel: Tracing variant of Tasks RCU enabled. Sep 10 00:12:37.843367 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Sep 10 00:12:37.843374 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Sep 10 00:12:37.843382 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Sep 10 00:12:37.843389 kernel: GICv3: 256 SPIs implemented Sep 10 00:12:37.843396 kernel: GICv3: 0 Extended SPIs implemented Sep 10 00:12:37.843402 kernel: Root IRQ handler: gic_handle_irq Sep 10 00:12:37.843409 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Sep 10 00:12:37.843416 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Sep 10 00:12:37.843422 kernel: ITS [mem 0x08080000-0x0809ffff] Sep 10 00:12:37.843429 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) Sep 10 00:12:37.843436 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) Sep 10 00:12:37.843443 kernel: GICv3: using LPI property table @0x00000000400f0000 Sep 10 00:12:37.843449 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Sep 10 00:12:37.843456 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 10 00:12:37.843464 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 10 00:12:37.843471 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Sep 10 00:12:37.843478 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Sep 10 00:12:37.843485 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Sep 10 00:12:37.843491 kernel: arm-pv: using stolen time PV Sep 10 00:12:37.843506 kernel: Console: colour dummy device 80x25 Sep 10 00:12:37.843515 kernel: ACPI: Core revision 20230628 Sep 10 00:12:37.843524 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 
50.00 BogoMIPS (lpj=25000) Sep 10 00:12:37.843532 kernel: pid_max: default: 32768 minimum: 301 Sep 10 00:12:37.843539 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Sep 10 00:12:37.843548 kernel: landlock: Up and running. Sep 10 00:12:37.843555 kernel: SELinux: Initializing. Sep 10 00:12:37.843562 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 10 00:12:37.843569 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 10 00:12:37.843576 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 10 00:12:37.843583 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 10 00:12:37.843590 kernel: rcu: Hierarchical SRCU implementation. Sep 10 00:12:37.843605 kernel: rcu: Max phase no-delay instances is 400. Sep 10 00:12:37.843612 kernel: Platform MSI: ITS@0x8080000 domain created Sep 10 00:12:37.843621 kernel: PCI/MSI: ITS@0x8080000 domain created Sep 10 00:12:37.843628 kernel: Remapping and enabling EFI services. Sep 10 00:12:37.843635 kernel: smp: Bringing up secondary CPUs ... 
Sep 10 00:12:37.843642 kernel: Detected PIPT I-cache on CPU1 Sep 10 00:12:37.843649 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Sep 10 00:12:37.843656 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Sep 10 00:12:37.843663 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 10 00:12:37.843670 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Sep 10 00:12:37.843676 kernel: Detected PIPT I-cache on CPU2 Sep 10 00:12:37.843683 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Sep 10 00:12:37.843692 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Sep 10 00:12:37.843699 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 10 00:12:37.843711 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Sep 10 00:12:37.843719 kernel: Detected PIPT I-cache on CPU3 Sep 10 00:12:37.843726 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Sep 10 00:12:37.843734 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Sep 10 00:12:37.843741 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 10 00:12:37.843748 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Sep 10 00:12:37.843755 kernel: smp: Brought up 1 node, 4 CPUs Sep 10 00:12:37.843764 kernel: SMP: Total of 4 processors activated. 
Sep 10 00:12:37.843771 kernel: CPU features: detected: 32-bit EL0 Support Sep 10 00:12:37.843778 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Sep 10 00:12:37.843786 kernel: CPU features: detected: Common not Private translations Sep 10 00:12:37.843793 kernel: CPU features: detected: CRC32 instructions Sep 10 00:12:37.843800 kernel: CPU features: detected: Enhanced Virtualization Traps Sep 10 00:12:37.843807 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Sep 10 00:12:37.843815 kernel: CPU features: detected: LSE atomic instructions Sep 10 00:12:37.843824 kernel: CPU features: detected: Privileged Access Never Sep 10 00:12:37.843832 kernel: CPU features: detected: RAS Extension Support Sep 10 00:12:37.843839 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Sep 10 00:12:37.843847 kernel: CPU: All CPU(s) started at EL1 Sep 10 00:12:37.843854 kernel: alternatives: applying system-wide alternatives Sep 10 00:12:37.843861 kernel: devtmpfs: initialized Sep 10 00:12:37.843869 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 10 00:12:37.843877 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Sep 10 00:12:37.843884 kernel: pinctrl core: initialized pinctrl subsystem Sep 10 00:12:37.843893 kernel: SMBIOS 3.0.0 present. 
Sep 10 00:12:37.843901 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023 Sep 10 00:12:37.843908 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 10 00:12:37.843916 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Sep 10 00:12:37.843923 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Sep 10 00:12:37.843933 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Sep 10 00:12:37.843941 kernel: audit: initializing netlink subsys (disabled) Sep 10 00:12:37.843948 kernel: audit: type=2000 audit(0.021:1): state=initialized audit_enabled=0 res=1 Sep 10 00:12:37.843959 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 10 00:12:37.843968 kernel: cpuidle: using governor menu Sep 10 00:12:37.843976 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Sep 10 00:12:37.843983 kernel: ASID allocator initialised with 32768 entries Sep 10 00:12:37.843991 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 10 00:12:37.843999 kernel: Serial: AMBA PL011 UART driver Sep 10 00:12:37.844006 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Sep 10 00:12:37.844014 kernel: Modules: 0 pages in range for non-PLT usage Sep 10 00:12:37.844023 kernel: Modules: 509008 pages in range for PLT usage Sep 10 00:12:37.844031 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 10 00:12:37.844041 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Sep 10 00:12:37.844050 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Sep 10 00:12:37.844060 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Sep 10 00:12:37.844067 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 10 00:12:37.844075 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Sep 10 00:12:37.844082 kernel: HugeTLB: registered 64.0 KiB page size, 
pre-allocated 0 pages Sep 10 00:12:37.844090 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Sep 10 00:12:37.844097 kernel: ACPI: Added _OSI(Module Device) Sep 10 00:12:37.844104 kernel: ACPI: Added _OSI(Processor Device) Sep 10 00:12:37.844115 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 10 00:12:37.844122 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 10 00:12:37.844130 kernel: ACPI: Interpreter enabled Sep 10 00:12:37.844138 kernel: ACPI: Using GIC for interrupt routing Sep 10 00:12:37.844145 kernel: ACPI: MCFG table detected, 1 entries Sep 10 00:12:37.844166 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Sep 10 00:12:37.844178 kernel: printk: console [ttyAMA0] enabled Sep 10 00:12:37.844188 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 10 00:12:37.844344 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 10 00:12:37.844427 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Sep 10 00:12:37.844498 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Sep 10 00:12:37.844707 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Sep 10 00:12:37.844779 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Sep 10 00:12:37.844789 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Sep 10 00:12:37.844796 kernel: PCI host bridge to bus 0000:00 Sep 10 00:12:37.844869 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Sep 10 00:12:37.844937 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Sep 10 00:12:37.844998 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Sep 10 00:12:37.845058 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 10 00:12:37.845140 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 
0x060000 Sep 10 00:12:37.845221 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Sep 10 00:12:37.845291 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Sep 10 00:12:37.845362 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Sep 10 00:12:37.845430 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Sep 10 00:12:37.845508 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Sep 10 00:12:37.845586 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Sep 10 00:12:37.845669 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Sep 10 00:12:37.845734 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Sep 10 00:12:37.845795 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Sep 10 00:12:37.845872 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Sep 10 00:12:37.845881 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Sep 10 00:12:37.845889 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Sep 10 00:12:37.845896 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Sep 10 00:12:37.845904 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Sep 10 00:12:37.845911 kernel: iommu: Default domain type: Translated Sep 10 00:12:37.845918 kernel: iommu: DMA domain TLB invalidation policy: strict mode Sep 10 00:12:37.845926 kernel: efivars: Registered efivars operations Sep 10 00:12:37.845933 kernel: vgaarb: loaded Sep 10 00:12:37.845942 kernel: clocksource: Switched to clocksource arch_sys_counter Sep 10 00:12:37.845949 kernel: VFS: Disk quotas dquot_6.6.0 Sep 10 00:12:37.845957 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 10 00:12:37.845964 kernel: pnp: PnP ACPI init Sep 10 00:12:37.846041 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Sep 10 00:12:37.846052 kernel: pnp: PnP ACPI: found 1 devices Sep 10 
00:12:37.846059 kernel: NET: Registered PF_INET protocol family Sep 10 00:12:37.846067 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 10 00:12:37.846076 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 10 00:12:37.846084 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 10 00:12:37.846091 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 10 00:12:37.846099 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 10 00:12:37.846106 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 10 00:12:37.846114 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 10 00:12:37.846121 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 10 00:12:37.846129 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 10 00:12:37.846136 kernel: PCI: CLS 0 bytes, default 64 Sep 10 00:12:37.846145 kernel: kvm [1]: HYP mode not available Sep 10 00:12:37.846152 kernel: Initialise system trusted keyrings Sep 10 00:12:37.846159 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 10 00:12:37.846166 kernel: Key type asymmetric registered Sep 10 00:12:37.846174 kernel: Asymmetric key parser 'x509' registered Sep 10 00:12:37.846181 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Sep 10 00:12:37.846188 kernel: io scheduler mq-deadline registered Sep 10 00:12:37.846195 kernel: io scheduler kyber registered Sep 10 00:12:37.846203 kernel: io scheduler bfq registered Sep 10 00:12:37.846211 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Sep 10 00:12:37.846219 kernel: ACPI: button: Power Button [PWRB] Sep 10 00:12:37.846226 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Sep 10 00:12:37.846294 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Sep 10 
00:12:37.846304 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 10 00:12:37.846312 kernel: thunder_xcv, ver 1.0 Sep 10 00:12:37.846319 kernel: thunder_bgx, ver 1.0 Sep 10 00:12:37.846326 kernel: nicpf, ver 1.0 Sep 10 00:12:37.846334 kernel: nicvf, ver 1.0 Sep 10 00:12:37.846409 kernel: rtc-efi rtc-efi.0: registered as rtc0 Sep 10 00:12:37.846474 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-10T00:12:37 UTC (1757463157) Sep 10 00:12:37.846484 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 10 00:12:37.846492 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Sep 10 00:12:37.846508 kernel: watchdog: Delayed init of the lockup detector failed: -19 Sep 10 00:12:37.846517 kernel: watchdog: Hard watchdog permanently disabled Sep 10 00:12:37.846524 kernel: NET: Registered PF_INET6 protocol family Sep 10 00:12:37.846531 kernel: Segment Routing with IPv6 Sep 10 00:12:37.846541 kernel: In-situ OAM (IOAM) with IPv6 Sep 10 00:12:37.846548 kernel: NET: Registered PF_PACKET protocol family Sep 10 00:12:37.846555 kernel: Key type dns_resolver registered Sep 10 00:12:37.846562 kernel: registered taskstats version 1 Sep 10 00:12:37.846570 kernel: Loading compiled-in X.509 certificates Sep 10 00:12:37.846577 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.104-flatcar: e85a1044dffeb2f9696d4659bfe36fdfbb79b10c' Sep 10 00:12:37.846585 kernel: Key type .fscrypt registered Sep 10 00:12:37.846592 kernel: Key type fscrypt-provisioning registered Sep 10 00:12:37.846606 kernel: ima: No TPM chip found, activating TPM-bypass! 
Sep 10 00:12:37.846615 kernel: ima: Allocated hash algorithm: sha1 Sep 10 00:12:37.846622 kernel: ima: No architecture policies found Sep 10 00:12:37.846630 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Sep 10 00:12:37.846637 kernel: clk: Disabling unused clocks Sep 10 00:12:37.846644 kernel: Freeing unused kernel memory: 39424K Sep 10 00:12:37.846652 kernel: Run /init as init process Sep 10 00:12:37.846659 kernel: with arguments: Sep 10 00:12:37.846667 kernel: /init Sep 10 00:12:37.846674 kernel: with environment: Sep 10 00:12:37.846682 kernel: HOME=/ Sep 10 00:12:37.846689 kernel: TERM=linux Sep 10 00:12:37.846709 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 10 00:12:37.846719 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 10 00:12:37.846728 systemd[1]: Detected virtualization kvm. Sep 10 00:12:37.846736 systemd[1]: Detected architecture arm64. Sep 10 00:12:37.846744 systemd[1]: Running in initrd. Sep 10 00:12:37.846752 systemd[1]: No hostname configured, using default hostname. Sep 10 00:12:37.846761 systemd[1]: Hostname set to . Sep 10 00:12:37.846769 systemd[1]: Initializing machine ID from VM UUID. Sep 10 00:12:37.846776 systemd[1]: Queued start job for default target initrd.target. Sep 10 00:12:37.846784 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 10 00:12:37.846792 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 10 00:12:37.846801 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Sep 10 00:12:37.846809 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 10 00:12:37.846818 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 10 00:12:37.846826 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 10 00:12:37.846836 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 10 00:12:37.846844 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 10 00:12:37.846852 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 10 00:12:37.846859 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 10 00:12:37.846867 systemd[1]: Reached target paths.target - Path Units. Sep 10 00:12:37.846877 systemd[1]: Reached target slices.target - Slice Units. Sep 10 00:12:37.846884 systemd[1]: Reached target swap.target - Swaps. Sep 10 00:12:37.846892 systemd[1]: Reached target timers.target - Timer Units. Sep 10 00:12:37.846900 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 10 00:12:37.846908 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 10 00:12:37.846916 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 10 00:12:37.846924 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 10 00:12:37.846932 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 10 00:12:37.846940 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 10 00:12:37.846949 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 10 00:12:37.846957 systemd[1]: Reached target sockets.target - Socket Units. 
Sep 10 00:12:37.846965 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 10 00:12:37.846973 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 10 00:12:37.846980 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 10 00:12:37.846988 systemd[1]: Starting systemd-fsck-usr.service... Sep 10 00:12:37.846996 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 10 00:12:37.847004 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 10 00:12:37.847013 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 10 00:12:37.847021 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 10 00:12:37.847029 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 10 00:12:37.847037 systemd[1]: Finished systemd-fsck-usr.service. Sep 10 00:12:37.847063 systemd-journald[239]: Collecting audit messages is disabled. Sep 10 00:12:37.847084 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 10 00:12:37.847093 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 10 00:12:37.847101 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 10 00:12:37.847109 systemd-journald[239]: Journal started Sep 10 00:12:37.847128 systemd-journald[239]: Runtime Journal (/run/log/journal/b4bf189aa02c432c931aa162b0849f8a) is 5.9M, max 47.3M, 41.4M free. Sep 10 00:12:37.834469 systemd-modules-load[240]: Inserted module 'overlay' Sep 10 00:12:37.848892 systemd[1]: Started systemd-journald.service - Journal Service. 
Sep 10 00:12:37.849560 kernel: Bridge firewalling registered Sep 10 00:12:37.850032 systemd-modules-load[240]: Inserted module 'br_netfilter' Sep 10 00:12:37.850375 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 10 00:12:37.852575 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 10 00:12:37.866648 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 10 00:12:37.868230 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 10 00:12:37.871764 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 10 00:12:37.875717 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 10 00:12:37.879546 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 10 00:12:37.881230 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 10 00:12:37.883780 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 10 00:12:37.884909 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 10 00:12:37.896710 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 10 00:12:37.898612 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Sep 10 00:12:37.907622 dracut-cmdline[278]: dracut-dracut-053 Sep 10 00:12:37.910029 dracut-cmdline[278]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=9519a2b52292e68cf8bced92b7c71fffa7243efe8697174d43c360b4308144c8 Sep 10 00:12:37.923405 systemd-resolved[279]: Positive Trust Anchors: Sep 10 00:12:37.923422 systemd-resolved[279]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 10 00:12:37.923454 systemd-resolved[279]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 10 00:12:37.928175 systemd-resolved[279]: Defaulting to hostname 'linux'. Sep 10 00:12:37.929822 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 10 00:12:37.931519 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 10 00:12:37.974525 kernel: SCSI subsystem initialized Sep 10 00:12:37.979517 kernel: Loading iSCSI transport class v2.0-870. Sep 10 00:12:37.986522 kernel: iscsi: registered transport (tcp) Sep 10 00:12:37.999535 kernel: iscsi: registered transport (qla4xxx) Sep 10 00:12:37.999589 kernel: QLogic iSCSI HBA Driver Sep 10 00:12:38.041217 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Sep 10 00:12:38.051677 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 10 00:12:38.066648 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 10 00:12:38.066692 kernel: device-mapper: uevent: version 1.0.3 Sep 10 00:12:38.067528 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 10 00:12:38.112541 kernel: raid6: neonx8 gen() 15626 MB/s Sep 10 00:12:38.129519 kernel: raid6: neonx4 gen() 15616 MB/s Sep 10 00:12:38.146525 kernel: raid6: neonx2 gen() 13186 MB/s Sep 10 00:12:38.163526 kernel: raid6: neonx1 gen() 10501 MB/s Sep 10 00:12:38.180515 kernel: raid6: int64x8 gen() 6950 MB/s Sep 10 00:12:38.197525 kernel: raid6: int64x4 gen() 7346 MB/s Sep 10 00:12:38.214526 kernel: raid6: int64x2 gen() 6128 MB/s Sep 10 00:12:38.231525 kernel: raid6: int64x1 gen() 5058 MB/s Sep 10 00:12:38.231551 kernel: raid6: using algorithm neonx8 gen() 15626 MB/s Sep 10 00:12:38.248532 kernel: raid6: .... xor() 12043 MB/s, rmw enabled Sep 10 00:12:38.248558 kernel: raid6: using neon recovery algorithm Sep 10 00:12:38.253517 kernel: xor: measuring software checksum speed Sep 10 00:12:38.253532 kernel: 8regs : 19759 MB/sec Sep 10 00:12:38.255009 kernel: 32regs : 17643 MB/sec Sep 10 00:12:38.255022 kernel: arm64_neon : 26945 MB/sec Sep 10 00:12:38.255031 kernel: xor: using function: arm64_neon (26945 MB/sec) Sep 10 00:12:38.302522 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 10 00:12:38.313307 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 10 00:12:38.324690 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 10 00:12:38.335628 systemd-udevd[462]: Using default interface naming scheme 'v255'. Sep 10 00:12:38.338818 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Sep 10 00:12:38.342665 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 10 00:12:38.356265 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation
Sep 10 00:12:38.380993 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 10 00:12:38.392661 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 10 00:12:38.432703 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 10 00:12:38.438676 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 10 00:12:38.449751 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 10 00:12:38.452186 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 10 00:12:38.453622 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 10 00:12:38.455747 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 10 00:12:38.461743 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 10 00:12:38.473471 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 10 00:12:38.486185 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Sep 10 00:12:38.486342 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 10 00:12:38.489969 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 10 00:12:38.496187 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 10 00:12:38.496209 kernel: GPT:9289727 != 19775487
Sep 10 00:12:38.496219 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 10 00:12:38.496229 kernel: GPT:9289727 != 19775487
Sep 10 00:12:38.496238 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 10 00:12:38.496247 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 10 00:12:38.490076 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 10 00:12:38.496223 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 10 00:12:38.497974 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 10 00:12:38.498114 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 10 00:12:38.499465 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 10 00:12:38.505826 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 10 00:12:38.512517 kernel: BTRFS: device fsid 56932cd9-691c-4ccb-8da6-e6508edf5f69 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (515)
Sep 10 00:12:38.514526 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (507)
Sep 10 00:12:38.522985 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 10 00:12:38.524182 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 10 00:12:38.535099 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 10 00:12:38.538922 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 10 00:12:38.539927 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 10 00:12:38.545112 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 10 00:12:38.555686 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 10 00:12:38.557788 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 10 00:12:38.561699 disk-uuid[553]: Primary Header is updated.
Sep 10 00:12:38.561699 disk-uuid[553]: Secondary Entries is updated.
Sep 10 00:12:38.561699 disk-uuid[553]: Secondary Header is updated.
Sep 10 00:12:38.566585 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 10 00:12:38.568525 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 10 00:12:38.571572 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 10 00:12:38.575243 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 10 00:12:39.571519 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 10 00:12:39.571913 disk-uuid[555]: The operation has completed successfully.
Sep 10 00:12:39.593618 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 10 00:12:39.593707 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 10 00:12:39.617643 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 10 00:12:39.620288 sh[576]: Success
Sep 10 00:12:39.629546 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Sep 10 00:12:39.654567 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 10 00:12:39.668751 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 10 00:12:39.670449 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 10 00:12:39.680283 kernel: BTRFS info (device dm-0): first mount of filesystem 56932cd9-691c-4ccb-8da6-e6508edf5f69
Sep 10 00:12:39.680328 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 10 00:12:39.680349 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 10 00:12:39.680368 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 10 00:12:39.681510 kernel: BTRFS info (device dm-0): using free space tree
Sep 10 00:12:39.684532 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 10 00:12:39.685626 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 10 00:12:39.695717 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 10 00:12:39.697018 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 10 00:12:39.704904 kernel: BTRFS info (device vda6): first mount of filesystem 1f9a2be6-c1a7-433d-9dbe-1e5d2ce6fc09
Sep 10 00:12:39.704941 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 10 00:12:39.704952 kernel: BTRFS info (device vda6): using free space tree
Sep 10 00:12:39.707883 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 10 00:12:39.714549 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 10 00:12:39.715548 kernel: BTRFS info (device vda6): last unmount of filesystem 1f9a2be6-c1a7-433d-9dbe-1e5d2ce6fc09
Sep 10 00:12:39.720711 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 10 00:12:39.727812 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 10 00:12:39.792789 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 10 00:12:39.793175 ignition[669]: Ignition 2.19.0
Sep 10 00:12:39.793181 ignition[669]: Stage: fetch-offline
Sep 10 00:12:39.793214 ignition[669]: no configs at "/usr/lib/ignition/base.d"
Sep 10 00:12:39.793221 ignition[669]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 00:12:39.793370 ignition[669]: parsed url from cmdline: ""
Sep 10 00:12:39.793373 ignition[669]: no config URL provided
Sep 10 00:12:39.793378 ignition[669]: reading system config file "/usr/lib/ignition/user.ign"
Sep 10 00:12:39.793384 ignition[669]: no config at "/usr/lib/ignition/user.ign"
Sep 10 00:12:39.793406 ignition[669]: op(1): [started] loading QEMU firmware config module
Sep 10 00:12:39.793410 ignition[669]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 10 00:12:39.801163 ignition[669]: op(1): [finished] loading QEMU firmware config module
Sep 10 00:12:39.805740 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 10 00:12:39.823459 systemd-networkd[769]: lo: Link UP
Sep 10 00:12:39.823471 systemd-networkd[769]: lo: Gained carrier
Sep 10 00:12:39.824147 systemd-networkd[769]: Enumeration completed
Sep 10 00:12:39.824537 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 10 00:12:39.825756 systemd[1]: Reached target network.target - Network.
Sep 10 00:12:39.827202 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 10 00:12:39.827206 systemd-networkd[769]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 10 00:12:39.828014 systemd-networkd[769]: eth0: Link UP
Sep 10 00:12:39.828017 systemd-networkd[769]: eth0: Gained carrier
Sep 10 00:12:39.828025 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 10 00:12:39.845556 systemd-networkd[769]: eth0: DHCPv4 address 10.0.0.106/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 10 00:12:39.862200 ignition[669]: parsing config with SHA512: bdc930dc33017860bacc87a92e49f4d5279e349b2003073dba4d4df8388f0714a664822aaa267ca5b28c374bc4c2b92841f749b837162bbb7d9d8429f62f7322
Sep 10 00:12:39.867645 unknown[669]: fetched base config from "system"
Sep 10 00:12:39.868187 ignition[669]: fetch-offline: fetch-offline passed
Sep 10 00:12:39.867656 unknown[669]: fetched user config from "qemu"
Sep 10 00:12:39.868269 ignition[669]: Ignition finished successfully
Sep 10 00:12:39.871090 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 10 00:12:39.872100 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 10 00:12:39.879642 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 10 00:12:39.892103 ignition[773]: Ignition 2.19.0
Sep 10 00:12:39.892113 ignition[773]: Stage: kargs
Sep 10 00:12:39.892269 ignition[773]: no configs at "/usr/lib/ignition/base.d"
Sep 10 00:12:39.892278 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 00:12:39.893132 ignition[773]: kargs: kargs passed
Sep 10 00:12:39.893172 ignition[773]: Ignition finished successfully
Sep 10 00:12:39.895928 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 10 00:12:39.915655 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 10 00:12:39.924554 ignition[782]: Ignition 2.19.0
Sep 10 00:12:39.924563 ignition[782]: Stage: disks
Sep 10 00:12:39.924725 ignition[782]: no configs at "/usr/lib/ignition/base.d"
Sep 10 00:12:39.924733 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 00:12:39.925571 ignition[782]: disks: disks passed
Sep 10 00:12:39.926902 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 10 00:12:39.925623 ignition[782]: Ignition finished successfully
Sep 10 00:12:39.929603 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 10 00:12:39.930385 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 10 00:12:39.931986 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 10 00:12:39.933409 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 10 00:12:39.934991 systemd[1]: Reached target basic.target - Basic System.
Sep 10 00:12:39.945634 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 10 00:12:39.954329 systemd-fsck[793]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 10 00:12:39.958157 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 10 00:12:39.959990 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 10 00:12:40.002525 kernel: EXT4-fs (vda9): mounted filesystem 43028332-c79c-426f-8992-528d495eb356 r/w with ordered data mode. Quota mode: none.
Sep 10 00:12:40.002540 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 10 00:12:40.003557 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 10 00:12:40.016435 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 10 00:12:40.017950 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 10 00:12:40.019310 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 10 00:12:40.019347 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 10 00:12:40.024579 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (801)
Sep 10 00:12:40.019368 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 10 00:12:40.027317 kernel: BTRFS info (device vda6): first mount of filesystem 1f9a2be6-c1a7-433d-9dbe-1e5d2ce6fc09
Sep 10 00:12:40.027333 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 10 00:12:40.027343 kernel: BTRFS info (device vda6): using free space tree
Sep 10 00:12:40.025760 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 10 00:12:40.029066 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 10 00:12:40.032277 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 10 00:12:40.033921 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 10 00:12:40.067873 initrd-setup-root[825]: cut: /sysroot/etc/passwd: No such file or directory
Sep 10 00:12:40.072421 initrd-setup-root[832]: cut: /sysroot/etc/group: No such file or directory
Sep 10 00:12:40.076283 initrd-setup-root[839]: cut: /sysroot/etc/shadow: No such file or directory
Sep 10 00:12:40.079786 initrd-setup-root[846]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 10 00:12:40.153161 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 10 00:12:40.167641 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 10 00:12:40.169985 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 10 00:12:40.174519 kernel: BTRFS info (device vda6): last unmount of filesystem 1f9a2be6-c1a7-433d-9dbe-1e5d2ce6fc09
Sep 10 00:12:40.186782 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 10 00:12:40.190595 ignition[914]: INFO : Ignition 2.19.0
Sep 10 00:12:40.190595 ignition[914]: INFO : Stage: mount
Sep 10 00:12:40.192470 ignition[914]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 10 00:12:40.192470 ignition[914]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 00:12:40.192470 ignition[914]: INFO : mount: mount passed
Sep 10 00:12:40.192470 ignition[914]: INFO : Ignition finished successfully
Sep 10 00:12:40.193181 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 10 00:12:40.200617 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 10 00:12:40.678964 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 10 00:12:40.688704 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 10 00:12:40.696060 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (927)
Sep 10 00:12:40.696093 kernel: BTRFS info (device vda6): first mount of filesystem 1f9a2be6-c1a7-433d-9dbe-1e5d2ce6fc09
Sep 10 00:12:40.696105 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 10 00:12:40.696810 kernel: BTRFS info (device vda6): using free space tree
Sep 10 00:12:40.699536 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 10 00:12:40.700833 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 10 00:12:40.717886 ignition[944]: INFO : Ignition 2.19.0
Sep 10 00:12:40.717886 ignition[944]: INFO : Stage: files
Sep 10 00:12:40.719171 ignition[944]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 10 00:12:40.719171 ignition[944]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 00:12:40.719171 ignition[944]: DEBUG : files: compiled without relabeling support, skipping
Sep 10 00:12:40.722518 ignition[944]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 10 00:12:40.722518 ignition[944]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 10 00:12:40.722518 ignition[944]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 10 00:12:40.722518 ignition[944]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 10 00:12:40.726604 ignition[944]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 10 00:12:40.726604 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 10 00:12:40.726604 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Sep 10 00:12:40.722878 unknown[944]: wrote ssh authorized keys file for user: core
Sep 10 00:12:40.805628 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 10 00:12:41.068527 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 10 00:12:41.068527 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 10 00:12:41.068527 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 10 00:12:41.068527 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 10 00:12:41.068527 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 10 00:12:41.068527 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 10 00:12:41.068527 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 10 00:12:41.068527 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 10 00:12:41.068527 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 10 00:12:41.084914 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 10 00:12:41.084914 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 10 00:12:41.084914 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 10 00:12:41.084914 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 10 00:12:41.084914 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 10 00:12:41.084914 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Sep 10 00:12:41.271685 systemd-networkd[769]: eth0: Gained IPv6LL
Sep 10 00:12:41.515646 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 10 00:12:42.033009 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 10 00:12:42.033009 ignition[944]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 10 00:12:42.035964 ignition[944]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 10 00:12:42.035964 ignition[944]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 10 00:12:42.035964 ignition[944]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 10 00:12:42.035964 ignition[944]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Sep 10 00:12:42.035964 ignition[944]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 10 00:12:42.035964 ignition[944]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 10 00:12:42.035964 ignition[944]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Sep 10 00:12:42.035964 ignition[944]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Sep 10 00:12:42.052063 ignition[944]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 10 00:12:42.055786 ignition[944]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 10 00:12:42.057638 ignition[944]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 10 00:12:42.057638 ignition[944]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Sep 10 00:12:42.057638 ignition[944]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Sep 10 00:12:42.057638 ignition[944]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 10 00:12:42.057638 ignition[944]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 10 00:12:42.057638 ignition[944]: INFO : files: files passed
Sep 10 00:12:42.057638 ignition[944]: INFO : Ignition finished successfully
Sep 10 00:12:42.058303 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 10 00:12:42.070721 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 10 00:12:42.072719 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 10 00:12:42.073974 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 10 00:12:42.074064 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 10 00:12:42.079479 initrd-setup-root-after-ignition[972]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 10 00:12:42.082824 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 10 00:12:42.082824 initrd-setup-root-after-ignition[974]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 10 00:12:42.085260 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 10 00:12:42.086090 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 10 00:12:42.087651 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 10 00:12:42.096628 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 10 00:12:42.115493 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 10 00:12:42.115619 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 10 00:12:42.117411 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 10 00:12:42.118800 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 10 00:12:42.120191 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 10 00:12:42.120886 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 10 00:12:42.135570 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 10 00:12:42.141666 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 10 00:12:42.148786 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 10 00:12:42.149715 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 10 00:12:42.151277 systemd[1]: Stopped target timers.target - Timer Units.
Sep 10 00:12:42.152607 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 10 00:12:42.152711 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 10 00:12:42.154605 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 10 00:12:42.156268 systemd[1]: Stopped target basic.target - Basic System.
Sep 10 00:12:42.157481 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 10 00:12:42.158832 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 10 00:12:42.160348 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 10 00:12:42.161885 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 10 00:12:42.163306 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 10 00:12:42.164784 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 10 00:12:42.166276 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 10 00:12:42.167588 systemd[1]: Stopped target swap.target - Swaps.
Sep 10 00:12:42.168808 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 10 00:12:42.168909 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 10 00:12:42.170675 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 10 00:12:42.172135 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 10 00:12:42.173631 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 10 00:12:42.174558 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 10 00:12:42.175892 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 10 00:12:42.175995 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 10 00:12:42.178172 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 10 00:12:42.178279 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 10 00:12:42.179773 systemd[1]: Stopped target paths.target - Path Units.
Sep 10 00:12:42.181013 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 10 00:12:42.186549 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 10 00:12:42.188449 systemd[1]: Stopped target slices.target - Slice Units.
Sep 10 00:12:42.189254 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 10 00:12:42.190526 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 10 00:12:42.190617 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 10 00:12:42.191822 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 10 00:12:42.191894 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 10 00:12:42.193059 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 10 00:12:42.193160 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 10 00:12:42.194430 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 10 00:12:42.194538 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 10 00:12:42.204744 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 10 00:12:42.205406 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 10 00:12:42.205537 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 10 00:12:42.210669 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 10 00:12:42.211290 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 10 00:12:42.211408 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 10 00:12:42.212785 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 10 00:12:42.217527 ignition[998]: INFO : Ignition 2.19.0
Sep 10 00:12:42.217527 ignition[998]: INFO : Stage: umount
Sep 10 00:12:42.217527 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 10 00:12:42.217527 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 00:12:42.212883 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 10 00:12:42.223092 ignition[998]: INFO : umount: umount passed
Sep 10 00:12:42.223092 ignition[998]: INFO : Ignition finished successfully
Sep 10 00:12:42.218132 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 10 00:12:42.219177 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 10 00:12:42.220697 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 10 00:12:42.220784 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 10 00:12:42.223435 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 10 00:12:42.223860 systemd[1]: Stopped target network.target - Network.
Sep 10 00:12:42.225435 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 10 00:12:42.225494 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 10 00:12:42.226875 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 10 00:12:42.226915 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 10 00:12:42.228302 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 10 00:12:42.228341 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 10 00:12:42.229567 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 10 00:12:42.229616 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 10 00:12:42.231274 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 10 00:12:42.232583 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 10 00:12:42.238550 systemd-networkd[769]: eth0: DHCPv6 lease lost
Sep 10 00:12:42.238833 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 10 00:12:42.238942 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 10 00:12:42.242268 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 10 00:12:42.242384 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 10 00:12:42.244571 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 10 00:12:42.244632 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 10 00:12:42.254632 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 10 00:12:42.255287 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 10 00:12:42.255344 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 10 00:12:42.256906 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 10 00:12:42.256947 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 10 00:12:42.258387 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 10 00:12:42.258425 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 10 00:12:42.260055 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 10 00:12:42.260091 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 10 00:12:42.261612 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 10 00:12:42.271735 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 10 00:12:42.271866 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 10 00:12:42.281261 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 10 00:12:42.281412 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 10 00:12:42.283171 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 10 00:12:42.283208 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 10 00:12:42.284493 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 10 00:12:42.284536 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 10 00:12:42.285981 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 10 00:12:42.286024 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 10 00:12:42.288140 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 10 00:12:42.288182 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 10 00:12:42.290166 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 10 00:12:42.290207 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 10 00:12:42.302668 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 10 00:12:42.303468 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 10 00:12:42.303538 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 10 00:12:42.305273 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 10 00:12:42.305313 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 10 00:12:42.307026 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 10 00:12:42.307104 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 10 00:12:42.309521 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 10 00:12:42.309615 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 10 00:12:42.311425 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 10 00:12:42.312245 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 10 00:12:42.312303 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 10 00:12:42.314244 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 10 00:12:42.322730 systemd[1]: Switching root. Sep 10 00:12:42.345346 systemd-journald[239]: Journal stopped Sep 10 00:12:42.986978 systemd-journald[239]: Received SIGTERM from PID 1 (systemd). 
Sep 10 00:12:42.987039 kernel: SELinux: policy capability network_peer_controls=1
Sep 10 00:12:42.987051 kernel: SELinux: policy capability open_perms=1
Sep 10 00:12:42.987064 kernel: SELinux: policy capability extended_socket_class=1
Sep 10 00:12:42.987074 kernel: SELinux: policy capability always_check_network=0
Sep 10 00:12:42.987084 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 10 00:12:42.987093 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 10 00:12:42.987103 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 10 00:12:42.987112 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 10 00:12:42.987121 kernel: audit: type=1403 audit(1757463162.484:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 10 00:12:42.987136 systemd[1]: Successfully loaded SELinux policy in 29.057ms.
Sep 10 00:12:42.987156 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 8.974ms.
Sep 10 00:12:42.987169 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 10 00:12:42.987180 systemd[1]: Detected virtualization kvm.
Sep 10 00:12:42.987191 systemd[1]: Detected architecture arm64.
Sep 10 00:12:42.987201 systemd[1]: Detected first boot.
Sep 10 00:12:42.987212 systemd[1]: Initializing machine ID from VM UUID.
Sep 10 00:12:42.987224 zram_generator::config[1042]: No configuration found.
Sep 10 00:12:42.987235 systemd[1]: Populated /etc with preset unit settings.
Sep 10 00:12:42.987246 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 10 00:12:42.987259 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 10 00:12:42.987269 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 10 00:12:42.987281 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 10 00:12:42.987291 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 10 00:12:42.987302 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 10 00:12:42.987313 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 10 00:12:42.987323 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 10 00:12:42.987334 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 10 00:12:42.987344 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 10 00:12:42.987356 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 10 00:12:42.987367 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 10 00:12:42.987378 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 10 00:12:42.987388 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 10 00:12:42.987399 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 10 00:12:42.987409 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 10 00:12:42.987420 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 10 00:12:42.987430 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Sep 10 00:12:42.987441 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 10 00:12:42.987454 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 10 00:12:42.987464 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 10 00:12:42.987475 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 10 00:12:42.987485 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 10 00:12:42.987496 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 10 00:12:42.987514 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 10 00:12:42.987526 systemd[1]: Reached target slices.target - Slice Units.
Sep 10 00:12:42.987538 systemd[1]: Reached target swap.target - Swaps.
Sep 10 00:12:42.987549 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 10 00:12:42.987559 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 10 00:12:42.987570 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 10 00:12:42.987586 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 10 00:12:42.987600 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 10 00:12:42.987610 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 10 00:12:42.987621 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 10 00:12:42.987631 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 10 00:12:42.987641 systemd[1]: Mounting media.mount - External Media Directory...
Sep 10 00:12:42.987654 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 10 00:12:42.987664 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 10 00:12:42.987675 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 10 00:12:42.987686 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 10 00:12:42.987697 systemd[1]: Reached target machines.target - Containers.
Sep 10 00:12:42.987707 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 10 00:12:42.987718 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 10 00:12:42.987729 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 10 00:12:42.987741 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 10 00:12:42.987752 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 10 00:12:42.987762 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 10 00:12:42.987772 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 10 00:12:42.987783 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 10 00:12:42.987793 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 10 00:12:42.987804 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 10 00:12:42.987814 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 10 00:12:42.987826 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 10 00:12:42.987837 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 10 00:12:42.987847 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 10 00:12:42.987860 kernel: fuse: init (API version 7.39)
Sep 10 00:12:42.987870 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 10 00:12:42.987880 kernel: loop: module loaded
Sep 10 00:12:42.987890 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 10 00:12:42.987901 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 10 00:12:42.987910 kernel: ACPI: bus type drm_connector registered
Sep 10 00:12:42.987923 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 10 00:12:42.987934 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 10 00:12:42.987944 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 10 00:12:42.987955 systemd[1]: Stopped verity-setup.service.
Sep 10 00:12:42.987980 systemd-journald[1106]: Collecting audit messages is disabled.
Sep 10 00:12:42.988002 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 10 00:12:42.988012 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 10 00:12:42.988023 systemd-journald[1106]: Journal started
Sep 10 00:12:42.988045 systemd-journald[1106]: Runtime Journal (/run/log/journal/b4bf189aa02c432c931aa162b0849f8a) is 5.9M, max 47.3M, 41.4M free.
Sep 10 00:12:42.816099 systemd[1]: Queued start job for default target multi-user.target.
Sep 10 00:12:42.834326 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 10 00:12:42.834686 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 10 00:12:42.991037 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 10 00:12:42.991644 systemd[1]: Mounted media.mount - External Media Directory.
Sep 10 00:12:42.992473 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 10 00:12:42.993438 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 10 00:12:42.994441 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 10 00:12:42.995457 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 10 00:12:42.996679 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 10 00:12:42.997861 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 10 00:12:42.997988 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 10 00:12:42.999121 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 10 00:12:42.999257 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 10 00:12:43.000375 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 10 00:12:43.000526 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 10 00:12:43.001536 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 10 00:12:43.001684 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 10 00:12:43.002795 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 10 00:12:43.002916 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 10 00:12:43.004153 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 10 00:12:43.004283 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 10 00:12:43.005381 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 10 00:12:43.006557 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 10 00:12:43.007712 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 10 00:12:43.019525 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 10 00:12:43.031637 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 10 00:12:43.033473 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 10 00:12:43.034281 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 10 00:12:43.034308 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 10 00:12:43.036009 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Sep 10 00:12:43.038067 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 10 00:12:43.039916 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 10 00:12:43.040775 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 10 00:12:43.042060 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 10 00:12:43.043810 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 10 00:12:43.044697 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 10 00:12:43.045730 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 10 00:12:43.046658 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 10 00:12:43.050703 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 10 00:12:43.051751 systemd-journald[1106]: Time spent on flushing to /var/log/journal/b4bf189aa02c432c931aa162b0849f8a is 11.674ms for 853 entries.
Sep 10 00:12:43.051751 systemd-journald[1106]: System Journal (/var/log/journal/b4bf189aa02c432c931aa162b0849f8a) is 8.0M, max 195.6M, 187.6M free.
Sep 10 00:12:43.070558 systemd-journald[1106]: Received client request to flush runtime journal.
Sep 10 00:12:43.070618 kernel: loop0: detected capacity change from 0 to 114328
Sep 10 00:12:43.054795 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 10 00:12:43.063972 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 10 00:12:43.067892 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 10 00:12:43.069703 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 10 00:12:43.070903 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 10 00:12:43.072386 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 10 00:12:43.074003 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 10 00:12:43.081898 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 10 00:12:43.082516 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 10 00:12:43.084439 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 10 00:12:43.093149 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 10 00:12:43.104042 kernel: loop1: detected capacity change from 0 to 203944
Sep 10 00:12:43.103666 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Sep 10 00:12:43.106742 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 10 00:12:43.110916 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 10 00:12:43.115182 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 10 00:12:43.125829 udevadm[1169]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Sep 10 00:12:43.137466 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 10 00:12:43.138166 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Sep 10 00:12:43.144521 kernel: loop2: detected capacity change from 0 to 114432
Sep 10 00:12:43.153320 systemd-tmpfiles[1172]: ACLs are not supported, ignoring.
Sep 10 00:12:43.153629 systemd-tmpfiles[1172]: ACLs are not supported, ignoring.
Sep 10 00:12:43.157907 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 10 00:12:43.170586 kernel: loop3: detected capacity change from 0 to 114328
Sep 10 00:12:43.175514 kernel: loop4: detected capacity change from 0 to 203944
Sep 10 00:12:43.180524 kernel: loop5: detected capacity change from 0 to 114432
Sep 10 00:12:43.183392 (sd-merge)[1177]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 10 00:12:43.183791 (sd-merge)[1177]: Merged extensions into '/usr'.
Sep 10 00:12:43.188421 systemd[1]: Reloading requested from client PID 1153 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 10 00:12:43.188439 systemd[1]: Reloading...
Sep 10 00:12:43.243563 zram_generator::config[1202]: No configuration found.
Sep 10 00:12:43.313956 ldconfig[1148]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 10 00:12:43.339162 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 10 00:12:43.375382 systemd[1]: Reloading finished in 186 ms.
Sep 10 00:12:43.409256 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 10 00:12:43.412857 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 10 00:12:43.422823 systemd[1]: Starting ensure-sysext.service...
Sep 10 00:12:43.424644 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 10 00:12:43.430212 systemd[1]: Reloading requested from client PID 1238 ('systemctl') (unit ensure-sysext.service)...
Sep 10 00:12:43.430227 systemd[1]: Reloading...
Sep 10 00:12:43.440810 systemd-tmpfiles[1239]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 10 00:12:43.441338 systemd-tmpfiles[1239]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 10 00:12:43.442078 systemd-tmpfiles[1239]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 10 00:12:43.442378 systemd-tmpfiles[1239]: ACLs are not supported, ignoring.
Sep 10 00:12:43.442490 systemd-tmpfiles[1239]: ACLs are not supported, ignoring.
Sep 10 00:12:43.444800 systemd-tmpfiles[1239]: Detected autofs mount point /boot during canonicalization of boot.
Sep 10 00:12:43.444895 systemd-tmpfiles[1239]: Skipping /boot
Sep 10 00:12:43.451648 systemd-tmpfiles[1239]: Detected autofs mount point /boot during canonicalization of boot.
Sep 10 00:12:43.451732 systemd-tmpfiles[1239]: Skipping /boot
Sep 10 00:12:43.480532 zram_generator::config[1269]: No configuration found.
Sep 10 00:12:43.556813 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 10 00:12:43.592246 systemd[1]: Reloading finished in 161 ms.
Sep 10 00:12:43.607402 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 10 00:12:43.617896 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 10 00:12:43.624900 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 10 00:12:43.627058 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 10 00:12:43.629774 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 10 00:12:43.633243 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 10 00:12:43.637379 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 10 00:12:43.640893 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 10 00:12:43.645710 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 10 00:12:43.657115 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 10 00:12:43.659172 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 10 00:12:43.661816 systemd-udevd[1313]: Using default interface naming scheme 'v255'.
Sep 10 00:12:43.662997 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 10 00:12:43.664807 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 10 00:12:43.666768 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 10 00:12:43.669241 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 10 00:12:43.671100 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 10 00:12:43.671240 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 10 00:12:43.672966 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 10 00:12:43.673083 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 10 00:12:43.674933 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 10 00:12:43.675063 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 10 00:12:43.677725 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 10 00:12:43.684237 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 10 00:12:43.693202 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 10 00:12:43.703586 systemd[1]: Finished ensure-sysext.service.
Sep 10 00:12:43.704901 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 10 00:12:43.709589 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 10 00:12:43.711130 augenrules[1355]: No rules
Sep 10 00:12:43.721721 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 10 00:12:43.723550 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 10 00:12:43.725191 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 10 00:12:43.729678 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 10 00:12:43.730549 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 10 00:12:43.732066 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 10 00:12:43.735482 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 10 00:12:43.737487 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 10 00:12:43.738561 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 10 00:12:43.740530 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 10 00:12:43.743096 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 10 00:12:43.743285 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 10 00:12:43.744442 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 10 00:12:43.744635 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 10 00:12:43.745735 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 10 00:12:43.745886 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 10 00:12:43.752857 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 10 00:12:43.754403 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 10 00:12:43.754739 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 10 00:12:43.758432 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Sep 10 00:12:43.760416 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 10 00:12:43.760476 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 10 00:12:43.762536 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1341)
Sep 10 00:12:43.786433 systemd-resolved[1307]: Positive Trust Anchors:
Sep 10 00:12:43.786451 systemd-resolved[1307]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 10 00:12:43.786483 systemd-resolved[1307]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 10 00:12:43.792838 systemd-resolved[1307]: Defaulting to hostname 'linux'.
Sep 10 00:12:43.795180 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 10 00:12:43.796894 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 10 00:12:43.808386 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 10 00:12:43.817674 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 10 00:12:43.818130 systemd-networkd[1370]: lo: Link UP
Sep 10 00:12:43.818144 systemd-networkd[1370]: lo: Gained carrier
Sep 10 00:12:43.819087 systemd-networkd[1370]: Enumeration completed
Sep 10 00:12:43.819167 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 10 00:12:43.819829 systemd-networkd[1370]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 10 00:12:43.819841 systemd-networkd[1370]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 10 00:12:43.820096 systemd[1]: Reached target network.target - Network.
Sep 10 00:12:43.820883 systemd-networkd[1370]: eth0: Link UP
Sep 10 00:12:43.820894 systemd-networkd[1370]: eth0: Gained carrier
Sep 10 00:12:43.820907 systemd-networkd[1370]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 10 00:12:43.822405 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 10 00:12:43.827643 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 10 00:12:43.828954 systemd[1]: Reached target time-set.target - System Time Set.
Sep 10 00:12:43.837836 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 10 00:12:43.837990 systemd-networkd[1370]: eth0: DHCPv4 address 10.0.0.106/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 10 00:12:43.838711 systemd-timesyncd[1371]: Network configuration changed, trying to establish connection.
Sep 10 00:12:43.839541 systemd-timesyncd[1371]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 10 00:12:43.839603 systemd-timesyncd[1371]: Initial clock synchronization to Wed 2025-09-10 00:12:43.622240 UTC.
Sep 10 00:12:43.870747 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 10 00:12:43.881563 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 10 00:12:43.896675 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 10 00:12:43.905536 lvm[1394]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 10 00:12:43.908287 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 10 00:12:43.930738 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 10 00:12:43.931832 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 10 00:12:43.932690 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 10 00:12:43.933526 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 10 00:12:43.934402 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 10 00:12:43.935617 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 10 00:12:43.936483 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 10 00:12:43.937398 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 10 00:12:43.938388 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 10 00:12:43.938417 systemd[1]: Reached target paths.target - Path Units.
Sep 10 00:12:43.939163 systemd[1]: Reached target timers.target - Timer Units.
Sep 10 00:12:43.940536 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 10 00:12:43.942481 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 10 00:12:43.954413 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 10 00:12:43.956335 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 10 00:12:43.957657 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 10 00:12:43.958492 systemd[1]: Reached target sockets.target - Socket Units.
Sep 10 00:12:43.959206 systemd[1]: Reached target basic.target - Basic System.
Sep 10 00:12:43.960017 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 10 00:12:43.960046 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 10 00:12:43.960908 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 10 00:12:43.962551 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 10 00:12:43.965605 lvm[1402]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 10 00:12:43.966691 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 10 00:12:43.968729 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 10 00:12:43.970644 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 10 00:12:43.971932 jq[1405]: false
Sep 10 00:12:43.972007 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 10 00:12:43.973732 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 10 00:12:43.975906 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 10 00:12:43.980042 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 10 00:12:43.983684 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 10 00:12:43.988242 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 10 00:12:43.988672 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 10 00:12:43.989190 systemd[1]: Starting update-engine.service - Update Engine...
Sep 10 00:12:43.992394 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 10 00:12:43.992470 extend-filesystems[1406]: Found loop3
Sep 10 00:12:43.992470 extend-filesystems[1406]: Found loop4
Sep 10 00:12:43.995114 extend-filesystems[1406]: Found loop5
Sep 10 00:12:43.995114 extend-filesystems[1406]: Found vda
Sep 10 00:12:43.995114 extend-filesystems[1406]: Found vda1
Sep 10 00:12:43.995114 extend-filesystems[1406]: Found vda2
Sep 10 00:12:43.995114 extend-filesystems[1406]: Found vda3
Sep 10 00:12:43.995114 extend-filesystems[1406]: Found usr
Sep 10 00:12:43.995114 extend-filesystems[1406]: Found vda4
Sep 10 00:12:43.995114 extend-filesystems[1406]: Found vda6
Sep 10 00:12:43.995114 extend-filesystems[1406]: Found vda7
Sep 10 00:12:43.995114 extend-filesystems[1406]: Found vda9
Sep 10 00:12:43.995114 extend-filesystems[1406]: Checking size of /dev/vda9
Sep 10 00:12:43.994352 dbus-daemon[1404]: [system] SELinux support is enabled
Sep 10 00:12:43.994004 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep 10 00:12:43.996583 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 10 00:12:44.009859 jq[1419]: true
Sep 10 00:12:44.000683 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 10 00:12:44.003561 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 10 00:12:44.003882 systemd[1]: motdgen.service: Deactivated successfully.
Sep 10 00:12:44.004006 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 10 00:12:44.006804 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 10 00:12:44.008653 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 10 00:12:44.015064 (ntainerd)[1430]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 10 00:12:44.017249 extend-filesystems[1406]: Resized partition /dev/vda9 Sep 10 00:12:44.019801 extend-filesystems[1437]: resize2fs 1.47.1 (20-May-2024) Sep 10 00:12:44.019310 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 10 00:12:44.020976 update_engine[1418]: I20250910 00:12:44.020438 1418 main.cc:92] Flatcar Update Engine starting Sep 10 00:12:44.019356 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 10 00:12:44.023487 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 10 00:12:44.023575 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 10 00:12:44.025462 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 10 00:12:44.025565 jq[1429]: true Sep 10 00:12:44.025730 update_engine[1418]: I20250910 00:12:44.024974 1418 update_check_scheduler.cc:74] Next update check in 3m16s Sep 10 00:12:44.027226 systemd[1]: Started update-engine.service - Update Engine. Sep 10 00:12:44.033147 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Sep 10 00:12:44.040589 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 10 00:12:44.051203 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1341) Sep 10 00:12:44.051248 extend-filesystems[1437]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 10 00:12:44.051248 extend-filesystems[1437]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 10 00:12:44.051248 extend-filesystems[1437]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 10 00:12:44.067572 tar[1426]: linux-arm64/helm Sep 10 00:12:44.053540 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 10 00:12:44.067845 extend-filesystems[1406]: Resized filesystem in /dev/vda9 Sep 10 00:12:44.053725 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 10 00:12:44.081786 systemd-logind[1417]: Watching system buttons on /dev/input/event0 (Power Button) Sep 10 00:12:44.082328 systemd-logind[1417]: New seat seat0. Sep 10 00:12:44.086944 bash[1460]: Updated "/home/core/.ssh/authorized_keys" Sep 10 00:12:44.097310 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 10 00:12:44.099538 systemd[1]: Started systemd-logind.service - User Login Management. Sep 10 00:12:44.103041 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 10 00:12:44.122827 locksmithd[1442]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 10 00:12:44.187138 containerd[1430]: time="2025-09-10T00:12:44.187048826Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 10 00:12:44.212288 containerd[1430]: time="2025-09-10T00:12:44.212250419Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Sep 10 00:12:44.213716 containerd[1430]: time="2025-09-10T00:12:44.213673932Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.104-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 10 00:12:44.213716 containerd[1430]: time="2025-09-10T00:12:44.213710511Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 10 00:12:44.213716 containerd[1430]: time="2025-09-10T00:12:44.213725260Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 10 00:12:44.213880 containerd[1430]: time="2025-09-10T00:12:44.213861264Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 10 00:12:44.213907 containerd[1430]: time="2025-09-10T00:12:44.213882355Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 10 00:12:44.213946 containerd[1430]: time="2025-09-10T00:12:44.213931854Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 10 00:12:44.213966 containerd[1430]: time="2025-09-10T00:12:44.213946641Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 10 00:12:44.214105 containerd[1430]: time="2025-09-10T00:12:44.214086420Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 10 00:12:44.214128 containerd[1430]: time="2025-09-10T00:12:44.214105371Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 10 00:12:44.214128 containerd[1430]: time="2025-09-10T00:12:44.214118174Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 10 00:12:44.214167 containerd[1430]: time="2025-09-10T00:12:44.214126969Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 10 00:12:44.214234 containerd[1430]: time="2025-09-10T00:12:44.214195457Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 10 00:12:44.214399 containerd[1430]: time="2025-09-10T00:12:44.214380376Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 10 00:12:44.214496 containerd[1430]: time="2025-09-10T00:12:44.214471202Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 10 00:12:44.214546 containerd[1430]: time="2025-09-10T00:12:44.214497041Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 10 00:12:44.214591 containerd[1430]: time="2025-09-10T00:12:44.214576698Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Sep 10 00:12:44.214642 containerd[1430]: time="2025-09-10T00:12:44.214629231Z" level=info msg="metadata content store policy set" policy=shared Sep 10 00:12:44.217802 containerd[1430]: time="2025-09-10T00:12:44.217775817Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 10 00:12:44.217851 containerd[1430]: time="2025-09-10T00:12:44.217822786Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 10 00:12:44.217851 containerd[1430]: time="2025-09-10T00:12:44.217836717Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 10 00:12:44.217905 containerd[1430]: time="2025-09-10T00:12:44.217854384Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 10 00:12:44.217905 containerd[1430]: time="2025-09-10T00:12:44.217867109Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 10 00:12:44.217993 containerd[1430]: time="2025-09-10T00:12:44.217976379Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 10 00:12:44.218250 containerd[1430]: time="2025-09-10T00:12:44.218234846Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 10 00:12:44.218336 containerd[1430]: time="2025-09-10T00:12:44.218321040Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 10 00:12:44.218363 containerd[1430]: time="2025-09-10T00:12:44.218339486Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 10 00:12:44.218363 containerd[1430]: time="2025-09-10T00:12:44.218351938Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Sep 10 00:12:44.218403 containerd[1430]: time="2025-09-10T00:12:44.218364391Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 10 00:12:44.218403 containerd[1430]: time="2025-09-10T00:12:44.218375715Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 10 00:12:44.218403 containerd[1430]: time="2025-09-10T00:12:44.218387000Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 10 00:12:44.218403 containerd[1430]: time="2025-09-10T00:12:44.218398713Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 10 00:12:44.218464 containerd[1430]: time="2025-09-10T00:12:44.218411048Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 10 00:12:44.218464 containerd[1430]: time="2025-09-10T00:12:44.218421944Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 10 00:12:44.218464 containerd[1430]: time="2025-09-10T00:12:44.218433268Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 10 00:12:44.218464 containerd[1430]: time="2025-09-10T00:12:44.218443152Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 10 00:12:44.218464 containerd[1430]: time="2025-09-10T00:12:44.218459963Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 10 00:12:44.218584 containerd[1430]: time="2025-09-10T00:12:44.218483584Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Sep 10 00:12:44.218584 containerd[1430]: time="2025-09-10T00:12:44.218495998Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 10 00:12:44.218584 containerd[1430]: time="2025-09-10T00:12:44.218581881Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 10 00:12:44.218647 containerd[1430]: time="2025-09-10T00:12:44.218593944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 10 00:12:44.218647 containerd[1430]: time="2025-09-10T00:12:44.218605502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 10 00:12:44.218647 containerd[1430]: time="2025-09-10T00:12:44.218617409Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 10 00:12:44.218647 containerd[1430]: time="2025-09-10T00:12:44.218628655Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 10 00:12:44.218647 containerd[1430]: time="2025-09-10T00:12:44.218639357Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 10 00:12:44.218731 containerd[1430]: time="2025-09-10T00:12:44.218651342Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 10 00:12:44.218731 containerd[1430]: time="2025-09-10T00:12:44.218662316Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 10 00:12:44.218731 containerd[1430]: time="2025-09-10T00:12:44.218673368Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 10 00:12:44.218731 containerd[1430]: time="2025-09-10T00:12:44.218684186Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Sep 10 00:12:44.218731 containerd[1430]: time="2025-09-10T00:12:44.218697572Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 10 00:12:44.218731 containerd[1430]: time="2025-09-10T00:12:44.218720220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 10 00:12:44.218731 containerd[1430]: time="2025-09-10T00:12:44.218731233Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 10 00:12:44.218841 containerd[1430]: time="2025-09-10T00:12:44.218740922Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 10 00:12:44.218859 containerd[1430]: time="2025-09-10T00:12:44.218844550Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 10 00:12:44.218876 containerd[1430]: time="2025-09-10T00:12:44.218859571Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 10 00:12:44.218876 containerd[1430]: time="2025-09-10T00:12:44.218869183Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 10 00:12:44.218913 containerd[1430]: time="2025-09-10T00:12:44.218880079Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 10 00:12:44.219152 containerd[1430]: time="2025-09-10T00:12:44.218888834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 10 00:12:44.219152 containerd[1430]: time="2025-09-10T00:12:44.219022893Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Sep 10 00:12:44.219152 containerd[1430]: time="2025-09-10T00:12:44.219032660Z" level=info msg="NRI interface is disabled by configuration." Sep 10 00:12:44.219152 containerd[1430]: time="2025-09-10T00:12:44.219042389Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 10 00:12:44.219399 containerd[1430]: time="2025-09-10T00:12:44.219348953Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true 
SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 10 00:12:44.219624 containerd[1430]: time="2025-09-10T00:12:44.219405379Z" level=info msg="Connect containerd service" Sep 10 00:12:44.219624 containerd[1430]: time="2025-09-10T00:12:44.219426353Z" level=info msg="using legacy CRI server" Sep 10 00:12:44.219624 containerd[1430]: time="2025-09-10T00:12:44.219432385Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 10 00:12:44.219624 containerd[1430]: time="2025-09-10T00:12:44.219533600Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 10 00:12:44.220134 containerd[1430]: time="2025-09-10T00:12:44.220111551Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 10 00:12:44.220323 containerd[1430]: time="2025-09-10T00:12:44.220297521Z" level=info msg="Start subscribing containerd event" Sep 10 
00:12:44.220360 containerd[1430]: time="2025-09-10T00:12:44.220334878Z" level=info msg="Start recovering state" Sep 10 00:12:44.220396 containerd[1430]: time="2025-09-10T00:12:44.220384027Z" level=info msg="Start event monitor" Sep 10 00:12:44.220421 containerd[1430]: time="2025-09-10T00:12:44.220396751Z" level=info msg="Start snapshots syncer" Sep 10 00:12:44.220421 containerd[1430]: time="2025-09-10T00:12:44.220405857Z" level=info msg="Start cni network conf syncer for default" Sep 10 00:12:44.220421 containerd[1430]: time="2025-09-10T00:12:44.220412434Z" level=info msg="Start streaming server" Sep 10 00:12:44.220947 containerd[1430]: time="2025-09-10T00:12:44.220927499Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 10 00:12:44.221000 containerd[1430]: time="2025-09-10T00:12:44.220973496Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 10 00:12:44.221033 containerd[1430]: time="2025-09-10T00:12:44.221020387Z" level=info msg="containerd successfully booted in 0.034953s" Sep 10 00:12:44.222592 systemd[1]: Started containerd.service - containerd container runtime. Sep 10 00:12:44.381568 tar[1426]: linux-arm64/LICENSE Sep 10 00:12:44.381742 tar[1426]: linux-arm64/README.md Sep 10 00:12:44.395536 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 10 00:12:44.733263 sshd_keygen[1425]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 10 00:12:44.750989 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 10 00:12:44.758793 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 10 00:12:44.763801 systemd[1]: issuegen.service: Deactivated successfully. Sep 10 00:12:44.763976 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 10 00:12:44.766131 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 10 00:12:44.776361 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
Sep 10 00:12:44.778714 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 10 00:12:44.780466 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 10 00:12:44.781561 systemd[1]: Reached target getty.target - Login Prompts. Sep 10 00:12:44.918651 systemd-networkd[1370]: eth0: Gained IPv6LL Sep 10 00:12:44.921285 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 10 00:12:44.922810 systemd[1]: Reached target network-online.target - Network is Online. Sep 10 00:12:44.935724 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 10 00:12:44.937680 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 00:12:44.939344 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 10 00:12:44.953617 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 10 00:12:44.953785 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 10 00:12:44.954985 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 10 00:12:44.955716 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 10 00:12:45.460053 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 00:12:45.461287 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 10 00:12:45.465081 (kubelet)[1517]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 10 00:12:45.466796 systemd[1]: Startup finished in 502ms (kernel) + 4.809s (initrd) + 3.011s (userspace) = 8.323s. 
Sep 10 00:12:45.819656 kubelet[1517]: E0910 00:12:45.819556 1517 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 10 00:12:45.821913 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 10 00:12:45.822054 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 10 00:12:50.696102 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 10 00:12:50.698329 systemd[1]: Started sshd@0-10.0.0.106:22-10.0.0.1:48610.service - OpenSSH per-connection server daemon (10.0.0.1:48610). Sep 10 00:12:50.746349 sshd[1531]: Accepted publickey for core from 10.0.0.1 port 48610 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o Sep 10 00:12:50.748168 sshd[1531]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:12:50.755593 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 10 00:12:50.768749 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 10 00:12:50.770161 systemd-logind[1417]: New session 1 of user core. Sep 10 00:12:50.777332 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 10 00:12:50.780516 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 10 00:12:50.786801 (systemd)[1535]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:12:50.856656 systemd[1535]: Queued start job for default target default.target. Sep 10 00:12:50.868265 systemd[1535]: Created slice app.slice - User Application Slice. Sep 10 00:12:50.868306 systemd[1535]: Reached target paths.target - Paths. Sep 10 00:12:50.868320 systemd[1535]: Reached target timers.target - Timers. 
Sep 10 00:12:50.869386 systemd[1535]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 10 00:12:50.877927 systemd[1535]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 10 00:12:50.877974 systemd[1535]: Reached target sockets.target - Sockets. Sep 10 00:12:50.877984 systemd[1535]: Reached target basic.target - Basic System. Sep 10 00:12:50.878015 systemd[1535]: Reached target default.target - Main User Target. Sep 10 00:12:50.878046 systemd[1535]: Startup finished in 86ms. Sep 10 00:12:50.878287 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 10 00:12:50.879442 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 10 00:12:50.943202 systemd[1]: Started sshd@1-10.0.0.106:22-10.0.0.1:48614.service - OpenSSH per-connection server daemon (10.0.0.1:48614). Sep 10 00:12:50.978782 sshd[1546]: Accepted publickey for core from 10.0.0.1 port 48614 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o Sep 10 00:12:50.979987 sshd[1546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:12:50.983859 systemd-logind[1417]: New session 2 of user core. Sep 10 00:12:50.995694 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 10 00:12:51.046353 sshd[1546]: pam_unix(sshd:session): session closed for user core Sep 10 00:12:51.055681 systemd[1]: sshd@1-10.0.0.106:22-10.0.0.1:48614.service: Deactivated successfully. Sep 10 00:12:51.057071 systemd[1]: session-2.scope: Deactivated successfully. Sep 10 00:12:51.058242 systemd-logind[1417]: Session 2 logged out. Waiting for processes to exit. Sep 10 00:12:51.059800 systemd[1]: Started sshd@2-10.0.0.106:22-10.0.0.1:48622.service - OpenSSH per-connection server daemon (10.0.0.1:48622). Sep 10 00:12:51.060939 systemd-logind[1417]: Removed session 2. 
Sep 10 00:12:51.095261 sshd[1553]: Accepted publickey for core from 10.0.0.1 port 48622 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o Sep 10 00:12:51.096364 sshd[1553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:12:51.099789 systemd-logind[1417]: New session 3 of user core. Sep 10 00:12:51.111787 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 10 00:12:51.158596 sshd[1553]: pam_unix(sshd:session): session closed for user core Sep 10 00:12:51.167708 systemd[1]: sshd@2-10.0.0.106:22-10.0.0.1:48622.service: Deactivated successfully. Sep 10 00:12:51.169739 systemd[1]: session-3.scope: Deactivated successfully. Sep 10 00:12:51.171707 systemd-logind[1417]: Session 3 logged out. Waiting for processes to exit. Sep 10 00:12:51.173330 systemd[1]: Started sshd@3-10.0.0.106:22-10.0.0.1:48626.service - OpenSSH per-connection server daemon (10.0.0.1:48626). Sep 10 00:12:51.174907 systemd-logind[1417]: Removed session 3. Sep 10 00:12:51.209459 sshd[1560]: Accepted publickey for core from 10.0.0.1 port 48626 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o Sep 10 00:12:51.210611 sshd[1560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:12:51.214354 systemd-logind[1417]: New session 4 of user core. Sep 10 00:12:51.224625 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 10 00:12:51.274674 sshd[1560]: pam_unix(sshd:session): session closed for user core Sep 10 00:12:51.287730 systemd[1]: sshd@3-10.0.0.106:22-10.0.0.1:48626.service: Deactivated successfully. Sep 10 00:12:51.289072 systemd[1]: session-4.scope: Deactivated successfully. Sep 10 00:12:51.290261 systemd-logind[1417]: Session 4 logged out. Waiting for processes to exit. Sep 10 00:12:51.291300 systemd[1]: Started sshd@4-10.0.0.106:22-10.0.0.1:48634.service - OpenSSH per-connection server daemon (10.0.0.1:48634). Sep 10 00:12:51.292914 systemd-logind[1417]: Removed session 4. 
Sep 10 00:12:51.327488 sshd[1567]: Accepted publickey for core from 10.0.0.1 port 48634 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o Sep 10 00:12:51.328661 sshd[1567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:12:51.333093 systemd-logind[1417]: New session 5 of user core. Sep 10 00:12:51.339704 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 10 00:12:51.396197 sudo[1570]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 10 00:12:51.396511 sudo[1570]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 10 00:12:51.415319 sudo[1570]: pam_unix(sudo:session): session closed for user root Sep 10 00:12:51.417045 sshd[1567]: pam_unix(sshd:session): session closed for user core Sep 10 00:12:51.430847 systemd[1]: sshd@4-10.0.0.106:22-10.0.0.1:48634.service: Deactivated successfully. Sep 10 00:12:51.432173 systemd[1]: session-5.scope: Deactivated successfully. Sep 10 00:12:51.434619 systemd-logind[1417]: Session 5 logged out. Waiting for processes to exit. Sep 10 00:12:51.435809 systemd[1]: Started sshd@5-10.0.0.106:22-10.0.0.1:48648.service - OpenSSH per-connection server daemon (10.0.0.1:48648). Sep 10 00:12:51.436496 systemd-logind[1417]: Removed session 5. Sep 10 00:12:51.473130 sshd[1575]: Accepted publickey for core from 10.0.0.1 port 48648 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o Sep 10 00:12:51.474358 sshd[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:12:51.478067 systemd-logind[1417]: New session 6 of user core. Sep 10 00:12:51.485673 systemd[1]: Started session-6.scope - Session 6 of User core. 
Sep 10 00:12:51.537315 sudo[1579]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 10 00:12:51.537612 sudo[1579]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 10 00:12:51.540547 sudo[1579]: pam_unix(sudo:session): session closed for user root
Sep 10 00:12:51.544776 sudo[1578]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Sep 10 00:12:51.545262 sudo[1578]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 10 00:12:51.562740 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Sep 10 00:12:51.563938 auditctl[1582]: No rules
Sep 10 00:12:51.564737 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 10 00:12:51.564933 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Sep 10 00:12:51.566379 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 10 00:12:51.588151 augenrules[1600]: No rules
Sep 10 00:12:51.589211 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 10 00:12:51.590430 sudo[1578]: pam_unix(sudo:session): session closed for user root
Sep 10 00:12:51.592120 sshd[1575]: pam_unix(sshd:session): session closed for user core
Sep 10 00:12:51.602577 systemd[1]: sshd@5-10.0.0.106:22-10.0.0.1:48648.service: Deactivated successfully.
Sep 10 00:12:51.603697 systemd[1]: session-6.scope: Deactivated successfully.
Sep 10 00:12:51.605632 systemd-logind[1417]: Session 6 logged out. Waiting for processes to exit.
Sep 10 00:12:51.615743 systemd[1]: Started sshd@6-10.0.0.106:22-10.0.0.1:48664.service - OpenSSH per-connection server daemon (10.0.0.1:48664).
Sep 10 00:12:51.616768 systemd-logind[1417]: Removed session 6.
Sep 10 00:12:51.647981 sshd[1608]: Accepted publickey for core from 10.0.0.1 port 48664 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o
Sep 10 00:12:51.649180 sshd[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:12:51.653008 systemd-logind[1417]: New session 7 of user core.
Sep 10 00:12:51.664703 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 10 00:12:51.714521 sudo[1611]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 10 00:12:51.714793 sudo[1611]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 10 00:12:51.973857 (dockerd)[1629]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 10 00:12:51.974144 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 10 00:12:52.183875 dockerd[1629]: time="2025-09-10T00:12:52.183815545Z" level=info msg="Starting up"
Sep 10 00:12:52.322401 dockerd[1629]: time="2025-09-10T00:12:52.322301749Z" level=info msg="Loading containers: start."
Sep 10 00:12:52.405539 kernel: Initializing XFRM netlink socket
Sep 10 00:12:52.467274 systemd-networkd[1370]: docker0: Link UP
Sep 10 00:12:52.488893 dockerd[1629]: time="2025-09-10T00:12:52.488784908Z" level=info msg="Loading containers: done."
Sep 10 00:12:52.502077 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1568871889-merged.mount: Deactivated successfully.
Sep 10 00:12:52.504297 dockerd[1629]: time="2025-09-10T00:12:52.504241435Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 10 00:12:52.504379 dockerd[1629]: time="2025-09-10T00:12:52.504363881Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Sep 10 00:12:52.504507 dockerd[1629]: time="2025-09-10T00:12:52.504486487Z" level=info msg="Daemon has completed initialization"
Sep 10 00:12:52.530424 dockerd[1629]: time="2025-09-10T00:12:52.530298497Z" level=info msg="API listen on /run/docker.sock"
Sep 10 00:12:52.530547 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 10 00:12:53.017309 containerd[1430]: time="2025-09-10T00:12:53.017268310Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\""
Sep 10 00:12:53.618101 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1596665467.mount: Deactivated successfully.
Sep 10 00:12:54.492338 containerd[1430]: time="2025-09-10T00:12:54.492291618Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:12:54.493255 containerd[1430]: time="2025-09-10T00:12:54.493204252Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.12: active requests=0, bytes read=25652443"
Sep 10 00:12:54.493943 containerd[1430]: time="2025-09-10T00:12:54.493914265Z" level=info msg="ImageCreate event name:\"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:12:54.497526 containerd[1430]: time="2025-09-10T00:12:54.497152131Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:12:54.498650 containerd[1430]: time="2025-09-10T00:12:54.498611870Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.12\" with image id \"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\", size \"25649241\" in 1.481300902s"
Sep 10 00:12:54.498691 containerd[1430]: time="2025-09-10T00:12:54.498655159Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\""
Sep 10 00:12:54.499959 containerd[1430]: time="2025-09-10T00:12:54.499903697Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\""
Sep 10 00:12:55.505519 containerd[1430]: time="2025-09-10T00:12:55.505468075Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:12:55.507147 containerd[1430]: time="2025-09-10T00:12:55.507115792Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.12: active requests=0, bytes read=22460311"
Sep 10 00:12:55.508536 containerd[1430]: time="2025-09-10T00:12:55.508178191Z" level=info msg="ImageCreate event name:\"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:12:55.513941 containerd[1430]: time="2025-09-10T00:12:55.513760285Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:12:55.515010 containerd[1430]: time="2025-09-10T00:12:55.514891530Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.12\" with image id \"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\", size \"23997423\" in 1.014942272s"
Sep 10 00:12:55.515010 containerd[1430]: time="2025-09-10T00:12:55.514926788Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\""
Sep 10 00:12:55.515806 containerd[1430]: time="2025-09-10T00:12:55.515784037Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\""
Sep 10 00:12:55.830730 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 10 00:12:55.841762 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 10 00:12:55.938160 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 10 00:12:55.942994 (kubelet)[1843]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 10 00:12:55.984797 kubelet[1843]: E0910 00:12:55.984759 1843 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 10 00:12:55.988007 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 10 00:12:55.988160 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 10 00:12:56.640549 containerd[1430]: time="2025-09-10T00:12:56.639571455Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:12:56.641210 containerd[1430]: time="2025-09-10T00:12:56.641177385Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.12: active requests=0, bytes read=17125905"
Sep 10 00:12:56.642270 containerd[1430]: time="2025-09-10T00:12:56.642248654Z" level=info msg="ImageCreate event name:\"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:12:56.646005 containerd[1430]: time="2025-09-10T00:12:56.645963309Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:12:56.647394 containerd[1430]: time="2025-09-10T00:12:56.647216418Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.12\" with image id \"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\", size \"18663035\" in 1.131405265s"
Sep 10 00:12:56.647394 containerd[1430]: time="2025-09-10T00:12:56.647248522Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\""
Sep 10 00:12:56.647928 containerd[1430]: time="2025-09-10T00:12:56.647752471Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\""
Sep 10 00:12:57.586053 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3318142916.mount: Deactivated successfully.
Sep 10 00:12:57.797066 containerd[1430]: time="2025-09-10T00:12:57.796618847Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:12:57.797430 containerd[1430]: time="2025-09-10T00:12:57.797147226Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.12: active requests=0, bytes read=26916097"
Sep 10 00:12:57.797889 containerd[1430]: time="2025-09-10T00:12:57.797861193Z" level=info msg="ImageCreate event name:\"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:12:57.799918 containerd[1430]: time="2025-09-10T00:12:57.799885381Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:12:57.802744 containerd[1430]: time="2025-09-10T00:12:57.800564556Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.12\" with image id \"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\", repo tag \"registry.k8s.io/kube-proxy:v1.31.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\", size \"26915114\" in 1.152781487s"
Sep 10 00:12:57.802744 containerd[1430]: time="2025-09-10T00:12:57.800829840Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\""
Sep 10 00:12:57.803079 containerd[1430]: time="2025-09-10T00:12:57.803059241Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 10 00:12:58.290105 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3426649125.mount: Deactivated successfully.
Sep 10 00:12:58.897635 containerd[1430]: time="2025-09-10T00:12:58.896945584Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:12:58.897635 containerd[1430]: time="2025-09-10T00:12:58.897584935Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624"
Sep 10 00:12:58.899141 containerd[1430]: time="2025-09-10T00:12:58.899097775Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:12:58.901707 containerd[1430]: time="2025-09-10T00:12:58.901652991Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:12:58.903073 containerd[1430]: time="2025-09-10T00:12:58.903015583Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.099815771s"
Sep 10 00:12:58.903073 containerd[1430]: time="2025-09-10T00:12:58.903052547Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Sep 10 00:12:58.903536 containerd[1430]: time="2025-09-10T00:12:58.903510980Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 10 00:12:59.394905 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4223020946.mount: Deactivated successfully.
Sep 10 00:12:59.398580 containerd[1430]: time="2025-09-10T00:12:59.398542557Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:12:59.399176 containerd[1430]: time="2025-09-10T00:12:59.399016893Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Sep 10 00:12:59.399801 containerd[1430]: time="2025-09-10T00:12:59.399767572Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:12:59.401890 containerd[1430]: time="2025-09-10T00:12:59.401860158Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:12:59.402817 containerd[1430]: time="2025-09-10T00:12:59.402784997Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 499.241351ms"
Sep 10 00:12:59.402817 containerd[1430]: time="2025-09-10T00:12:59.402817518Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Sep 10 00:12:59.403414 containerd[1430]: time="2025-09-10T00:12:59.403389016Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Sep 10 00:12:59.888415 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3953032160.mount: Deactivated successfully.
Sep 10 00:13:01.434479 containerd[1430]: time="2025-09-10T00:13:01.434432026Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:13:01.436594 containerd[1430]: time="2025-09-10T00:13:01.436564388Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66537163"
Sep 10 00:13:01.437240 containerd[1430]: time="2025-09-10T00:13:01.437189749Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:13:01.440574 containerd[1430]: time="2025-09-10T00:13:01.440540963Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:13:01.442929 containerd[1430]: time="2025-09-10T00:13:01.442891192Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.039465583s"
Sep 10 00:13:01.442994 containerd[1430]: time="2025-09-10T00:13:01.442929205Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
Sep 10 00:13:06.082007 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 10 00:13:06.091679 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 10 00:13:06.222135 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 10 00:13:06.225652 (kubelet)[2005]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 10 00:13:06.262148 kubelet[2005]: E0910 00:13:06.262086 2005 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 10 00:13:06.264649 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 10 00:13:06.264788 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 10 00:13:06.786268 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 10 00:13:06.796720 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 10 00:13:06.816496 systemd[1]: Reloading requested from client PID 2020 ('systemctl') (unit session-7.scope)...
Sep 10 00:13:06.816522 systemd[1]: Reloading...
Sep 10 00:13:06.880581 zram_generator::config[2060]: No configuration found.
Sep 10 00:13:07.189793 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 10 00:13:07.242585 systemd[1]: Reloading finished in 425 ms.
Sep 10 00:13:07.291211 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 10 00:13:07.293938 systemd[1]: kubelet.service: Deactivated successfully.
Sep 10 00:13:07.294109 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 10 00:13:07.295610 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 10 00:13:07.393674 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 10 00:13:07.397199 (kubelet)[2106]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 10 00:13:07.430566 kubelet[2106]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 10 00:13:07.430566 kubelet[2106]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 10 00:13:07.430566 kubelet[2106]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 10 00:13:07.430885 kubelet[2106]: I0910 00:13:07.430624 2106 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 10 00:13:08.594543 kubelet[2106]: I0910 00:13:08.593982 2106 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 10 00:13:08.594543 kubelet[2106]: I0910 00:13:08.594016 2106 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 10 00:13:08.594543 kubelet[2106]: I0910 00:13:08.594257 2106 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 10 00:13:08.612457 kubelet[2106]: E0910 00:13:08.612402 2106 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.106:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="UnhandledError"
Sep 10 00:13:08.612878 kubelet[2106]: I0910 00:13:08.612864 2106 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 10 00:13:08.620531 kubelet[2106]: E0910 00:13:08.620450 2106 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 10 00:13:08.620531 kubelet[2106]: I0910 00:13:08.620476 2106 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 10 00:13:08.624008 kubelet[2106]: I0910 00:13:08.623677 2106 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 10 00:13:08.624426 kubelet[2106]: I0910 00:13:08.624412 2106 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 10 00:13:08.624631 kubelet[2106]: I0910 00:13:08.624608 2106 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 10 00:13:08.624885 kubelet[2106]: I0910 00:13:08.624698 2106 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 10 00:13:08.625090 kubelet[2106]: I0910 00:13:08.625077 2106 topology_manager.go:138] "Creating topology manager with none policy"
Sep 10 00:13:08.625144 kubelet[2106]: I0910 00:13:08.625136 2106 container_manager_linux.go:300] "Creating device plugin manager"
Sep 10 00:13:08.625412 kubelet[2106]: I0910 00:13:08.625400 2106 state_mem.go:36] "Initialized new in-memory state store"
Sep 10 00:13:08.627397 kubelet[2106]: I0910 00:13:08.627381 2106 kubelet.go:408] "Attempting to sync node with API server"
Sep 10 00:13:08.627513 kubelet[2106]: I0910 00:13:08.627493 2106 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 10 00:13:08.627710 kubelet[2106]: I0910 00:13:08.627566 2106 kubelet.go:314] "Adding apiserver pod source"
Sep 10 00:13:08.627710 kubelet[2106]: I0910 00:13:08.627579 2106 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 10 00:13:08.631043 kubelet[2106]: W0910 00:13:08.630996 2106 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused
Sep 10 00:13:08.631104 kubelet[2106]: E0910 00:13:08.631053 2106 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="UnhandledError"
Sep 10 00:13:08.631104 kubelet[2106]: W0910 00:13:08.630996 2106 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.106:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused
Sep 10 00:13:08.631160 kubelet[2106]: E0910 00:13:08.631113 2106 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.106:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="UnhandledError"
Sep 10 00:13:08.631535 kubelet[2106]: I0910 00:13:08.631514 2106 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Sep 10 00:13:08.632247 kubelet[2106]: I0910 00:13:08.632233 2106 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 10 00:13:08.632457 kubelet[2106]: W0910 00:13:08.632447 2106 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 10 00:13:08.633446 kubelet[2106]: I0910 00:13:08.633323 2106 server.go:1274] "Started kubelet"
Sep 10 00:13:08.634022 kubelet[2106]: I0910 00:13:08.633980 2106 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 10 00:13:08.634856 kubelet[2106]: I0910 00:13:08.634162 2106 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 10 00:13:08.634856 kubelet[2106]: I0910 00:13:08.634263 2106 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 10 00:13:08.634856 kubelet[2106]: I0910 00:13:08.634666 2106 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 10 00:13:08.635109 kubelet[2106]: I0910 00:13:08.635086 2106 server.go:449] "Adding debug handlers to kubelet server"
Sep 10 00:13:08.636592 kubelet[2106]: I0910 00:13:08.635994 2106 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 10 00:13:08.636787 kubelet[2106]: E0910 00:13:08.635736 2106 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.106:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.106:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1863c37568503565 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-10 00:13:08.633302373 +0000 UTC m=+1.233318180,LastTimestamp:2025-09-10 00:13:08.633302373 +0000 UTC m=+1.233318180,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 10 00:13:08.636973 kubelet[2106]: E0910 00:13:08.636958 2106 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 10 00:13:08.637042 kubelet[2106]: I0910 00:13:08.637034 2106 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 10 00:13:08.637272 kubelet[2106]: I0910 00:13:08.637258 2106 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Sep 10 00:13:08.637953 kubelet[2106]: I0910 00:13:08.637530 2106 reconciler.go:26] "Reconciler: start to sync state"
Sep 10 00:13:08.637953 kubelet[2106]: I0910 00:13:08.637735 2106 factory.go:221] Registration of the systemd container factory successfully
Sep 10 00:13:08.637953 kubelet[2106]: E0910 00:13:08.637326 2106 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 10 00:13:08.637953 kubelet[2106]: I0910 00:13:08.637820 2106 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 10 00:13:08.638168 kubelet[2106]: E0910 00:13:08.638126 2106 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.106:6443: connect: connection refused" interval="200ms"
Sep 10 00:13:08.638587 kubelet[2106]: W0910 00:13:08.638552 2106 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused
Sep 10 00:13:08.638694 kubelet[2106]: E0910 00:13:08.638679 2106 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="UnhandledError"
Sep 10 00:13:08.641398 kubelet[2106]: I0910 00:13:08.641376 2106 factory.go:221] Registration of the containerd container factory successfully
Sep 10 00:13:08.649580 kubelet[2106]: I0910 00:13:08.649544 2106 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 10 00:13:08.650791 kubelet[2106]: I0910 00:13:08.650766 2106 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 10 00:13:08.650791 kubelet[2106]: I0910 00:13:08.650794 2106 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 10 00:13:08.650895 kubelet[2106]: I0910 00:13:08.650810 2106 kubelet.go:2321] "Starting kubelet main sync loop"
Sep 10 00:13:08.650895 kubelet[2106]: E0910 00:13:08.650854 2106 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 10 00:13:08.652307 kubelet[2106]: W0910 00:13:08.652166 2106 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused
Sep 10 00:13:08.652307 kubelet[2106]: E0910 00:13:08.652213 2106 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="UnhandledError"
Sep 10 00:13:08.652706 kubelet[2106]: I0910 00:13:08.652677 2106 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 10 00:13:08.652706 kubelet[2106]: I0910 00:13:08.652695 2106 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 10 00:13:08.652706 kubelet[2106]: I0910 00:13:08.652710 2106 state_mem.go:36] "Initialized new in-memory state store"
Sep 10 00:13:08.729096 kubelet[2106]: I0910 00:13:08.729048 2106 policy_none.go:49] "None policy: Start"
Sep 10 00:13:08.729876 kubelet[2106]: I0910 00:13:08.729831 2106 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep 10 00:13:08.729876 kubelet[2106]: I0910 00:13:08.729869 2106 state_mem.go:35] "Initializing new in-memory state store"
Sep 10 00:13:08.736321 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Sep 10 00:13:08.737657 kubelet[2106]: E0910 00:13:08.737613 2106 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 10 00:13:08.745940 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Sep 10 00:13:08.748465 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Sep 10 00:13:08.751341 kubelet[2106]: E0910 00:13:08.751321 2106 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 10 00:13:08.758687 kubelet[2106]: I0910 00:13:08.758148 2106 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 10 00:13:08.758687 kubelet[2106]: I0910 00:13:08.758317 2106 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 10 00:13:08.758687 kubelet[2106]: I0910 00:13:08.758327 2106 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 10 00:13:08.758687 kubelet[2106]: I0910 00:13:08.758497 2106 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 10 00:13:08.759840 kubelet[2106]: E0910 00:13:08.759823 2106 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Sep 10 00:13:08.838952 kubelet[2106]: E0910 00:13:08.838914 2106 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.106:6443: connect: connection refused" interval="400ms"
Sep 10 00:13:08.860307 kubelet[2106]: I0910 00:13:08.860204 2106 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 10 00:13:08.860779 kubelet[2106]: E0910 00:13:08.860751 2106 kubelet_node_status.go:95] "Unable
to register node with API server" err="Post \"https://10.0.0.106:6443/api/v1/nodes\": dial tcp 10.0.0.106:6443: connect: connection refused" node="localhost" Sep 10 00:13:08.959057 systemd[1]: Created slice kubepods-burstable-pod057217e72b388d2608ba733f970ffa25.slice - libcontainer container kubepods-burstable-pod057217e72b388d2608ba733f970ffa25.slice. Sep 10 00:13:08.977036 systemd[1]: Created slice kubepods-burstable-podfec3f691a145cb26ff55e4af388500b7.slice - libcontainer container kubepods-burstable-podfec3f691a145cb26ff55e4af388500b7.slice. Sep 10 00:13:08.989779 systemd[1]: Created slice kubepods-burstable-pod5dc878868de11c6196259ae42039f4ff.slice - libcontainer container kubepods-burstable-pod5dc878868de11c6196259ae42039f4ff.slice. Sep 10 00:13:09.040342 kubelet[2106]: I0910 00:13:09.040303 2106 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:13:09.040342 kubelet[2106]: I0910 00:13:09.040340 2106 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:13:09.040433 kubelet[2106]: I0910 00:13:09.040360 2106 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:13:09.040433 kubelet[2106]: I0910 
00:13:09.040376 2106 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/057217e72b388d2608ba733f970ffa25-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"057217e72b388d2608ba733f970ffa25\") " pod="kube-system/kube-apiserver-localhost" Sep 10 00:13:09.040433 kubelet[2106]: I0910 00:13:09.040390 2106 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/057217e72b388d2608ba733f970ffa25-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"057217e72b388d2608ba733f970ffa25\") " pod="kube-system/kube-apiserver-localhost" Sep 10 00:13:09.040433 kubelet[2106]: I0910 00:13:09.040405 2106 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/057217e72b388d2608ba733f970ffa25-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"057217e72b388d2608ba733f970ffa25\") " pod="kube-system/kube-apiserver-localhost" Sep 10 00:13:09.040433 kubelet[2106]: I0910 00:13:09.040421 2106 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:13:09.040564 kubelet[2106]: I0910 00:13:09.040435 2106 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:13:09.040564 kubelet[2106]: 
I0910 00:13:09.040459 2106 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Sep 10 00:13:09.062178 kubelet[2106]: I0910 00:13:09.062157 2106 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 10 00:13:09.062464 kubelet[2106]: E0910 00:13:09.062422 2106 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.106:6443/api/v1/nodes\": dial tcp 10.0.0.106:6443: connect: connection refused" node="localhost" Sep 10 00:13:09.239812 kubelet[2106]: E0910 00:13:09.239725 2106 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.106:6443: connect: connection refused" interval="800ms" Sep 10 00:13:09.276556 kubelet[2106]: E0910 00:13:09.276308 2106 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:13:09.276938 containerd[1430]: time="2025-09-10T00:13:09.276884963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:057217e72b388d2608ba733f970ffa25,Namespace:kube-system,Attempt:0,}" Sep 10 00:13:09.279397 kubelet[2106]: E0910 00:13:09.279368 2106 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:13:09.279733 containerd[1430]: time="2025-09-10T00:13:09.279690017Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,}" Sep 10 00:13:09.292187 kubelet[2106]: E0910 00:13:09.292150 2106 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:13:09.292607 containerd[1430]: time="2025-09-10T00:13:09.292578943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,}" Sep 10 00:13:09.463524 kubelet[2106]: I0910 00:13:09.463483 2106 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 10 00:13:09.463866 kubelet[2106]: E0910 00:13:09.463842 2106 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.106:6443/api/v1/nodes\": dial tcp 10.0.0.106:6443: connect: connection refused" node="localhost" Sep 10 00:13:09.652128 kubelet[2106]: W0910 00:13:09.652032 2106 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.106:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused Sep 10 00:13:09.652128 kubelet[2106]: E0910 00:13:09.652099 2106 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.106:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:13:09.757143 kubelet[2106]: W0910 00:13:09.757087 2106 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 
10.0.0.106:6443: connect: connection refused Sep 10 00:13:09.757246 kubelet[2106]: E0910 00:13:09.757151 2106 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:13:09.782421 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3019194358.mount: Deactivated successfully. Sep 10 00:13:09.787189 containerd[1430]: time="2025-09-10T00:13:09.787146950Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 10 00:13:09.788018 containerd[1430]: time="2025-09-10T00:13:09.787970876Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 10 00:13:09.790164 containerd[1430]: time="2025-09-10T00:13:09.788561706Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 10 00:13:09.790164 containerd[1430]: time="2025-09-10T00:13:09.789777932Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 10 00:13:09.790164 containerd[1430]: time="2025-09-10T00:13:09.789807104Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 10 00:13:09.790628 containerd[1430]: time="2025-09-10T00:13:09.790597302Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} 
labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 10 00:13:09.791181 containerd[1430]: time="2025-09-10T00:13:09.791139259Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Sep 10 00:13:09.793423 containerd[1430]: time="2025-09-10T00:13:09.793393165Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 10 00:13:09.795011 containerd[1430]: time="2025-09-10T00:13:09.794983470Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 502.340669ms" Sep 10 00:13:09.795864 containerd[1430]: time="2025-09-10T00:13:09.795732868Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 515.965325ms" Sep 10 00:13:09.799289 containerd[1430]: time="2025-09-10T00:13:09.799254790Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 522.290303ms" Sep 10 00:13:09.804152 kubelet[2106]: W0910 00:13:09.803986 2106 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://10.0.0.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused Sep 10 00:13:09.804152 kubelet[2106]: E0910 00:13:09.804057 2106 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:13:09.885420 containerd[1430]: time="2025-09-10T00:13:09.884881665Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:13:09.885420 containerd[1430]: time="2025-09-10T00:13:09.885308254Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:13:09.885420 containerd[1430]: time="2025-09-10T00:13:09.885340223Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:13:09.885677 containerd[1430]: time="2025-09-10T00:13:09.885463464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:13:09.886175 containerd[1430]: time="2025-09-10T00:13:09.886014093Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:13:09.886379 containerd[1430]: time="2025-09-10T00:13:09.886346332Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:13:09.886481 containerd[1430]: time="2025-09-10T00:13:09.886420421Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:13:09.886993 containerd[1430]: time="2025-09-10T00:13:09.886950350Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:13:09.889158 containerd[1430]: time="2025-09-10T00:13:09.888657423Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:13:09.889158 containerd[1430]: time="2025-09-10T00:13:09.888710771Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:13:09.889158 containerd[1430]: time="2025-09-10T00:13:09.888725797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:13:09.889158 containerd[1430]: time="2025-09-10T00:13:09.888800245Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:13:09.909692 systemd[1]: Started cri-containerd-cd0b142998c8b07a575619b2e9b8535883dfc4848cbf5534f63aa677aa891bb7.scope - libcontainer container cd0b142998c8b07a575619b2e9b8535883dfc4848cbf5534f63aa677aa891bb7. Sep 10 00:13:09.914239 systemd[1]: Started cri-containerd-8405ed34f9fc92dc1240a9d416e65cc20a64a5ee9de1515b7bc848c2cf3a4708.scope - libcontainer container 8405ed34f9fc92dc1240a9d416e65cc20a64a5ee9de1515b7bc848c2cf3a4708. Sep 10 00:13:09.915872 systemd[1]: Started cri-containerd-e5a267ff850aaffd9299125bcd268c41b0c191c0b5439b7ab2d601e70dc178bf.scope - libcontainer container e5a267ff850aaffd9299125bcd268c41b0c191c0b5439b7ab2d601e70dc178bf. 
Sep 10 00:13:09.944840 containerd[1430]: time="2025-09-10T00:13:09.944727811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:057217e72b388d2608ba733f970ffa25,Namespace:kube-system,Attempt:0,} returns sandbox id \"cd0b142998c8b07a575619b2e9b8535883dfc4848cbf5534f63aa677aa891bb7\"" Sep 10 00:13:09.945654 kubelet[2106]: E0910 00:13:09.945632 2106 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:13:09.947627 containerd[1430]: time="2025-09-10T00:13:09.947475041Z" level=info msg="CreateContainer within sandbox \"cd0b142998c8b07a575619b2e9b8535883dfc4848cbf5534f63aa677aa891bb7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 10 00:13:09.947826 containerd[1430]: time="2025-09-10T00:13:09.947542136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"e5a267ff850aaffd9299125bcd268c41b0c191c0b5439b7ab2d601e70dc178bf\"" Sep 10 00:13:09.948373 kubelet[2106]: E0910 00:13:09.948340 2106 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:13:09.951296 containerd[1430]: time="2025-09-10T00:13:09.951270899Z" level=info msg="CreateContainer within sandbox \"e5a267ff850aaffd9299125bcd268c41b0c191c0b5439b7ab2d601e70dc178bf\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 10 00:13:09.954520 containerd[1430]: time="2025-09-10T00:13:09.954263652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"8405ed34f9fc92dc1240a9d416e65cc20a64a5ee9de1515b7bc848c2cf3a4708\"" Sep 10 00:13:09.955130 
kubelet[2106]: E0910 00:13:09.954971 2106 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:13:09.956009 containerd[1430]: time="2025-09-10T00:13:09.955981515Z" level=info msg="CreateContainer within sandbox \"8405ed34f9fc92dc1240a9d416e65cc20a64a5ee9de1515b7bc848c2cf3a4708\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 10 00:13:09.964434 containerd[1430]: time="2025-09-10T00:13:09.964381052Z" level=info msg="CreateContainer within sandbox \"e5a267ff850aaffd9299125bcd268c41b0c191c0b5439b7ab2d601e70dc178bf\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7227b376df056fae8976c3f46af59b65a109c1c4b2e1c3304d2bec87567355f0\"" Sep 10 00:13:09.964920 containerd[1430]: time="2025-09-10T00:13:09.964894996Z" level=info msg="StartContainer for \"7227b376df056fae8976c3f46af59b65a109c1c4b2e1c3304d2bec87567355f0\"" Sep 10 00:13:09.966731 containerd[1430]: time="2025-09-10T00:13:09.966704690Z" level=info msg="CreateContainer within sandbox \"cd0b142998c8b07a575619b2e9b8535883dfc4848cbf5534f63aa677aa891bb7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"77764857fa34fe9bf33ddaeed8cbbb55a769975b7c22bd487da0087312df6024\"" Sep 10 00:13:09.967079 containerd[1430]: time="2025-09-10T00:13:09.967057230Z" level=info msg="StartContainer for \"77764857fa34fe9bf33ddaeed8cbbb55a769975b7c22bd487da0087312df6024\"" Sep 10 00:13:09.977707 containerd[1430]: time="2025-09-10T00:13:09.977649731Z" level=info msg="CreateContainer within sandbox \"8405ed34f9fc92dc1240a9d416e65cc20a64a5ee9de1515b7bc848c2cf3a4708\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f9e669be8e529452e2dddbd461ee99fb126314fc5bd0c8f5d25b749c0ef5fe72\"" Sep 10 00:13:09.978217 containerd[1430]: time="2025-09-10T00:13:09.978197523Z" level=info msg="StartContainer for 
\"f9e669be8e529452e2dddbd461ee99fb126314fc5bd0c8f5d25b749c0ef5fe72\"" Sep 10 00:13:09.992745 systemd[1]: Started cri-containerd-7227b376df056fae8976c3f46af59b65a109c1c4b2e1c3304d2bec87567355f0.scope - libcontainer container 7227b376df056fae8976c3f46af59b65a109c1c4b2e1c3304d2bec87567355f0. Sep 10 00:13:09.994326 systemd[1]: Started cri-containerd-77764857fa34fe9bf33ddaeed8cbbb55a769975b7c22bd487da0087312df6024.scope - libcontainer container 77764857fa34fe9bf33ddaeed8cbbb55a769975b7c22bd487da0087312df6024. Sep 10 00:13:10.008764 systemd[1]: Started cri-containerd-f9e669be8e529452e2dddbd461ee99fb126314fc5bd0c8f5d25b749c0ef5fe72.scope - libcontainer container f9e669be8e529452e2dddbd461ee99fb126314fc5bd0c8f5d25b749c0ef5fe72. Sep 10 00:13:10.026767 kubelet[2106]: W0910 00:13:10.026708 2106 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused Sep 10 00:13:10.026889 kubelet[2106]: E0910 00:13:10.026770 2106 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.106:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:13:10.040360 kubelet[2106]: E0910 00:13:10.040321 2106 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.106:6443: connect: connection refused" interval="1.6s" Sep 10 00:13:10.042551 containerd[1430]: time="2025-09-10T00:13:10.041732806Z" level=info msg="StartContainer for \"77764857fa34fe9bf33ddaeed8cbbb55a769975b7c22bd487da0087312df6024\" returns successfully" Sep 10 00:13:10.046302 containerd[1430]: 
time="2025-09-10T00:13:10.046267139Z" level=info msg="StartContainer for \"7227b376df056fae8976c3f46af59b65a109c1c4b2e1c3304d2bec87567355f0\" returns successfully" Sep 10 00:13:10.049344 containerd[1430]: time="2025-09-10T00:13:10.049311530Z" level=info msg="StartContainer for \"f9e669be8e529452e2dddbd461ee99fb126314fc5bd0c8f5d25b749c0ef5fe72\" returns successfully" Sep 10 00:13:10.265972 kubelet[2106]: I0910 00:13:10.265874 2106 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 10 00:13:10.662302 kubelet[2106]: E0910 00:13:10.662271 2106 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:13:10.664857 kubelet[2106]: E0910 00:13:10.664833 2106 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:13:10.665699 kubelet[2106]: E0910 00:13:10.665681 2106 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:13:11.667230 kubelet[2106]: E0910 00:13:11.667119 2106 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:13:11.668076 kubelet[2106]: E0910 00:13:11.667929 2106 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:13:12.360649 kubelet[2106]: E0910 00:13:12.360598 2106 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 10 00:13:12.450853 kubelet[2106]: I0910 00:13:12.450704 2106 kubelet_node_status.go:75] 
"Successfully registered node" node="localhost" Sep 10 00:13:12.630274 kubelet[2106]: I0910 00:13:12.629884 2106 apiserver.go:52] "Watching apiserver" Sep 10 00:13:12.638323 kubelet[2106]: I0910 00:13:12.638301 2106 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 10 00:13:14.338203 systemd[1]: Reloading requested from client PID 2390 ('systemctl') (unit session-7.scope)... Sep 10 00:13:14.338222 systemd[1]: Reloading... Sep 10 00:13:14.418608 zram_generator::config[2432]: No configuration found. Sep 10 00:13:14.497368 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 10 00:13:14.562223 systemd[1]: Reloading finished in 223 ms. Sep 10 00:13:14.594002 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 00:13:14.617361 systemd[1]: kubelet.service: Deactivated successfully. Sep 10 00:13:14.620554 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 00:13:14.620614 systemd[1]: kubelet.service: Consumed 1.586s CPU time, 129.5M memory peak, 0B memory swap peak. Sep 10 00:13:14.629786 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 00:13:14.733215 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 00:13:14.739598 (kubelet)[2471]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 10 00:13:14.782971 kubelet[2471]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 10 00:13:14.782971 kubelet[2471]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Sep 10 00:13:14.782971 kubelet[2471]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 10 00:13:14.782971 kubelet[2471]: I0910 00:13:14.782882 2471 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 10 00:13:14.788364 kubelet[2471]: I0910 00:13:14.788327 2471 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 10 00:13:14.788364 kubelet[2471]: I0910 00:13:14.788355 2471 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 10 00:13:14.788617 kubelet[2471]: I0910 00:13:14.788592 2471 server.go:934] "Client rotation is on, will bootstrap in background" Sep 10 00:13:14.789919 kubelet[2471]: I0910 00:13:14.789902 2471 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 10 00:13:14.792033 kubelet[2471]: I0910 00:13:14.792006 2471 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 10 00:13:14.797701 kubelet[2471]: E0910 00:13:14.796997 2471 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 10 00:13:14.797701 kubelet[2471]: I0910 00:13:14.797023 2471 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 10 00:13:14.799587 kubelet[2471]: I0910 00:13:14.799556 2471 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /"
Sep 10 00:13:14.799692 kubelet[2471]: I0910 00:13:14.799675 2471 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 10 00:13:14.799804 kubelet[2471]: I0910 00:13:14.799778 2471 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 10 00:13:14.799946 kubelet[2471]: I0910 00:13:14.799802 2471 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 10 00:13:14.800014 kubelet[2471]: I0910 00:13:14.799953 2471 topology_manager.go:138] "Creating topology manager with none policy"
Sep 10 00:13:14.800014 kubelet[2471]: I0910 00:13:14.799962 2471 container_manager_linux.go:300] "Creating device plugin manager"
Sep 10 00:13:14.800014 kubelet[2471]: I0910 00:13:14.799992 2471 state_mem.go:36] "Initialized new in-memory state store"
Sep 10 00:13:14.800078 kubelet[2471]: I0910 00:13:14.800066 2471 kubelet.go:408] "Attempting to sync node with API server"
Sep 10 00:13:14.800078 kubelet[2471]: I0910 00:13:14.800077 2471 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 10 00:13:14.800113 kubelet[2471]: I0910 00:13:14.800093 2471 kubelet.go:314] "Adding apiserver pod source"
Sep 10 00:13:14.800113 kubelet[2471]: I0910 00:13:14.800105 2471 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 10 00:13:14.801178 kubelet[2471]: I0910 00:13:14.801155 2471 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Sep 10 00:13:14.804527 kubelet[2471]: I0910 00:13:14.801721 2471 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 10 00:13:14.804527 kubelet[2471]: I0910 00:13:14.802074 2471 server.go:1274] "Started kubelet"
Sep 10 00:13:14.804527 kubelet[2471]: I0910 00:13:14.802461 2471 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 10 00:13:14.804527 kubelet[2471]: I0910 00:13:14.803546 2471 server.go:449] "Adding debug handlers to kubelet server"
Sep 10 00:13:14.804527 kubelet[2471]: I0910 00:13:14.803810 2471 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 10 00:13:14.804831 kubelet[2471]: I0910 00:13:14.804784 2471 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 10 00:13:14.805046 kubelet[2471]: I0910 00:13:14.805030 2471 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 10 00:13:14.805130 kubelet[2471]: I0910 00:13:14.804849 2471 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 10 00:13:14.805710 kubelet[2471]: I0910 00:13:14.805693 2471 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 10 00:13:14.805876 kubelet[2471]: I0910 00:13:14.805848 2471 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Sep 10 00:13:14.806033 kubelet[2471]: I0910 00:13:14.806022 2471 reconciler.go:26] "Reconciler: start to sync state"
Sep 10 00:13:14.808810 kubelet[2471]: E0910 00:13:14.806758 2471 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 10 00:13:14.809155 kubelet[2471]: I0910 00:13:14.809134 2471 factory.go:221] Registration of the systemd container factory successfully
Sep 10 00:13:14.809328 kubelet[2471]: I0910 00:13:14.809292 2471 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 10 00:13:14.815514 kubelet[2471]: I0910 00:13:14.813542 2471 factory.go:221] Registration of the containerd container factory successfully
Sep 10 00:13:14.825920 kubelet[2471]: I0910 00:13:14.825885 2471 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 10 00:13:14.827380 kubelet[2471]: I0910 00:13:14.827363 2471 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 10 00:13:14.827467 kubelet[2471]: I0910 00:13:14.827457 2471 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 10 00:13:14.827573 kubelet[2471]: I0910 00:13:14.827554 2471 kubelet.go:2321] "Starting kubelet main sync loop"
Sep 10 00:13:14.827628 kubelet[2471]: E0910 00:13:14.827610 2471 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 10 00:13:14.858524 kubelet[2471]: I0910 00:13:14.858415 2471 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 10 00:13:14.858524 kubelet[2471]: I0910 00:13:14.858436 2471 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 10 00:13:14.858524 kubelet[2471]: I0910 00:13:14.858457 2471 state_mem.go:36] "Initialized new in-memory state store"
Sep 10 00:13:14.858648 kubelet[2471]: I0910 00:13:14.858603 2471 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 10 00:13:14.858648 kubelet[2471]: I0910 00:13:14.858614 2471 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 10 00:13:14.858648 kubelet[2471]: I0910 00:13:14.858631 2471 policy_none.go:49] "None policy: Start"
Sep 10 00:13:14.860873 kubelet[2471]: I0910 00:13:14.860833 2471 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep 10 00:13:14.860873 kubelet[2471]: I0910 00:13:14.860871 2471 state_mem.go:35] "Initializing new in-memory state store"
Sep 10 00:13:14.862270 kubelet[2471]: I0910 00:13:14.862020 2471 state_mem.go:75] "Updated machine memory state"
Sep 10 00:13:14.866332 kubelet[2471]: I0910 00:13:14.866306 2471 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 10 00:13:14.866527 kubelet[2471]: I0910 00:13:14.866454 2471 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 10 00:13:14.866527 kubelet[2471]: I0910 00:13:14.866474 2471 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 10 00:13:14.866908 kubelet[2471]: I0910 00:13:14.866892 2471 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 10 00:13:14.970344 kubelet[2471]: I0910 00:13:14.970314 2471 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 10 00:13:15.005286 kubelet[2471]: I0910 00:13:15.005253 2471 kubelet_node_status.go:111] "Node was previously registered" node="localhost"
Sep 10 00:13:15.005394 kubelet[2471]: I0910 00:13:15.005333 2471 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Sep 10 00:13:15.007887 kubelet[2471]: I0910 00:13:15.007804 2471 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost"
Sep 10 00:13:15.007887 kubelet[2471]: I0910 00:13:15.007839 2471 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/057217e72b388d2608ba733f970ffa25-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"057217e72b388d2608ba733f970ffa25\") " pod="kube-system/kube-apiserver-localhost"
Sep 10 00:13:15.007887 kubelet[2471]: I0910 00:13:15.007880 2471 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/057217e72b388d2608ba733f970ffa25-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"057217e72b388d2608ba733f970ffa25\") " pod="kube-system/kube-apiserver-localhost"
Sep 10 00:13:15.008033 kubelet[2471]: I0910 00:13:15.007904 2471 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 10 00:13:15.008033 kubelet[2471]: I0910 00:13:15.007951 2471 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/057217e72b388d2608ba733f970ffa25-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"057217e72b388d2608ba733f970ffa25\") " pod="kube-system/kube-apiserver-localhost"
Sep 10 00:13:15.008033 kubelet[2471]: I0910 00:13:15.007980 2471 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 10 00:13:15.008098 kubelet[2471]: I0910 00:13:15.008050 2471 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 10 00:13:15.008098 kubelet[2471]: I0910 00:13:15.008067 2471 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 10 00:13:15.008137 kubelet[2471]: I0910 00:13:15.008096 2471 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 10 00:13:15.236765 kubelet[2471]: E0910 00:13:15.236642 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:13:15.236867 kubelet[2471]: E0910 00:13:15.236787 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:13:15.236894 kubelet[2471]: E0910 00:13:15.236864 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:13:15.801090 kubelet[2471]: I0910 00:13:15.801053 2471 apiserver.go:52] "Watching apiserver"
Sep 10 00:13:15.806859 kubelet[2471]: I0910 00:13:15.806834 2471 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Sep 10 00:13:15.846808 kubelet[2471]: E0910 00:13:15.846743 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:13:15.847707 kubelet[2471]: E0910 00:13:15.847681 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:13:15.854520 kubelet[2471]: E0910 00:13:15.853660 2471 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Sep 10 00:13:15.854520 kubelet[2471]: E0910 00:13:15.853781 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:13:15.866934 kubelet[2471]: I0910 00:13:15.866886 2471 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.866860811 podStartE2EDuration="1.866860811s" podCreationTimestamp="2025-09-10 00:13:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:13:15.866828065 +0000 UTC m=+1.123650798" watchObservedRunningTime="2025-09-10 00:13:15.866860811 +0000 UTC m=+1.123683544"
Sep 10 00:13:15.883369 kubelet[2471]: I0910 00:13:15.883324 2471 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.883309134 podStartE2EDuration="1.883309134s" podCreationTimestamp="2025-09-10 00:13:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:13:15.876634062 +0000 UTC m=+1.133456795" watchObservedRunningTime="2025-09-10 00:13:15.883309134 +0000 UTC m=+1.140131867"
Sep 10 00:13:15.893722 kubelet[2471]: I0910 00:13:15.892788 2471 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.892773038 podStartE2EDuration="1.892773038s" podCreationTimestamp="2025-09-10 00:13:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:13:15.883567542 +0000 UTC m=+1.140390235" watchObservedRunningTime="2025-09-10 00:13:15.892773038 +0000 UTC m=+1.149595771"
Sep 10 00:13:16.849095 kubelet[2471]: E0910 00:13:16.848754 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:13:17.176251 kubelet[2471]: E0910 00:13:17.176155 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:13:19.301850 kubelet[2471]: I0910 00:13:19.301818 2471 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 10 00:13:19.302256 containerd[1430]: time="2025-09-10T00:13:19.302168401Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 10 00:13:19.303143 kubelet[2471]: I0910 00:13:19.302594 2471 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 10 00:13:19.490296 kubelet[2471]: E0910 00:13:19.490250 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:13:20.322623 systemd[1]: Created slice kubepods-besteffort-pod70d66721_07e5_4593_a3fe_29388774a2da.slice - libcontainer container kubepods-besteffort-pod70d66721_07e5_4593_a3fe_29388774a2da.slice.
Sep 10 00:13:20.346600 kubelet[2471]: I0910 00:13:20.346557 2471 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/70d66721-07e5-4593-a3fe-29388774a2da-kube-proxy\") pod \"kube-proxy-zjbhq\" (UID: \"70d66721-07e5-4593-a3fe-29388774a2da\") " pod="kube-system/kube-proxy-zjbhq"
Sep 10 00:13:20.346600 kubelet[2471]: I0910 00:13:20.346599 2471 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/70d66721-07e5-4593-a3fe-29388774a2da-xtables-lock\") pod \"kube-proxy-zjbhq\" (UID: \"70d66721-07e5-4593-a3fe-29388774a2da\") " pod="kube-system/kube-proxy-zjbhq"
Sep 10 00:13:20.346964 kubelet[2471]: I0910 00:13:20.346616 2471 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/70d66721-07e5-4593-a3fe-29388774a2da-lib-modules\") pod \"kube-proxy-zjbhq\" (UID: \"70d66721-07e5-4593-a3fe-29388774a2da\") " pod="kube-system/kube-proxy-zjbhq"
Sep 10 00:13:20.346964 kubelet[2471]: I0910 00:13:20.346638 2471 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x562q\" (UniqueName: \"kubernetes.io/projected/70d66721-07e5-4593-a3fe-29388774a2da-kube-api-access-x562q\") pod \"kube-proxy-zjbhq\" (UID: \"70d66721-07e5-4593-a3fe-29388774a2da\") " pod="kube-system/kube-proxy-zjbhq"
Sep 10 00:13:20.385336 kubelet[2471]: E0910 00:13:20.385277 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:13:20.437743 systemd[1]: Created slice kubepods-besteffort-podead279e5_5260_488a_99c3_659ffa5f440b.slice - libcontainer container kubepods-besteffort-podead279e5_5260_488a_99c3_659ffa5f440b.slice.
Sep 10 00:13:20.447936 kubelet[2471]: I0910 00:13:20.447756 2471 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ead279e5-5260-488a-99c3-659ffa5f440b-var-lib-calico\") pod \"tigera-operator-58fc44c59b-crjj6\" (UID: \"ead279e5-5260-488a-99c3-659ffa5f440b\") " pod="tigera-operator/tigera-operator-58fc44c59b-crjj6"
Sep 10 00:13:20.447936 kubelet[2471]: I0910 00:13:20.447796 2471 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2f7t\" (UniqueName: \"kubernetes.io/projected/ead279e5-5260-488a-99c3-659ffa5f440b-kube-api-access-b2f7t\") pod \"tigera-operator-58fc44c59b-crjj6\" (UID: \"ead279e5-5260-488a-99c3-659ffa5f440b\") " pod="tigera-operator/tigera-operator-58fc44c59b-crjj6"
Sep 10 00:13:20.633326 kubelet[2471]: E0910 00:13:20.633295 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:13:20.633971 containerd[1430]: time="2025-09-10T00:13:20.633917092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zjbhq,Uid:70d66721-07e5-4593-a3fe-29388774a2da,Namespace:kube-system,Attempt:0,}"
Sep 10 00:13:20.652138 containerd[1430]: time="2025-09-10T00:13:20.652046923Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 10 00:13:20.652232 containerd[1430]: time="2025-09-10T00:13:20.652148526Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 10 00:13:20.652232 containerd[1430]: time="2025-09-10T00:13:20.652176767Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:13:20.652295 containerd[1430]: time="2025-09-10T00:13:20.652268050Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:13:20.673665 systemd[1]: Started cri-containerd-a0a994acec55cc3b28f965469226455c699bad6bfab0301a60aad9bf6571f76a.scope - libcontainer container a0a994acec55cc3b28f965469226455c699bad6bfab0301a60aad9bf6571f76a.
Sep 10 00:13:20.690036 containerd[1430]: time="2025-09-10T00:13:20.690000683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zjbhq,Uid:70d66721-07e5-4593-a3fe-29388774a2da,Namespace:kube-system,Attempt:0,} returns sandbox id \"a0a994acec55cc3b28f965469226455c699bad6bfab0301a60aad9bf6571f76a\""
Sep 10 00:13:20.690687 kubelet[2471]: E0910 00:13:20.690665 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:13:20.693616 containerd[1430]: time="2025-09-10T00:13:20.693578567Z" level=info msg="CreateContainer within sandbox \"a0a994acec55cc3b28f965469226455c699bad6bfab0301a60aad9bf6571f76a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 10 00:13:20.706034 containerd[1430]: time="2025-09-10T00:13:20.705987799Z" level=info msg="CreateContainer within sandbox \"a0a994acec55cc3b28f965469226455c699bad6bfab0301a60aad9bf6571f76a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bbaee31a7ae9a89fc15fa424d976ffce550dcf6f972046151fcedd4274339133\""
Sep 10 00:13:20.706706 containerd[1430]: time="2025-09-10T00:13:20.706647862Z" level=info msg="StartContainer for \"bbaee31a7ae9a89fc15fa424d976ffce550dcf6f972046151fcedd4274339133\""
Sep 10 00:13:20.732653 systemd[1]: Started cri-containerd-bbaee31a7ae9a89fc15fa424d976ffce550dcf6f972046151fcedd4274339133.scope - libcontainer container bbaee31a7ae9a89fc15fa424d976ffce550dcf6f972046151fcedd4274339133.
Sep 10 00:13:20.741441 containerd[1430]: time="2025-09-10T00:13:20.741405551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-crjj6,Uid:ead279e5-5260-488a-99c3-659ffa5f440b,Namespace:tigera-operator,Attempt:0,}"
Sep 10 00:13:20.756292 containerd[1430]: time="2025-09-10T00:13:20.756254228Z" level=info msg="StartContainer for \"bbaee31a7ae9a89fc15fa424d976ffce550dcf6f972046151fcedd4274339133\" returns successfully"
Sep 10 00:13:20.762517 containerd[1430]: time="2025-09-10T00:13:20.761921865Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 10 00:13:20.762517 containerd[1430]: time="2025-09-10T00:13:20.762299878Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 10 00:13:20.762517 containerd[1430]: time="2025-09-10T00:13:20.762312238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:13:20.762517 containerd[1430]: time="2025-09-10T00:13:20.762392841Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:13:20.779676 systemd[1]: Started cri-containerd-2ec804b0003878a4ab9276c905a8958a6497088a6355c0b7dd2e470d2a0f6cc2.scope - libcontainer container 2ec804b0003878a4ab9276c905a8958a6497088a6355c0b7dd2e470d2a0f6cc2.
Sep 10 00:13:20.810690 containerd[1430]: time="2025-09-10T00:13:20.810561077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-crjj6,Uid:ead279e5-5260-488a-99c3-659ffa5f440b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"2ec804b0003878a4ab9276c905a8958a6497088a6355c0b7dd2e470d2a0f6cc2\""
Sep 10 00:13:20.815596 containerd[1430]: time="2025-09-10T00:13:20.815414645Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\""
Sep 10 00:13:20.858088 kubelet[2471]: E0910 00:13:20.858052 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:13:20.859220 kubelet[2471]: E0910 00:13:20.859132 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:13:20.867685 kubelet[2471]: I0910 00:13:20.867627 2471 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zjbhq" podStartSLOduration=0.867612341 podStartE2EDuration="867.612341ms" podCreationTimestamp="2025-09-10 00:13:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:13:20.866983559 +0000 UTC m=+6.123806292" watchObservedRunningTime="2025-09-10 00:13:20.867612341 +0000 UTC m=+6.124435074"
Sep 10 00:13:22.233676 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3947421356.mount: Deactivated successfully.
Sep 10 00:13:22.599626 containerd[1430]: time="2025-09-10T00:13:22.599583989Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:13:22.600446 containerd[1430]: time="2025-09-10T00:13:22.600418895Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365"
Sep 10 00:13:22.601546 containerd[1430]: time="2025-09-10T00:13:22.601220880Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:13:22.603445 containerd[1430]: time="2025-09-10T00:13:22.603411668Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:13:22.604356 containerd[1430]: time="2025-09-10T00:13:22.604323456Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 1.78887117s"
Sep 10 00:13:22.604356 containerd[1430]: time="2025-09-10T00:13:22.604353857Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\""
Sep 10 00:13:22.606395 containerd[1430]: time="2025-09-10T00:13:22.606367320Z" level=info msg="CreateContainer within sandbox \"2ec804b0003878a4ab9276c905a8958a6497088a6355c0b7dd2e470d2a0f6cc2\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 10 00:13:22.616568 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4145378979.mount: Deactivated successfully.
Sep 10 00:13:22.619481 containerd[1430]: time="2025-09-10T00:13:22.619440647Z" level=info msg="CreateContainer within sandbox \"2ec804b0003878a4ab9276c905a8958a6497088a6355c0b7dd2e470d2a0f6cc2\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"b711c5239263604b99791795532d348e9ec24bc49e9998fc79ab70f630d225be\""
Sep 10 00:13:22.619882 containerd[1430]: time="2025-09-10T00:13:22.619858060Z" level=info msg="StartContainer for \"b711c5239263604b99791795532d348e9ec24bc49e9998fc79ab70f630d225be\""
Sep 10 00:13:22.648660 systemd[1]: Started cri-containerd-b711c5239263604b99791795532d348e9ec24bc49e9998fc79ab70f630d225be.scope - libcontainer container b711c5239263604b99791795532d348e9ec24bc49e9998fc79ab70f630d225be.
Sep 10 00:13:22.670209 containerd[1430]: time="2025-09-10T00:13:22.670111344Z" level=info msg="StartContainer for \"b711c5239263604b99791795532d348e9ec24bc49e9998fc79ab70f630d225be\" returns successfully"
Sep 10 00:13:22.869956 kubelet[2471]: I0910 00:13:22.869821 2471 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-crjj6" podStartSLOduration=1.076708285 podStartE2EDuration="2.8698054s" podCreationTimestamp="2025-09-10 00:13:20 +0000 UTC" firstStartedPulling="2025-09-10 00:13:20.811839201 +0000 UTC m=+6.068661934" lastFinishedPulling="2025-09-10 00:13:22.604936356 +0000 UTC m=+7.861759049" observedRunningTime="2025-09-10 00:13:22.869077418 +0000 UTC m=+8.125900151" watchObservedRunningTime="2025-09-10 00:13:22.8698054 +0000 UTC m=+8.126628133"
Sep 10 00:13:27.183910 kubelet[2471]: E0910 00:13:27.183853 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:13:27.732902 sudo[1611]: pam_unix(sudo:session): session closed for user root
Sep 10 00:13:27.739491 sshd[1608]: pam_unix(sshd:session): session closed for user core
Sep 10 00:13:27.743013 systemd[1]: sshd@6-10.0.0.106:22-10.0.0.1:48664.service: Deactivated successfully.
Sep 10 00:13:27.747164 systemd[1]: session-7.scope: Deactivated successfully.
Sep 10 00:13:27.747387 systemd[1]: session-7.scope: Consumed 7.082s CPU time, 150.2M memory peak, 0B memory swap peak.
Sep 10 00:13:27.749456 systemd-logind[1417]: Session 7 logged out. Waiting for processes to exit.
Sep 10 00:13:27.752104 systemd-logind[1417]: Removed session 7.
Sep 10 00:13:29.145631 update_engine[1418]: I20250910 00:13:29.145561 1418 update_attempter.cc:509] Updating boot flags...
Sep 10 00:13:29.171531 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2882)
Sep 10 00:13:29.238739 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2883)
Sep 10 00:13:29.294611 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2883)
Sep 10 00:13:29.505782 kubelet[2471]: E0910 00:13:29.505666 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:13:31.512583 systemd[1]: Created slice kubepods-besteffort-pod4369c713_241d_46c1_acc6_0a5d442280f2.slice - libcontainer container kubepods-besteffort-pod4369c713_241d_46c1_acc6_0a5d442280f2.slice.
Sep 10 00:13:31.518969 kubelet[2471]: I0910 00:13:31.518934 2471 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/4369c713-241d-46c1-acc6-0a5d442280f2-typha-certs\") pod \"calico-typha-7bdb699c94-x99d6\" (UID: \"4369c713-241d-46c1-acc6-0a5d442280f2\") " pod="calico-system/calico-typha-7bdb699c94-x99d6"
Sep 10 00:13:31.519454 kubelet[2471]: I0910 00:13:31.518975 2471 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4369c713-241d-46c1-acc6-0a5d442280f2-tigera-ca-bundle\") pod \"calico-typha-7bdb699c94-x99d6\" (UID: \"4369c713-241d-46c1-acc6-0a5d442280f2\") " pod="calico-system/calico-typha-7bdb699c94-x99d6"
Sep 10 00:13:31.519454 kubelet[2471]: I0910 00:13:31.518995 2471 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hd9fx\" (UniqueName: \"kubernetes.io/projected/4369c713-241d-46c1-acc6-0a5d442280f2-kube-api-access-hd9fx\") pod \"calico-typha-7bdb699c94-x99d6\" (UID: \"4369c713-241d-46c1-acc6-0a5d442280f2\") " pod="calico-system/calico-typha-7bdb699c94-x99d6"
Sep 10 00:13:31.817090 kubelet[2471]: E0910 00:13:31.816977 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:13:31.817761 containerd[1430]: time="2025-09-10T00:13:31.817708929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7bdb699c94-x99d6,Uid:4369c713-241d-46c1-acc6-0a5d442280f2,Namespace:calico-system,Attempt:0,}"
Sep 10 00:13:31.850667 systemd[1]: Created slice kubepods-besteffort-pod22e79169_98b3_47be_808e_ece6c1487630.slice - libcontainer container kubepods-besteffort-pod22e79169_98b3_47be_808e_ece6c1487630.slice.
Sep 10 00:13:31.901024 containerd[1430]: time="2025-09-10T00:13:31.900905465Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 10 00:13:31.901024 containerd[1430]: time="2025-09-10T00:13:31.900983627Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 10 00:13:31.901024 containerd[1430]: time="2025-09-10T00:13:31.900998867Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:13:31.901245 containerd[1430]: time="2025-09-10T00:13:31.901120390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:13:31.922008 kubelet[2471]: I0910 00:13:31.921806 2471 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/22e79169-98b3-47be-808e-ece6c1487630-var-run-calico\") pod \"calico-node-lkls6\" (UID: \"22e79169-98b3-47be-808e-ece6c1487630\") " pod="calico-system/calico-node-lkls6"
Sep 10 00:13:31.922008 kubelet[2471]: I0910 00:13:31.921854 2471 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/22e79169-98b3-47be-808e-ece6c1487630-xtables-lock\") pod \"calico-node-lkls6\" (UID: \"22e79169-98b3-47be-808e-ece6c1487630\") " pod="calico-system/calico-node-lkls6"
Sep 10 00:13:31.922008 kubelet[2471]: I0910 00:13:31.921874 2471 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/22e79169-98b3-47be-808e-ece6c1487630-lib-modules\") pod \"calico-node-lkls6\" (UID: \"22e79169-98b3-47be-808e-ece6c1487630\") " pod="calico-system/calico-node-lkls6"
Sep 10 00:13:31.922008 kubelet[2471]: I0910 00:13:31.921889 2471 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/22e79169-98b3-47be-808e-ece6c1487630-policysync\") pod \"calico-node-lkls6\" (UID: \"22e79169-98b3-47be-808e-ece6c1487630\") " pod="calico-system/calico-node-lkls6"
Sep 10 00:13:31.922008 kubelet[2471]: I0910 00:13:31.921904 2471 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/22e79169-98b3-47be-808e-ece6c1487630-var-lib-calico\") pod \"calico-node-lkls6\" (UID: \"22e79169-98b3-47be-808e-ece6c1487630\") " pod="calico-system/calico-node-lkls6"
Sep 10 00:13:31.922795 kubelet[2471]: I0910 00:13:31.921919 2471 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkbsg\" (UniqueName: \"kubernetes.io/projected/22e79169-98b3-47be-808e-ece6c1487630-kube-api-access-qkbsg\") pod \"calico-node-lkls6\" (UID: \"22e79169-98b3-47be-808e-ece6c1487630\") " pod="calico-system/calico-node-lkls6"
Sep 10 00:13:31.922795 kubelet[2471]: I0910 00:13:31.921935 2471 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/22e79169-98b3-47be-808e-ece6c1487630-flexvol-driver-host\") pod \"calico-node-lkls6\" (UID: \"22e79169-98b3-47be-808e-ece6c1487630\") " pod="calico-system/calico-node-lkls6"
Sep 10 00:13:31.922795 kubelet[2471]: I0910 00:13:31.921957 2471 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22e79169-98b3-47be-808e-ece6c1487630-tigera-ca-bundle\") pod \"calico-node-lkls6\" (UID: \"22e79169-98b3-47be-808e-ece6c1487630\") " pod="calico-system/calico-node-lkls6"
Sep 10 00:13:31.922795 kubelet[2471]: I0910 00:13:31.921972 2471 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/22e79169-98b3-47be-808e-ece6c1487630-cni-net-dir\") pod \"calico-node-lkls6\" (UID: \"22e79169-98b3-47be-808e-ece6c1487630\") " pod="calico-system/calico-node-lkls6"
Sep 10 00:13:31.922795 kubelet[2471]: I0910 00:13:31.922032 2471 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/22e79169-98b3-47be-808e-ece6c1487630-cni-bin-dir\") pod \"calico-node-lkls6\" (UID: \"22e79169-98b3-47be-808e-ece6c1487630\") " pod="calico-system/calico-node-lkls6"
Sep 10 00:13:31.922915 kubelet[2471]: I0910 00:13:31.922076 2471 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/22e79169-98b3-47be-808e-ece6c1487630-node-certs\") pod \"calico-node-lkls6\" (UID: \"22e79169-98b3-47be-808e-ece6c1487630\") " pod="calico-system/calico-node-lkls6"
Sep 10 00:13:31.922915 kubelet[2471]: I0910 00:13:31.922094 2471 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/22e79169-98b3-47be-808e-ece6c1487630-cni-log-dir\") pod \"calico-node-lkls6\" (UID: \"22e79169-98b3-47be-808e-ece6c1487630\") " pod="calico-system/calico-node-lkls6"
Sep 10 00:13:31.923041 systemd[1]: Started cri-containerd-920828207cb846f35af8ad7b373bc319e337130259daba622b3e2fef4e252799.scope - libcontainer container 920828207cb846f35af8ad7b373bc319e337130259daba622b3e2fef4e252799.
Sep 10 00:13:31.949373 containerd[1430]: time="2025-09-10T00:13:31.949335847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7bdb699c94-x99d6,Uid:4369c713-241d-46c1-acc6-0a5d442280f2,Namespace:calico-system,Attempt:0,} returns sandbox id \"920828207cb846f35af8ad7b373bc319e337130259daba622b3e2fef4e252799\"" Sep 10 00:13:31.950182 kubelet[2471]: E0910 00:13:31.949999 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:13:31.950924 containerd[1430]: time="2025-09-10T00:13:31.950889117Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 10 00:13:32.052033 kubelet[2471]: E0910 00:13:32.052006 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:13:32.052181 kubelet[2471]: W0910 00:13:32.052166 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:13:32.053939 kubelet[2471]: E0910 00:13:32.053893 2471 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:13:32.085912 kubelet[2471]: E0910 00:13:32.085860 2471 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6btnb" podUID="09c5ed53-4869-4b1c-8a65-b62ac3f88415" Sep 10 00:13:32.122426 kubelet[2471]: E0910 00:13:32.122392 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:13:32.122426 kubelet[2471]: W0910 00:13:32.122416 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:13:32.122426 kubelet[2471]: E0910 00:13:32.122435 2471 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:13:32.122608 kubelet[2471]: E0910 00:13:32.122575 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:13:32.122608 kubelet[2471]: W0910 00:13:32.122583 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:13:32.122608 kubelet[2471]: E0910 00:13:32.122592 2471 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:13:32.123286 kubelet[2471]: E0910 00:13:32.122712 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:13:32.123286 kubelet[2471]: W0910 00:13:32.122723 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:13:32.123286 kubelet[2471]: E0910 00:13:32.122731 2471 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:13:32.123286 kubelet[2471]: E0910 00:13:32.122863 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:13:32.123286 kubelet[2471]: W0910 00:13:32.122869 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:13:32.123286 kubelet[2471]: E0910 00:13:32.122877 2471 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:13:32.123286 kubelet[2471]: E0910 00:13:32.123000 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:13:32.123286 kubelet[2471]: W0910 00:13:32.123006 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:13:32.123286 kubelet[2471]: E0910 00:13:32.123013 2471 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:13:32.123286 kubelet[2471]: E0910 00:13:32.123127 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:13:32.123765 kubelet[2471]: W0910 00:13:32.123135 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:13:32.123765 kubelet[2471]: E0910 00:13:32.123149 2471 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:13:32.123765 kubelet[2471]: E0910 00:13:32.123265 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:13:32.123765 kubelet[2471]: W0910 00:13:32.123272 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:13:32.123765 kubelet[2471]: E0910 00:13:32.123280 2471 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:13:32.123765 kubelet[2471]: E0910 00:13:32.123421 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:13:32.123765 kubelet[2471]: W0910 00:13:32.123428 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:13:32.123765 kubelet[2471]: E0910 00:13:32.123436 2471 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:13:32.123765 kubelet[2471]: E0910 00:13:32.123600 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:13:32.123765 kubelet[2471]: W0910 00:13:32.123608 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:13:32.124194 kubelet[2471]: E0910 00:13:32.123616 2471 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:13:32.124194 kubelet[2471]: E0910 00:13:32.123802 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:13:32.124194 kubelet[2471]: W0910 00:13:32.123810 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:13:32.124194 kubelet[2471]: E0910 00:13:32.123817 2471 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:13:32.124194 kubelet[2471]: E0910 00:13:32.123944 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:13:32.124194 kubelet[2471]: W0910 00:13:32.123950 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:13:32.124194 kubelet[2471]: E0910 00:13:32.123962 2471 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:13:32.124194 kubelet[2471]: E0910 00:13:32.124085 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:13:32.124194 kubelet[2471]: W0910 00:13:32.124096 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:13:32.124194 kubelet[2471]: E0910 00:13:32.124105 2471 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:13:32.124468 kubelet[2471]: E0910 00:13:32.124421 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:13:32.124468 kubelet[2471]: W0910 00:13:32.124431 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:13:32.124468 kubelet[2471]: E0910 00:13:32.124442 2471 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:13:32.124743 kubelet[2471]: E0910 00:13:32.124727 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:13:32.124743 kubelet[2471]: W0910 00:13:32.124739 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:13:32.124859 kubelet[2471]: E0910 00:13:32.124750 2471 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:13:32.124916 kubelet[2471]: E0910 00:13:32.124900 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:13:32.124916 kubelet[2471]: W0910 00:13:32.124911 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:13:32.124963 kubelet[2471]: E0910 00:13:32.124918 2471 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:13:32.125066 kubelet[2471]: E0910 00:13:32.125048 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:13:32.125066 kubelet[2471]: W0910 00:13:32.125059 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:13:32.125066 kubelet[2471]: E0910 00:13:32.125067 2471 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:13:32.125226 kubelet[2471]: E0910 00:13:32.125209 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:13:32.125226 kubelet[2471]: W0910 00:13:32.125219 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:13:32.125226 kubelet[2471]: E0910 00:13:32.125227 2471 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:13:32.125370 kubelet[2471]: E0910 00:13:32.125357 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:13:32.125370 kubelet[2471]: W0910 00:13:32.125368 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:13:32.125894 kubelet[2471]: E0910 00:13:32.125375 2471 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:13:32.125894 kubelet[2471]: E0910 00:13:32.125626 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:13:32.125894 kubelet[2471]: W0910 00:13:32.125635 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:13:32.125894 kubelet[2471]: E0910 00:13:32.125644 2471 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:13:32.125894 kubelet[2471]: E0910 00:13:32.125814 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:13:32.125894 kubelet[2471]: W0910 00:13:32.125822 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:13:32.125894 kubelet[2471]: E0910 00:13:32.125830 2471 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:13:32.126484 kubelet[2471]: E0910 00:13:32.126072 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:13:32.126484 kubelet[2471]: W0910 00:13:32.126080 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:13:32.126484 kubelet[2471]: E0910 00:13:32.126088 2471 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:13:32.126484 kubelet[2471]: I0910 00:13:32.126112 2471 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/09c5ed53-4869-4b1c-8a65-b62ac3f88415-registration-dir\") pod \"csi-node-driver-6btnb\" (UID: \"09c5ed53-4869-4b1c-8a65-b62ac3f88415\") " pod="calico-system/csi-node-driver-6btnb" Sep 10 00:13:32.126484 kubelet[2471]: E0910 00:13:32.126265 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:13:32.126484 kubelet[2471]: W0910 00:13:32.126273 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:13:32.126484 kubelet[2471]: E0910 00:13:32.126280 2471 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:13:32.126484 kubelet[2471]: I0910 00:13:32.126293 2471 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r28fp\" (UniqueName: \"kubernetes.io/projected/09c5ed53-4869-4b1c-8a65-b62ac3f88415-kube-api-access-r28fp\") pod \"csi-node-driver-6btnb\" (UID: \"09c5ed53-4869-4b1c-8a65-b62ac3f88415\") " pod="calico-system/csi-node-driver-6btnb" Sep 10 00:13:32.126484 kubelet[2471]: E0910 00:13:32.126436 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:13:32.126680 kubelet[2471]: W0910 00:13:32.126444 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:13:32.126680 kubelet[2471]: E0910 00:13:32.126453 2471 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:13:32.126680 kubelet[2471]: I0910 00:13:32.126465 2471 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/09c5ed53-4869-4b1c-8a65-b62ac3f88415-varrun\") pod \"csi-node-driver-6btnb\" (UID: \"09c5ed53-4869-4b1c-8a65-b62ac3f88415\") " pod="calico-system/csi-node-driver-6btnb" Sep 10 00:13:32.128701 kubelet[2471]: E0910 00:13:32.126862 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:13:32.128701 kubelet[2471]: W0910 00:13:32.126874 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:13:32.128701 kubelet[2471]: E0910 00:13:32.126885 2471 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:13:32.128701 kubelet[2471]: I0910 00:13:32.126901 2471 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/09c5ed53-4869-4b1c-8a65-b62ac3f88415-socket-dir\") pod \"csi-node-driver-6btnb\" (UID: \"09c5ed53-4869-4b1c-8a65-b62ac3f88415\") " pod="calico-system/csi-node-driver-6btnb" Sep 10 00:13:32.128701 kubelet[2471]: E0910 00:13:32.127073 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:13:32.128701 kubelet[2471]: W0910 00:13:32.127082 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:13:32.128701 kubelet[2471]: E0910 00:13:32.127090 2471 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:13:32.128701 kubelet[2471]: I0910 00:13:32.127103 2471 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/09c5ed53-4869-4b1c-8a65-b62ac3f88415-kubelet-dir\") pod \"csi-node-driver-6btnb\" (UID: \"09c5ed53-4869-4b1c-8a65-b62ac3f88415\") " pod="calico-system/csi-node-driver-6btnb" Sep 10 00:13:32.130252 kubelet[2471]: E0910 00:13:32.130231 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:13:32.130252 kubelet[2471]: W0910 00:13:32.130249 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:13:32.130449 kubelet[2471]: E0910 00:13:32.130340 2471 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:13:32.130527 kubelet[2471]: E0910 00:13:32.130449 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:13:32.130527 kubelet[2471]: W0910 00:13:32.130458 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:13:32.130602 kubelet[2471]: E0910 00:13:32.130491 2471 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:13:32.131606 kubelet[2471]: E0910 00:13:32.131589 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:13:32.131606 kubelet[2471]: W0910 00:13:32.131606 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:13:32.131738 kubelet[2471]: E0910 00:13:32.131688 2471 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:13:32.131895 kubelet[2471]: E0910 00:13:32.131883 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:13:32.131895 kubelet[2471]: W0910 00:13:32.131895 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:13:32.131991 kubelet[2471]: E0910 00:13:32.131953 2471 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:13:32.132129 kubelet[2471]: E0910 00:13:32.132116 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:13:32.132705 kubelet[2471]: W0910 00:13:32.132684 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:13:32.132791 kubelet[2471]: E0910 00:13:32.132762 2471 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:13:32.132924 kubelet[2471]: E0910 00:13:32.132911 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:13:32.132924 kubelet[2471]: W0910 00:13:32.132923 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:13:32.132967 kubelet[2471]: E0910 00:13:32.132933 2471 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:13:32.133094 kubelet[2471]: E0910 00:13:32.133084 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:13:32.133129 kubelet[2471]: W0910 00:13:32.133094 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:13:32.133129 kubelet[2471]: E0910 00:13:32.133103 2471 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:13:32.133238 kubelet[2471]: E0910 00:13:32.133228 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:13:32.133264 kubelet[2471]: W0910 00:13:32.133237 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:13:32.133264 kubelet[2471]: E0910 00:13:32.133246 2471 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:13:32.133387 kubelet[2471]: E0910 00:13:32.133377 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:13:32.133419 kubelet[2471]: W0910 00:13:32.133388 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:13:32.133419 kubelet[2471]: E0910 00:13:32.133398 2471 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:13:32.133577 kubelet[2471]: E0910 00:13:32.133551 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:13:32.133577 kubelet[2471]: W0910 00:13:32.133561 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:13:32.133577 kubelet[2471]: E0910 00:13:32.133569 2471 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:13:32.157685 containerd[1430]: time="2025-09-10T00:13:32.157607668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lkls6,Uid:22e79169-98b3-47be-808e-ece6c1487630,Namespace:calico-system,Attempt:0,}" Sep 10 00:13:32.180720 containerd[1430]: time="2025-09-10T00:13:32.180466851Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:13:32.180720 containerd[1430]: time="2025-09-10T00:13:32.180680055Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:13:32.180720 containerd[1430]: time="2025-09-10T00:13:32.180693775Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:13:32.181181 containerd[1430]: time="2025-09-10T00:13:32.180783977Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:13:32.197699 systemd[1]: Started cri-containerd-f67d5ed4e8e14837a63324d56dc3a0f1102c3e3da50b56aff3b3df1ac14e71c1.scope - libcontainer container f67d5ed4e8e14837a63324d56dc3a0f1102c3e3da50b56aff3b3df1ac14e71c1. Sep 10 00:13:32.222691 containerd[1430]: time="2025-09-10T00:13:32.222643672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lkls6,Uid:22e79169-98b3-47be-808e-ece6c1487630,Namespace:calico-system,Attempt:0,} returns sandbox id \"f67d5ed4e8e14837a63324d56dc3a0f1102c3e3da50b56aff3b3df1ac14e71c1\"" Sep 10 00:13:32.228459 kubelet[2471]: E0910 00:13:32.228432 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:13:32.228459 kubelet[2471]: W0910 00:13:32.228454 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:13:32.228626 kubelet[2471]: E0910 00:13:32.228475 2471 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Sep 10 00:13:32.954723 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1854644617.mount: Deactivated successfully.
Sep 10 00:13:33.828481 kubelet[2471]: E0910 00:13:33.828304 2471 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6btnb" podUID="09c5ed53-4869-4b1c-8a65-b62ac3f88415"
Sep 10 00:13:34.277468 containerd[1430]: time="2025-09-10T00:13:34.277416197Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:13:34.278451 containerd[1430]: time="2025-09-10T00:13:34.278414814Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33105775"
Sep 10 00:13:34.280132 containerd[1430]: time="2025-09-10T00:13:34.279256468Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:13:34.281790 containerd[1430]: time="2025-09-10T00:13:34.281761430Z" level=info msg="ImageCreate 
event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:13:34.282273 containerd[1430]: time="2025-09-10T00:13:34.282239238Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 2.331212919s"
Sep 10 00:13:34.282321 containerd[1430]: time="2025-09-10T00:13:34.282272279Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\""
Sep 10 00:13:34.284660 containerd[1430]: time="2025-09-10T00:13:34.284597318Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\""
Sep 10 00:13:34.293071 containerd[1430]: time="2025-09-10T00:13:34.293031340Z" level=info msg="CreateContainer within sandbox \"920828207cb846f35af8ad7b373bc319e337130259daba622b3e2fef4e252799\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Sep 10 00:13:34.308899 containerd[1430]: time="2025-09-10T00:13:34.308857726Z" level=info msg="CreateContainer within sandbox \"920828207cb846f35af8ad7b373bc319e337130259daba622b3e2fef4e252799\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d360e591aefe58192aa55a290157b3eaecd5da3534d26605af0d5565270e9aa3\""
Sep 10 00:13:34.309484 containerd[1430]: time="2025-09-10T00:13:34.309457376Z" level=info msg="StartContainer for \"d360e591aefe58192aa55a290157b3eaecd5da3534d26605af0d5565270e9aa3\""
Sep 10 00:13:34.342698 systemd[1]: Started cri-containerd-d360e591aefe58192aa55a290157b3eaecd5da3534d26605af0d5565270e9aa3.scope - libcontainer container 
d360e591aefe58192aa55a290157b3eaecd5da3534d26605af0d5565270e9aa3.
Sep 10 00:13:34.429713 containerd[1430]: time="2025-09-10T00:13:34.429661718Z" level=info msg="StartContainer for \"d360e591aefe58192aa55a290157b3eaecd5da3534d26605af0d5565270e9aa3\" returns successfully"
Sep 10 00:13:34.895074 kubelet[2471]: E0910 00:13:34.894078 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:13:34.918734 kubelet[2471]: I0910 00:13:34.918464 2471 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7bdb699c94-x99d6" podStartSLOduration=1.585757594 podStartE2EDuration="3.918440338s" podCreationTimestamp="2025-09-10 00:13:31 +0000 UTC" firstStartedPulling="2025-09-10 00:13:31.950569431 +0000 UTC m=+17.207392164" lastFinishedPulling="2025-09-10 00:13:34.283252175 +0000 UTC m=+19.540074908" observedRunningTime="2025-09-10 00:13:34.906863783 +0000 UTC m=+20.163686516" watchObservedRunningTime="2025-09-10 00:13:34.918440338 +0000 UTC m=+20.175263071"
Sep 10 00:13:34.948760 kubelet[2471]: E0910 00:13:34.948726 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 00:13:34.948760 kubelet[2471]: W0910 00:13:34.948752 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 00:13:34.948925 kubelet[2471]: E0910 00:13:34.948771 2471 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Error: unexpected end of JSON input" Sep 10 00:13:34.954665 kubelet[2471]: E0910 00:13:34.954654 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:13:34.954665 kubelet[2471]: W0910 00:13:34.954664 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:13:34.954749 kubelet[2471]: E0910 00:13:34.954675 2471 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:13:34.955103 kubelet[2471]: E0910 00:13:34.955046 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:13:34.955103 kubelet[2471]: W0910 00:13:34.955061 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:13:34.955103 kubelet[2471]: E0910 00:13:34.955080 2471 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:13:34.955325 kubelet[2471]: E0910 00:13:34.955311 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:13:34.955325 kubelet[2471]: W0910 00:13:34.955325 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:13:34.955388 kubelet[2471]: E0910 00:13:34.955338 2471 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:13:34.955497 kubelet[2471]: E0910 00:13:34.955483 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:13:34.955497 kubelet[2471]: W0910 00:13:34.955493 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:13:34.955562 kubelet[2471]: E0910 00:13:34.955517 2471 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:13:34.955775 kubelet[2471]: E0910 00:13:34.955681 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:13:34.955775 kubelet[2471]: W0910 00:13:34.955692 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:13:34.955775 kubelet[2471]: E0910 00:13:34.955704 2471 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:13:34.955880 kubelet[2471]: E0910 00:13:34.955856 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:13:34.955880 kubelet[2471]: W0910 00:13:34.955875 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:13:34.955924 kubelet[2471]: E0910 00:13:34.955891 2471 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:13:34.956154 kubelet[2471]: E0910 00:13:34.956137 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:13:34.956154 kubelet[2471]: W0910 00:13:34.956152 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:13:34.956240 kubelet[2471]: E0910 00:13:34.956169 2471 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:13:34.956348 kubelet[2471]: E0910 00:13:34.956337 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:13:34.956348 kubelet[2471]: W0910 00:13:34.956346 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:13:34.956420 kubelet[2471]: E0910 00:13:34.956359 2471 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:13:34.956616 kubelet[2471]: E0910 00:13:34.956490 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:13:34.956616 kubelet[2471]: W0910 00:13:34.956542 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:13:34.956616 kubelet[2471]: E0910 00:13:34.956556 2471 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:13:34.956761 kubelet[2471]: E0910 00:13:34.956722 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:13:34.956761 kubelet[2471]: W0910 00:13:34.956734 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:13:34.956761 kubelet[2471]: E0910 00:13:34.956749 2471 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:13:34.956964 kubelet[2471]: E0910 00:13:34.956923 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:13:34.956964 kubelet[2471]: W0910 00:13:34.956935 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:13:34.956964 kubelet[2471]: E0910 00:13:34.956943 2471 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:13:34.957243 kubelet[2471]: E0910 00:13:34.957230 2471 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:13:34.957243 kubelet[2471]: W0910 00:13:34.957242 2471 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:13:34.957313 kubelet[2471]: E0910 00:13:34.957251 2471 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:13:35.341901 containerd[1430]: time="2025-09-10T00:13:35.341676276Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:13:35.342232 containerd[1430]: time="2025-09-10T00:13:35.342047082Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4266814" Sep 10 00:13:35.343977 containerd[1430]: time="2025-09-10T00:13:35.342826855Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:13:35.345125 containerd[1430]: time="2025-09-10T00:13:35.345092611Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:13:35.345842 containerd[1430]: time="2025-09-10T00:13:35.345801063Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 1.061147623s" Sep 10 00:13:35.345941 containerd[1430]: time="2025-09-10T00:13:35.345924625Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\"" Sep 10 00:13:35.348561 containerd[1430]: time="2025-09-10T00:13:35.348536786Z" level=info msg="CreateContainer within sandbox \"f67d5ed4e8e14837a63324d56dc3a0f1102c3e3da50b56aff3b3df1ac14e71c1\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 10 00:13:35.368276 containerd[1430]: time="2025-09-10T00:13:35.368233823Z" level=info msg="CreateContainer within sandbox \"f67d5ed4e8e14837a63324d56dc3a0f1102c3e3da50b56aff3b3df1ac14e71c1\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"fe6cbdb0c56e50c47591131c796521cbccf6fda4d2a0f7a0dc6ed89e70930f4c\"" Sep 10 00:13:35.368897 containerd[1430]: time="2025-09-10T00:13:35.368876353Z" level=info msg="StartContainer for \"fe6cbdb0c56e50c47591131c796521cbccf6fda4d2a0f7a0dc6ed89e70930f4c\"" Sep 10 00:13:35.401665 systemd[1]: Started cri-containerd-fe6cbdb0c56e50c47591131c796521cbccf6fda4d2a0f7a0dc6ed89e70930f4c.scope - libcontainer container fe6cbdb0c56e50c47591131c796521cbccf6fda4d2a0f7a0dc6ed89e70930f4c. Sep 10 00:13:35.435709 containerd[1430]: time="2025-09-10T00:13:35.435668665Z" level=info msg="StartContainer for \"fe6cbdb0c56e50c47591131c796521cbccf6fda4d2a0f7a0dc6ed89e70930f4c\" returns successfully" Sep 10 00:13:35.447749 systemd[1]: cri-containerd-fe6cbdb0c56e50c47591131c796521cbccf6fda4d2a0f7a0dc6ed89e70930f4c.scope: Deactivated successfully. 
Sep 10 00:13:35.505302 containerd[1430]: time="2025-09-10T00:13:35.501724726Z" level=info msg="shim disconnected" id=fe6cbdb0c56e50c47591131c796521cbccf6fda4d2a0f7a0dc6ed89e70930f4c namespace=k8s.io Sep 10 00:13:35.505302 containerd[1430]: time="2025-09-10T00:13:35.505300943Z" level=warning msg="cleaning up after shim disconnected" id=fe6cbdb0c56e50c47591131c796521cbccf6fda4d2a0f7a0dc6ed89e70930f4c namespace=k8s.io Sep 10 00:13:35.505576 containerd[1430]: time="2025-09-10T00:13:35.505317463Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 10 00:13:35.828920 kubelet[2471]: E0910 00:13:35.828788 2471 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6btnb" podUID="09c5ed53-4869-4b1c-8a65-b62ac3f88415" Sep 10 00:13:35.900164 kubelet[2471]: I0910 00:13:35.899624 2471 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 10 00:13:35.901328 kubelet[2471]: E0910 00:13:35.901251 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:13:35.901980 containerd[1430]: time="2025-09-10T00:13:35.901890630Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 10 00:13:36.292024 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fe6cbdb0c56e50c47591131c796521cbccf6fda4d2a0f7a0dc6ed89e70930f4c-rootfs.mount: Deactivated successfully. 
Sep 10 00:13:37.829123 kubelet[2471]: E0910 00:13:37.828719 2471 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6btnb" podUID="09c5ed53-4869-4b1c-8a65-b62ac3f88415" Sep 10 00:13:38.262354 containerd[1430]: time="2025-09-10T00:13:38.262305089Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:13:38.262791 containerd[1430]: time="2025-09-10T00:13:38.262758655Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477" Sep 10 00:13:38.263494 containerd[1430]: time="2025-09-10T00:13:38.263455385Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:13:38.265312 containerd[1430]: time="2025-09-10T00:13:38.265279051Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:13:38.266452 containerd[1430]: time="2025-09-10T00:13:38.266416507Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 2.364437955s" Sep 10 00:13:38.266492 containerd[1430]: time="2025-09-10T00:13:38.266451107Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference 
\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\"" Sep 10 00:13:38.268903 containerd[1430]: time="2025-09-10T00:13:38.268871941Z" level=info msg="CreateContainer within sandbox \"f67d5ed4e8e14837a63324d56dc3a0f1102c3e3da50b56aff3b3df1ac14e71c1\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 10 00:13:38.280583 containerd[1430]: time="2025-09-10T00:13:38.280541625Z" level=info msg="CreateContainer within sandbox \"f67d5ed4e8e14837a63324d56dc3a0f1102c3e3da50b56aff3b3df1ac14e71c1\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"9f71aa3ec4eac98efabb8f2cb0d0aab01a7336d9bb22d9c77444b7c768bff122\"" Sep 10 00:13:38.280958 containerd[1430]: time="2025-09-10T00:13:38.280920430Z" level=info msg="StartContainer for \"9f71aa3ec4eac98efabb8f2cb0d0aab01a7336d9bb22d9c77444b7c768bff122\"" Sep 10 00:13:38.308713 systemd[1]: Started cri-containerd-9f71aa3ec4eac98efabb8f2cb0d0aab01a7336d9bb22d9c77444b7c768bff122.scope - libcontainer container 9f71aa3ec4eac98efabb8f2cb0d0aab01a7336d9bb22d9c77444b7c768bff122. Sep 10 00:13:38.332764 containerd[1430]: time="2025-09-10T00:13:38.332516954Z" level=info msg="StartContainer for \"9f71aa3ec4eac98efabb8f2cb0d0aab01a7336d9bb22d9c77444b7c768bff122\" returns successfully" Sep 10 00:13:38.939915 systemd[1]: cri-containerd-9f71aa3ec4eac98efabb8f2cb0d0aab01a7336d9bb22d9c77444b7c768bff122.scope: Deactivated successfully. Sep 10 00:13:38.959636 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9f71aa3ec4eac98efabb8f2cb0d0aab01a7336d9bb22d9c77444b7c768bff122-rootfs.mount: Deactivated successfully. 
Sep 10 00:13:39.022680 kubelet[2471]: I0910 00:13:39.022617 2471 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 10 00:13:39.032197 containerd[1430]: time="2025-09-10T00:13:39.032041115Z" level=info msg="shim disconnected" id=9f71aa3ec4eac98efabb8f2cb0d0aab01a7336d9bb22d9c77444b7c768bff122 namespace=k8s.io Sep 10 00:13:39.032197 containerd[1430]: time="2025-09-10T00:13:39.032091476Z" level=warning msg="cleaning up after shim disconnected" id=9f71aa3ec4eac98efabb8f2cb0d0aab01a7336d9bb22d9c77444b7c768bff122 namespace=k8s.io Sep 10 00:13:39.032197 containerd[1430]: time="2025-09-10T00:13:39.032099476Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 10 00:13:39.063007 systemd[1]: Created slice kubepods-burstable-pod138bcf10_bfb3_4e83_915f_37b2d9c80ead.slice - libcontainer container kubepods-burstable-pod138bcf10_bfb3_4e83_915f_37b2d9c80ead.slice. Sep 10 00:13:39.073397 systemd[1]: Created slice kubepods-besteffort-pod2f58e97e_6e12_4135_b90c_0a1b0b407422.slice - libcontainer container kubepods-besteffort-pod2f58e97e_6e12_4135_b90c_0a1b0b407422.slice. 
Sep 10 00:13:39.083414 kubelet[2471]: I0910 00:13:39.083250 2471 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tp89x\" (UniqueName: \"kubernetes.io/projected/34aa417d-a639-4656-862d-aac7f831a9b9-kube-api-access-tp89x\") pod \"calico-apiserver-6bc6768489-gxbgn\" (UID: \"34aa417d-a639-4656-862d-aac7f831a9b9\") " pod="calico-apiserver/calico-apiserver-6bc6768489-gxbgn" Sep 10 00:13:39.083414 kubelet[2471]: I0910 00:13:39.083297 2471 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f58e97e-6e12-4135-b90c-0a1b0b407422-tigera-ca-bundle\") pod \"calico-kube-controllers-7f764b5b64-dd89z\" (UID: \"2f58e97e-6e12-4135-b90c-0a1b0b407422\") " pod="calico-system/calico-kube-controllers-7f764b5b64-dd89z" Sep 10 00:13:39.083414 kubelet[2471]: I0910 00:13:39.083315 2471 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e21db204-019f-4e2a-9a77-0eef0c7e2f3d-whisker-ca-bundle\") pod \"whisker-78574f96f6-6lwj2\" (UID: \"e21db204-019f-4e2a-9a77-0eef0c7e2f3d\") " pod="calico-system/whisker-78574f96f6-6lwj2" Sep 10 00:13:39.083414 kubelet[2471]: I0910 00:13:39.083331 2471 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1a71852-d2fe-4382-8a25-a3d286247a75-config\") pod \"goldmane-7988f88666-9zjzp\" (UID: \"d1a71852-d2fe-4382-8a25-a3d286247a75\") " pod="calico-system/goldmane-7988f88666-9zjzp" Sep 10 00:13:39.083414 kubelet[2471]: I0910 00:13:39.083350 2471 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2gfj\" (UniqueName: \"kubernetes.io/projected/778ab3ef-74e3-4341-b35b-556c4e8acdd5-kube-api-access-f2gfj\") pod \"coredns-7c65d6cfc9-fprdx\" (UID: 
\"778ab3ef-74e3-4341-b35b-556c4e8acdd5\") " pod="kube-system/coredns-7c65d6cfc9-fprdx" Sep 10 00:13:39.083670 kubelet[2471]: I0910 00:13:39.083367 2471 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gt2mq\" (UniqueName: \"kubernetes.io/projected/e21db204-019f-4e2a-9a77-0eef0c7e2f3d-kube-api-access-gt2mq\") pod \"whisker-78574f96f6-6lwj2\" (UID: \"e21db204-019f-4e2a-9a77-0eef0c7e2f3d\") " pod="calico-system/whisker-78574f96f6-6lwj2" Sep 10 00:13:39.083670 kubelet[2471]: I0910 00:13:39.083385 2471 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbxn5\" (UniqueName: \"kubernetes.io/projected/138bcf10-bfb3-4e83-915f-37b2d9c80ead-kube-api-access-mbxn5\") pod \"coredns-7c65d6cfc9-9575c\" (UID: \"138bcf10-bfb3-4e83-915f-37b2d9c80ead\") " pod="kube-system/coredns-7c65d6cfc9-9575c" Sep 10 00:13:39.083670 kubelet[2471]: I0910 00:13:39.083431 2471 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/778ab3ef-74e3-4341-b35b-556c4e8acdd5-config-volume\") pod \"coredns-7c65d6cfc9-fprdx\" (UID: \"778ab3ef-74e3-4341-b35b-556c4e8acdd5\") " pod="kube-system/coredns-7c65d6cfc9-fprdx" Sep 10 00:13:39.083670 kubelet[2471]: I0910 00:13:39.083466 2471 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e21db204-019f-4e2a-9a77-0eef0c7e2f3d-whisker-backend-key-pair\") pod \"whisker-78574f96f6-6lwj2\" (UID: \"e21db204-019f-4e2a-9a77-0eef0c7e2f3d\") " pod="calico-system/whisker-78574f96f6-6lwj2" Sep 10 00:13:39.083670 kubelet[2471]: I0910 00:13:39.083493 2471 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: 
\"kubernetes.io/secret/d1a71852-d2fe-4382-8a25-a3d286247a75-goldmane-key-pair\") pod \"goldmane-7988f88666-9zjzp\" (UID: \"d1a71852-d2fe-4382-8a25-a3d286247a75\") " pod="calico-system/goldmane-7988f88666-9zjzp" Sep 10 00:13:39.083779 kubelet[2471]: I0910 00:13:39.083566 2471 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8d4gg\" (UniqueName: \"kubernetes.io/projected/279a2bd1-e5f8-4ed5-bbcf-24ce06e302a6-kube-api-access-8d4gg\") pod \"calico-apiserver-6bc6768489-w628d\" (UID: \"279a2bd1-e5f8-4ed5-bbcf-24ce06e302a6\") " pod="calico-apiserver/calico-apiserver-6bc6768489-w628d" Sep 10 00:13:39.083779 kubelet[2471]: I0910 00:13:39.083583 2471 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/138bcf10-bfb3-4e83-915f-37b2d9c80ead-config-volume\") pod \"coredns-7c65d6cfc9-9575c\" (UID: \"138bcf10-bfb3-4e83-915f-37b2d9c80ead\") " pod="kube-system/coredns-7c65d6cfc9-9575c" Sep 10 00:13:39.083779 kubelet[2471]: I0910 00:13:39.083600 2471 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdkk9\" (UniqueName: \"kubernetes.io/projected/d1a71852-d2fe-4382-8a25-a3d286247a75-kube-api-access-kdkk9\") pod \"goldmane-7988f88666-9zjzp\" (UID: \"d1a71852-d2fe-4382-8a25-a3d286247a75\") " pod="calico-system/goldmane-7988f88666-9zjzp" Sep 10 00:13:39.083779 kubelet[2471]: I0910 00:13:39.083627 2471 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/34aa417d-a639-4656-862d-aac7f831a9b9-calico-apiserver-certs\") pod \"calico-apiserver-6bc6768489-gxbgn\" (UID: \"34aa417d-a639-4656-862d-aac7f831a9b9\") " pod="calico-apiserver/calico-apiserver-6bc6768489-gxbgn" Sep 10 00:13:39.083779 kubelet[2471]: I0910 00:13:39.083644 2471 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps24m\" (UniqueName: \"kubernetes.io/projected/2f58e97e-6e12-4135-b90c-0a1b0b407422-kube-api-access-ps24m\") pod \"calico-kube-controllers-7f764b5b64-dd89z\" (UID: \"2f58e97e-6e12-4135-b90c-0a1b0b407422\") " pod="calico-system/calico-kube-controllers-7f764b5b64-dd89z" Sep 10 00:13:39.083938 kubelet[2471]: I0910 00:13:39.083660 2471 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/279a2bd1-e5f8-4ed5-bbcf-24ce06e302a6-calico-apiserver-certs\") pod \"calico-apiserver-6bc6768489-w628d\" (UID: \"279a2bd1-e5f8-4ed5-bbcf-24ce06e302a6\") " pod="calico-apiserver/calico-apiserver-6bc6768489-w628d" Sep 10 00:13:39.083938 kubelet[2471]: I0910 00:13:39.083674 2471 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1a71852-d2fe-4382-8a25-a3d286247a75-goldmane-ca-bundle\") pod \"goldmane-7988f88666-9zjzp\" (UID: \"d1a71852-d2fe-4382-8a25-a3d286247a75\") " pod="calico-system/goldmane-7988f88666-9zjzp" Sep 10 00:13:39.084122 systemd[1]: Created slice kubepods-burstable-pod778ab3ef_74e3_4341_b35b_556c4e8acdd5.slice - libcontainer container kubepods-burstable-pod778ab3ef_74e3_4341_b35b_556c4e8acdd5.slice. Sep 10 00:13:39.090393 systemd[1]: Created slice kubepods-besteffort-pod279a2bd1_e5f8_4ed5_bbcf_24ce06e302a6.slice - libcontainer container kubepods-besteffort-pod279a2bd1_e5f8_4ed5_bbcf_24ce06e302a6.slice. Sep 10 00:13:39.096356 systemd[1]: Created slice kubepods-besteffort-podd1a71852_d2fe_4382_8a25_a3d286247a75.slice - libcontainer container kubepods-besteffort-podd1a71852_d2fe_4382_8a25_a3d286247a75.slice. 
Sep 10 00:13:39.102305 systemd[1]: Created slice kubepods-besteffort-pode21db204_019f_4e2a_9a77_0eef0c7e2f3d.slice - libcontainer container kubepods-besteffort-pode21db204_019f_4e2a_9a77_0eef0c7e2f3d.slice. Sep 10 00:13:39.108916 systemd[1]: Created slice kubepods-besteffort-pod34aa417d_a639_4656_862d_aac7f831a9b9.slice - libcontainer container kubepods-besteffort-pod34aa417d_a639_4656_862d_aac7f831a9b9.slice. Sep 10 00:13:39.373543 kubelet[2471]: E0910 00:13:39.373388 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:13:39.374780 containerd[1430]: time="2025-09-10T00:13:39.374734922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-9575c,Uid:138bcf10-bfb3-4e83-915f-37b2d9c80ead,Namespace:kube-system,Attempt:0,}" Sep 10 00:13:39.377604 containerd[1430]: time="2025-09-10T00:13:39.377572360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f764b5b64-dd89z,Uid:2f58e97e-6e12-4135-b90c-0a1b0b407422,Namespace:calico-system,Attempt:0,}" Sep 10 00:13:39.387461 kubelet[2471]: E0910 00:13:39.387402 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:13:39.388415 containerd[1430]: time="2025-09-10T00:13:39.388102182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-fprdx,Uid:778ab3ef-74e3-4341-b35b-556c4e8acdd5,Namespace:kube-system,Attempt:0,}" Sep 10 00:13:39.394740 containerd[1430]: time="2025-09-10T00:13:39.394698751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bc6768489-w628d,Uid:279a2bd1-e5f8-4ed5-bbcf-24ce06e302a6,Namespace:calico-apiserver,Attempt:0,}" Sep 10 00:13:39.401030 containerd[1430]: time="2025-09-10T00:13:39.400382307Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-7988f88666-9zjzp,Uid:d1a71852-d2fe-4382-8a25-a3d286247a75,Namespace:calico-system,Attempt:0,}" Sep 10 00:13:39.412070 containerd[1430]: time="2025-09-10T00:13:39.412013543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-78574f96f6-6lwj2,Uid:e21db204-019f-4e2a-9a77-0eef0c7e2f3d,Namespace:calico-system,Attempt:0,}" Sep 10 00:13:39.414246 containerd[1430]: time="2025-09-10T00:13:39.414199773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bc6768489-gxbgn,Uid:34aa417d-a639-4656-862d-aac7f831a9b9,Namespace:calico-apiserver,Attempt:0,}" Sep 10 00:13:39.541485 containerd[1430]: time="2025-09-10T00:13:39.541428163Z" level=error msg="Failed to destroy network for sandbox \"d62eebdda38d4548d2e3f52b4f8523eec446425cf32213f102134b444b23ebf7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:13:39.542410 containerd[1430]: time="2025-09-10T00:13:39.542350536Z" level=error msg="Failed to destroy network for sandbox \"1d818c7a41fd63b7d4631ae852b8897391bd867cf4a341f468a4a1dc715dec5d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:13:39.543233 containerd[1430]: time="2025-09-10T00:13:39.543187787Z" level=error msg="encountered an error cleaning up failed sandbox \"d62eebdda38d4548d2e3f52b4f8523eec446425cf32213f102134b444b23ebf7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:13:39.543304 containerd[1430]: time="2025-09-10T00:13:39.543251508Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-fprdx,Uid:778ab3ef-74e3-4341-b35b-556c4e8acdd5,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d62eebdda38d4548d2e3f52b4f8523eec446425cf32213f102134b444b23ebf7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 10 00:13:39.543331 containerd[1430]: time="2025-09-10T00:13:39.543209547Z" level=error msg="encountered an error cleaning up failed sandbox \"1d818c7a41fd63b7d4631ae852b8897391bd867cf4a341f468a4a1dc715dec5d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 10 00:13:39.543392 containerd[1430]: time="2025-09-10T00:13:39.543363109Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-9575c,Uid:138bcf10-bfb3-4e83-915f-37b2d9c80ead,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1d818c7a41fd63b7d4631ae852b8897391bd867cf4a341f468a4a1dc715dec5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 10 00:13:39.545696 kubelet[2471]: E0910 00:13:39.545639 2471 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d818c7a41fd63b7d4631ae852b8897391bd867cf4a341f468a4a1dc715dec5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 10 00:13:39.546043 kubelet[2471]: E0910 00:13:39.545635 2471 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d62eebdda38d4548d2e3f52b4f8523eec446425cf32213f102134b444b23ebf7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 10 00:13:39.546563 kubelet[2471]: E0910 00:13:39.546279 2471 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d62eebdda38d4548d2e3f52b4f8523eec446425cf32213f102134b444b23ebf7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-fprdx"
Sep 10 00:13:39.546563 kubelet[2471]: E0910 00:13:39.546321 2471 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d62eebdda38d4548d2e3f52b4f8523eec446425cf32213f102134b444b23ebf7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-fprdx"
Sep 10 00:13:39.546563 kubelet[2471]: E0910 00:13:39.546423 2471 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-fprdx_kube-system(778ab3ef-74e3-4341-b35b-556c4e8acdd5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-fprdx_kube-system(778ab3ef-74e3-4341-b35b-556c4e8acdd5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d62eebdda38d4548d2e3f52b4f8523eec446425cf32213f102134b444b23ebf7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-fprdx" podUID="778ab3ef-74e3-4341-b35b-556c4e8acdd5"
Sep 10 00:13:39.547120 containerd[1430]: time="2025-09-10T00:13:39.546097106Z" level=error msg="Failed to destroy network for sandbox \"2e115a12ea45474c3c9e17a2166489c5cf6672f00d5896ee6a3a6628774baf40\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 10 00:13:39.548188 kubelet[2471]: E0910 00:13:39.547544 2471 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d818c7a41fd63b7d4631ae852b8897391bd867cf4a341f468a4a1dc715dec5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-9575c"
Sep 10 00:13:39.548188 kubelet[2471]: E0910 00:13:39.547586 2471 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d818c7a41fd63b7d4631ae852b8897391bd867cf4a341f468a4a1dc715dec5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-9575c"
Sep 10 00:13:39.548188 kubelet[2471]: E0910 00:13:39.547630 2471 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-9575c_kube-system(138bcf10-bfb3-4e83-915f-37b2d9c80ead)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-9575c_kube-system(138bcf10-bfb3-4e83-915f-37b2d9c80ead)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1d818c7a41fd63b7d4631ae852b8897391bd867cf4a341f468a4a1dc715dec5d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-9575c" podUID="138bcf10-bfb3-4e83-915f-37b2d9c80ead"
Sep 10 00:13:39.548340 containerd[1430]: time="2025-09-10T00:13:39.547897850Z" level=error msg="encountered an error cleaning up failed sandbox \"2e115a12ea45474c3c9e17a2166489c5cf6672f00d5896ee6a3a6628774baf40\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 10 00:13:39.548340 containerd[1430]: time="2025-09-10T00:13:39.547945971Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f764b5b64-dd89z,Uid:2f58e97e-6e12-4135-b90c-0a1b0b407422,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2e115a12ea45474c3c9e17a2166489c5cf6672f00d5896ee6a3a6628774baf40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 10 00:13:39.548408 kubelet[2471]: E0910 00:13:39.548120 2471 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e115a12ea45474c3c9e17a2166489c5cf6672f00d5896ee6a3a6628774baf40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 10 00:13:39.548408 kubelet[2471]: E0910 00:13:39.548153 2471 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e115a12ea45474c3c9e17a2166489c5cf6672f00d5896ee6a3a6628774baf40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f764b5b64-dd89z"
Sep 10 00:13:39.548408 kubelet[2471]: E0910 00:13:39.548168 2471 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e115a12ea45474c3c9e17a2166489c5cf6672f00d5896ee6a3a6628774baf40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f764b5b64-dd89z"
Sep 10 00:13:39.548481 kubelet[2471]: E0910 00:13:39.548195 2471 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7f764b5b64-dd89z_calico-system(2f58e97e-6e12-4135-b90c-0a1b0b407422)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7f764b5b64-dd89z_calico-system(2f58e97e-6e12-4135-b90c-0a1b0b407422)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2e115a12ea45474c3c9e17a2166489c5cf6672f00d5896ee6a3a6628774baf40\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7f764b5b64-dd89z" podUID="2f58e97e-6e12-4135-b90c-0a1b0b407422"
Sep 10 00:13:39.561669 containerd[1430]: time="2025-09-10T00:13:39.561525993Z" level=error msg="Failed to destroy network for sandbox \"deb389a8cde9c78f119ddad7b44b9900de9441dbb8ede2f6cd31993d7b7967c8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 10 00:13:39.562123 containerd[1430]: time="2025-09-10T00:13:39.562097721Z" level=error msg="encountered an error cleaning up failed sandbox \"deb389a8cde9c78f119ddad7b44b9900de9441dbb8ede2f6cd31993d7b7967c8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 10 00:13:39.562123 containerd[1430]: time="2025-09-10T00:13:39.562172602Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-78574f96f6-6lwj2,Uid:e21db204-019f-4e2a-9a77-0eef0c7e2f3d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"deb389a8cde9c78f119ddad7b44b9900de9441dbb8ede2f6cd31993d7b7967c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 10 00:13:39.562696 kubelet[2471]: E0910 00:13:39.562651 2471 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"deb389a8cde9c78f119ddad7b44b9900de9441dbb8ede2f6cd31993d7b7967c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 10 00:13:39.562761 kubelet[2471]: E0910 00:13:39.562711 2471 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"deb389a8cde9c78f119ddad7b44b9900de9441dbb8ede2f6cd31993d7b7967c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-78574f96f6-6lwj2"
Sep 10 00:13:39.562761 kubelet[2471]: E0910 00:13:39.562729 2471 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"deb389a8cde9c78f119ddad7b44b9900de9441dbb8ede2f6cd31993d7b7967c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-78574f96f6-6lwj2"
Sep 10 00:13:39.562828 kubelet[2471]: E0910 00:13:39.562765 2471 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-78574f96f6-6lwj2_calico-system(e21db204-019f-4e2a-9a77-0eef0c7e2f3d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-78574f96f6-6lwj2_calico-system(e21db204-019f-4e2a-9a77-0eef0c7e2f3d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"deb389a8cde9c78f119ddad7b44b9900de9441dbb8ede2f6cd31993d7b7967c8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-78574f96f6-6lwj2" podUID="e21db204-019f-4e2a-9a77-0eef0c7e2f3d"
Sep 10 00:13:39.574903 containerd[1430]: time="2025-09-10T00:13:39.574829212Z" level=error msg="Failed to destroy network for sandbox \"2d43f127c1ddba66236343c437ddd27b49d78d0eaa7075097b8975be4eb06d4c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 10 00:13:39.575711 containerd[1430]: time="2025-09-10T00:13:39.575492901Z" level=error msg="encountered an error cleaning up failed sandbox \"2d43f127c1ddba66236343c437ddd27b49d78d0eaa7075097b8975be4eb06d4c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 10 00:13:39.575711 containerd[1430]: time="2025-09-10T00:13:39.575673184Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bc6768489-gxbgn,Uid:34aa417d-a639-4656-862d-aac7f831a9b9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2d43f127c1ddba66236343c437ddd27b49d78d0eaa7075097b8975be4eb06d4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 10 00:13:39.575820 containerd[1430]: time="2025-09-10T00:13:39.575697904Z" level=error msg="Failed to destroy network for sandbox \"472415d550a77d5a0800f54596aec97171fd5f8401eee0a00519943d393e801f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 10 00:13:39.576064 kubelet[2471]: E0910 00:13:39.576025 2471 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d43f127c1ddba66236343c437ddd27b49d78d0eaa7075097b8975be4eb06d4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 10 00:13:39.576124 kubelet[2471]: E0910 00:13:39.576077 2471 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d43f127c1ddba66236343c437ddd27b49d78d0eaa7075097b8975be4eb06d4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6bc6768489-gxbgn"
Sep 10 00:13:39.576124 kubelet[2471]: E0910 00:13:39.576096 2471 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d43f127c1ddba66236343c437ddd27b49d78d0eaa7075097b8975be4eb06d4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6bc6768489-gxbgn"
Sep 10 00:13:39.576175 kubelet[2471]: E0910 00:13:39.576133 2471 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6bc6768489-gxbgn_calico-apiserver(34aa417d-a639-4656-862d-aac7f831a9b9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6bc6768489-gxbgn_calico-apiserver(34aa417d-a639-4656-862d-aac7f831a9b9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2d43f127c1ddba66236343c437ddd27b49d78d0eaa7075097b8975be4eb06d4c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6bc6768489-gxbgn" podUID="34aa417d-a639-4656-862d-aac7f831a9b9"
Sep 10 00:13:39.577556 containerd[1430]: time="2025-09-10T00:13:39.576450514Z" level=error msg="encountered an error cleaning up failed sandbox \"472415d550a77d5a0800f54596aec97171fd5f8401eee0a00519943d393e801f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 10 00:13:39.577556 containerd[1430]: time="2025-09-10T00:13:39.576559155Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-9zjzp,Uid:d1a71852-d2fe-4382-8a25-a3d286247a75,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"472415d550a77d5a0800f54596aec97171fd5f8401eee0a00519943d393e801f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 10 00:13:39.577772 kubelet[2471]: E0910 00:13:39.576750 2471 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"472415d550a77d5a0800f54596aec97171fd5f8401eee0a00519943d393e801f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 10 00:13:39.577772 kubelet[2471]: E0910 00:13:39.576782 2471 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"472415d550a77d5a0800f54596aec97171fd5f8401eee0a00519943d393e801f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-9zjzp"
Sep 10 00:13:39.577772 kubelet[2471]: E0910 00:13:39.576796 2471 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"472415d550a77d5a0800f54596aec97171fd5f8401eee0a00519943d393e801f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-9zjzp"
Sep 10 00:13:39.578048 kubelet[2471]: E0910 00:13:39.576823 2471 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-9zjzp_calico-system(d1a71852-d2fe-4382-8a25-a3d286247a75)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-9zjzp_calico-system(d1a71852-d2fe-4382-8a25-a3d286247a75)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"472415d550a77d5a0800f54596aec97171fd5f8401eee0a00519943d393e801f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-9zjzp" podUID="d1a71852-d2fe-4382-8a25-a3d286247a75"
Sep 10 00:13:39.587662 containerd[1430]: time="2025-09-10T00:13:39.587616624Z" level=error msg="Failed to destroy network for sandbox \"89e42bdb4ebafba989d2b80772fba6857a23b7ca53958659e5e44d05fdab7658\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 10 00:13:39.587990 containerd[1430]: time="2025-09-10T00:13:39.587942188Z" level=error msg="encountered an error cleaning up failed sandbox \"89e42bdb4ebafba989d2b80772fba6857a23b7ca53958659e5e44d05fdab7658\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 10 00:13:39.588035 containerd[1430]: time="2025-09-10T00:13:39.588006109Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bc6768489-w628d,Uid:279a2bd1-e5f8-4ed5-bbcf-24ce06e302a6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"89e42bdb4ebafba989d2b80772fba6857a23b7ca53958659e5e44d05fdab7658\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 10 00:13:39.588233 kubelet[2471]: E0910 00:13:39.588195 2471 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89e42bdb4ebafba989d2b80772fba6857a23b7ca53958659e5e44d05fdab7658\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 10 00:13:39.588452 kubelet[2471]: E0910 00:13:39.588322 2471 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89e42bdb4ebafba989d2b80772fba6857a23b7ca53958659e5e44d05fdab7658\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6bc6768489-w628d"
Sep 10 00:13:39.588452 kubelet[2471]: E0910 00:13:39.588344 2471 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89e42bdb4ebafba989d2b80772fba6857a23b7ca53958659e5e44d05fdab7658\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6bc6768489-w628d"
Sep 10 00:13:39.588691 kubelet[2471]: E0910 00:13:39.588595 2471 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6bc6768489-w628d_calico-apiserver(279a2bd1-e5f8-4ed5-bbcf-24ce06e302a6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6bc6768489-w628d_calico-apiserver(279a2bd1-e5f8-4ed5-bbcf-24ce06e302a6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"89e42bdb4ebafba989d2b80772fba6857a23b7ca53958659e5e44d05fdab7658\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6bc6768489-w628d" podUID="279a2bd1-e5f8-4ed5-bbcf-24ce06e302a6"
Sep 10 00:13:39.835286 systemd[1]: Created slice kubepods-besteffort-pod09c5ed53_4869_4b1c_8a65_b62ac3f88415.slice - libcontainer container kubepods-besteffort-pod09c5ed53_4869_4b1c_8a65_b62ac3f88415.slice.
Sep 10 00:13:39.837613 containerd[1430]: time="2025-09-10T00:13:39.837579065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6btnb,Uid:09c5ed53-4869-4b1c-8a65-b62ac3f88415,Namespace:calico-system,Attempt:0,}"
Sep 10 00:13:39.889343 containerd[1430]: time="2025-09-10T00:13:39.889163558Z" level=error msg="Failed to destroy network for sandbox \"d69084ff2923cc958b95e61a07b97aed2f0bbea5caddd3a838f8758611a00248\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 10 00:13:39.890305 containerd[1430]: time="2025-09-10T00:13:39.889925648Z" level=error msg="encountered an error cleaning up failed sandbox \"d69084ff2923cc958b95e61a07b97aed2f0bbea5caddd3a838f8758611a00248\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 10 00:13:39.890305 containerd[1430]: time="2025-09-10T00:13:39.889981649Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6btnb,Uid:09c5ed53-4869-4b1c-8a65-b62ac3f88415,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d69084ff2923cc958b95e61a07b97aed2f0bbea5caddd3a838f8758611a00248\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 10 00:13:39.890666 kubelet[2471]: E0910 00:13:39.890168 2471 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d69084ff2923cc958b95e61a07b97aed2f0bbea5caddd3a838f8758611a00248\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 10 00:13:39.890666 kubelet[2471]: E0910 00:13:39.890216 2471 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d69084ff2923cc958b95e61a07b97aed2f0bbea5caddd3a838f8758611a00248\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6btnb"
Sep 10 00:13:39.890666 kubelet[2471]: E0910 00:13:39.890233 2471 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d69084ff2923cc958b95e61a07b97aed2f0bbea5caddd3a838f8758611a00248\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6btnb"
Sep 10 00:13:39.890903 kubelet[2471]: E0910 00:13:39.890268 2471 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-6btnb_calico-system(09c5ed53-4869-4b1c-8a65-b62ac3f88415)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-6btnb_calico-system(09c5ed53-4869-4b1c-8a65-b62ac3f88415)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d69084ff2923cc958b95e61a07b97aed2f0bbea5caddd3a838f8758611a00248\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6btnb" podUID="09c5ed53-4869-4b1c-8a65-b62ac3f88415"
Sep 10 00:13:39.907641 kubelet[2471]: I0910 00:13:39.907603 2471 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d69084ff2923cc958b95e61a07b97aed2f0bbea5caddd3a838f8758611a00248"
Sep 10 00:13:39.908789 containerd[1430]: time="2025-09-10T00:13:39.908707261Z" level=info msg="StopPodSandbox for \"d69084ff2923cc958b95e61a07b97aed2f0bbea5caddd3a838f8758611a00248\""
Sep 10 00:13:39.908919 containerd[1430]: time="2025-09-10T00:13:39.908895423Z" level=info msg="Ensure that sandbox d69084ff2923cc958b95e61a07b97aed2f0bbea5caddd3a838f8758611a00248 in task-service has been cleanup successfully"
Sep 10 00:13:39.911080 kubelet[2471]: I0910 00:13:39.910880 2471 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e115a12ea45474c3c9e17a2166489c5cf6672f00d5896ee6a3a6628774baf40"
Sep 10 00:13:39.911756 containerd[1430]: time="2025-09-10T00:13:39.911440938Z" level=info msg="StopPodSandbox for \"2e115a12ea45474c3c9e17a2166489c5cf6672f00d5896ee6a3a6628774baf40\""
Sep 10 00:13:39.911756 containerd[1430]: time="2025-09-10T00:13:39.911606660Z" level=info msg="Ensure that sandbox 2e115a12ea45474c3c9e17a2166489c5cf6672f00d5896ee6a3a6628774baf40 in task-service has been cleanup successfully"
Sep 10 00:13:39.915190 containerd[1430]: time="2025-09-10T00:13:39.915159788Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\""
Sep 10 00:13:39.918324 kubelet[2471]: I0910 00:13:39.918181 2471 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89e42bdb4ebafba989d2b80772fba6857a23b7ca53958659e5e44d05fdab7658"
Sep 10 00:13:39.920387 containerd[1430]: time="2025-09-10T00:13:39.920012613Z" level=info msg="StopPodSandbox for \"89e42bdb4ebafba989d2b80772fba6857a23b7ca53958659e5e44d05fdab7658\""
Sep 10 00:13:39.920387 containerd[1430]: time="2025-09-10T00:13:39.920368578Z" level=info msg="Ensure that sandbox 89e42bdb4ebafba989d2b80772fba6857a23b7ca53958659e5e44d05fdab7658 in task-service has been cleanup successfully"
Sep 10 00:13:39.920387 containerd[1430]: time="2025-09-10T00:13:39.922485446Z" level=info msg="StopPodSandbox for \"d62eebdda38d4548d2e3f52b4f8523eec446425cf32213f102134b444b23ebf7\""
Sep 10 00:13:39.920387 containerd[1430]: time="2025-09-10T00:13:39.922820251Z" level=info msg="Ensure that sandbox d62eebdda38d4548d2e3f52b4f8523eec446425cf32213f102134b444b23ebf7 in task-service has been cleanup successfully"
Sep 10 00:13:39.924727 kubelet[2471]: I0910 00:13:39.920857 2471 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d62eebdda38d4548d2e3f52b4f8523eec446425cf32213f102134b444b23ebf7"
Sep 10 00:13:39.924727 kubelet[2471]: I0910 00:13:39.924695 2471 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d818c7a41fd63b7d4631ae852b8897391bd867cf4a341f468a4a1dc715dec5d"
Sep 10 00:13:39.925274 containerd[1430]: time="2025-09-10T00:13:39.925237163Z" level=info msg="StopPodSandbox for \"1d818c7a41fd63b7d4631ae852b8897391bd867cf4a341f468a4a1dc715dec5d\""
Sep 10 00:13:39.925411 containerd[1430]: time="2025-09-10T00:13:39.925382765Z" level=info msg="Ensure that sandbox 1d818c7a41fd63b7d4631ae852b8897391bd867cf4a341f468a4a1dc715dec5d in task-service has been cleanup successfully"
Sep 10 00:13:39.932523 kubelet[2471]: I0910 00:13:39.929629 2471 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="472415d550a77d5a0800f54596aec97171fd5f8401eee0a00519943d393e801f"
Sep 10 00:13:39.932523 kubelet[2471]: I0910 00:13:39.932343 2471 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d43f127c1ddba66236343c437ddd27b49d78d0eaa7075097b8975be4eb06d4c"
Sep 10 00:13:39.932799 containerd[1430]: time="2025-09-10T00:13:39.930175589Z" level=info msg="StopPodSandbox for \"472415d550a77d5a0800f54596aec97171fd5f8401eee0a00519943d393e801f\""
Sep 10 00:13:39.932799 containerd[1430]: time="2025-09-10T00:13:39.930893519Z" level=info msg="Ensure that sandbox 472415d550a77d5a0800f54596aec97171fd5f8401eee0a00519943d393e801f in task-service has been cleanup successfully"
Sep 10 00:13:39.935722 containerd[1430]: time="2025-09-10T00:13:39.932900426Z" level=info msg="StopPodSandbox for \"2d43f127c1ddba66236343c437ddd27b49d78d0eaa7075097b8975be4eb06d4c\""
Sep 10 00:13:39.935722 containerd[1430]: time="2025-09-10T00:13:39.933062308Z" level=info msg="Ensure that sandbox 2d43f127c1ddba66236343c437ddd27b49d78d0eaa7075097b8975be4eb06d4c in task-service has been cleanup successfully"
Sep 10 00:13:39.935722 containerd[1430]: time="2025-09-10T00:13:39.934983134Z" level=info msg="StopPodSandbox for \"deb389a8cde9c78f119ddad7b44b9900de9441dbb8ede2f6cd31993d7b7967c8\""
Sep 10 00:13:39.935722 containerd[1430]: time="2025-09-10T00:13:39.935241937Z" level=info msg="Ensure that sandbox deb389a8cde9c78f119ddad7b44b9900de9441dbb8ede2f6cd31993d7b7967c8 in task-service has been cleanup successfully"
Sep 10 00:13:39.935859 kubelet[2471]: I0910 00:13:39.934542 2471 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="deb389a8cde9c78f119ddad7b44b9900de9441dbb8ede2f6cd31993d7b7967c8"
Sep 10 00:13:39.966776 containerd[1430]: time="2025-09-10T00:13:39.966700680Z" level=error msg="StopPodSandbox for \"2e115a12ea45474c3c9e17a2166489c5cf6672f00d5896ee6a3a6628774baf40\" failed" error="failed to destroy network for sandbox \"2e115a12ea45474c3c9e17a2166489c5cf6672f00d5896ee6a3a6628774baf40\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 10 00:13:39.967033 kubelet[2471]: E0910 00:13:39.966987 2471 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2e115a12ea45474c3c9e17a2166489c5cf6672f00d5896ee6a3a6628774baf40\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2e115a12ea45474c3c9e17a2166489c5cf6672f00d5896ee6a3a6628774baf40"
Sep 10 00:13:39.967185 kubelet[2471]: E0910 00:13:39.967117 2471 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2e115a12ea45474c3c9e17a2166489c5cf6672f00d5896ee6a3a6628774baf40"}
Sep 10 00:13:39.967232 kubelet[2471]: E0910 00:13:39.967196 2471 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2f58e97e-6e12-4135-b90c-0a1b0b407422\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2e115a12ea45474c3c9e17a2166489c5cf6672f00d5896ee6a3a6628774baf40\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 10 00:13:39.967232 kubelet[2471]: E0910 00:13:39.967220 2471 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2f58e97e-6e12-4135-b90c-0a1b0b407422\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2e115a12ea45474c3c9e17a2166489c5cf6672f00d5896ee6a3a6628774baf40\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7f764b5b64-dd89z" podUID="2f58e97e-6e12-4135-b90c-0a1b0b407422"
Sep 10 00:13:39.980681 containerd[1430]: time="2025-09-10T00:13:39.980627588Z" level=error msg="StopPodSandbox for \"d69084ff2923cc958b95e61a07b97aed2f0bbea5caddd3a838f8758611a00248\" failed" error="failed to destroy network for sandbox \"d69084ff2923cc958b95e61a07b97aed2f0bbea5caddd3a838f8758611a00248\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 10 00:13:39.980912 kubelet[2471]: E0910 00:13:39.980857 2471 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d69084ff2923cc958b95e61a07b97aed2f0bbea5caddd3a838f8758611a00248\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d69084ff2923cc958b95e61a07b97aed2f0bbea5caddd3a838f8758611a00248"
Sep 10 00:13:39.980972 kubelet[2471]: E0910 00:13:39.980925 2471 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d69084ff2923cc958b95e61a07b97aed2f0bbea5caddd3a838f8758611a00248"}
Sep 10 00:13:39.980972 kubelet[2471]: E0910 00:13:39.980959 2471 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"09c5ed53-4869-4b1c-8a65-b62ac3f88415\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d69084ff2923cc958b95e61a07b97aed2f0bbea5caddd3a838f8758611a00248\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 10 00:13:39.981048 kubelet[2471]: E0910 00:13:39.980981 2471 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"09c5ed53-4869-4b1c-8a65-b62ac3f88415\" with
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d69084ff2923cc958b95e61a07b97aed2f0bbea5caddd3a838f8758611a00248\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6btnb" podUID="09c5ed53-4869-4b1c-8a65-b62ac3f88415" Sep 10 00:13:39.981779 containerd[1430]: time="2025-09-10T00:13:39.981746563Z" level=error msg="StopPodSandbox for \"472415d550a77d5a0800f54596aec97171fd5f8401eee0a00519943d393e801f\" failed" error="failed to destroy network for sandbox \"472415d550a77d5a0800f54596aec97171fd5f8401eee0a00519943d393e801f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:13:39.981966 kubelet[2471]: E0910 00:13:39.981903 2471 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"472415d550a77d5a0800f54596aec97171fd5f8401eee0a00519943d393e801f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="472415d550a77d5a0800f54596aec97171fd5f8401eee0a00519943d393e801f" Sep 10 00:13:39.981997 kubelet[2471]: E0910 00:13:39.981972 2471 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"472415d550a77d5a0800f54596aec97171fd5f8401eee0a00519943d393e801f"} Sep 10 00:13:39.982028 kubelet[2471]: E0910 00:13:39.981998 2471 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d1a71852-d2fe-4382-8a25-a3d286247a75\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"472415d550a77d5a0800f54596aec97171fd5f8401eee0a00519943d393e801f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 10 00:13:39.982068 kubelet[2471]: E0910 00:13:39.982028 2471 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d1a71852-d2fe-4382-8a25-a3d286247a75\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"472415d550a77d5a0800f54596aec97171fd5f8401eee0a00519943d393e801f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-9zjzp" podUID="d1a71852-d2fe-4382-8a25-a3d286247a75" Sep 10 00:13:39.998923 containerd[1430]: time="2025-09-10T00:13:39.998876073Z" level=error msg="StopPodSandbox for \"89e42bdb4ebafba989d2b80772fba6857a23b7ca53958659e5e44d05fdab7658\" failed" error="failed to destroy network for sandbox \"89e42bdb4ebafba989d2b80772fba6857a23b7ca53958659e5e44d05fdab7658\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:13:40.000083 kubelet[2471]: E0910 00:13:39.999050 2471 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"89e42bdb4ebafba989d2b80772fba6857a23b7ca53958659e5e44d05fdab7658\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="89e42bdb4ebafba989d2b80772fba6857a23b7ca53958659e5e44d05fdab7658" Sep 10 00:13:40.000083 kubelet[2471]: E0910 00:13:39.999082 2471 
kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"89e42bdb4ebafba989d2b80772fba6857a23b7ca53958659e5e44d05fdab7658"} Sep 10 00:13:40.000083 kubelet[2471]: E0910 00:13:39.999113 2471 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"279a2bd1-e5f8-4ed5-bbcf-24ce06e302a6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"89e42bdb4ebafba989d2b80772fba6857a23b7ca53958659e5e44d05fdab7658\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 10 00:13:40.000083 kubelet[2471]: E0910 00:13:39.999132 2471 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"279a2bd1-e5f8-4ed5-bbcf-24ce06e302a6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"89e42bdb4ebafba989d2b80772fba6857a23b7ca53958659e5e44d05fdab7658\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6bc6768489-w628d" podUID="279a2bd1-e5f8-4ed5-bbcf-24ce06e302a6" Sep 10 00:13:40.000340 containerd[1430]: time="2025-09-10T00:13:40.000305572Z" level=error msg="StopPodSandbox for \"2d43f127c1ddba66236343c437ddd27b49d78d0eaa7075097b8975be4eb06d4c\" failed" error="failed to destroy network for sandbox \"2d43f127c1ddba66236343c437ddd27b49d78d0eaa7075097b8975be4eb06d4c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:13:40.000411 containerd[1430]: time="2025-09-10T00:13:40.000388053Z" level=error msg="StopPodSandbox for 
\"deb389a8cde9c78f119ddad7b44b9900de9441dbb8ede2f6cd31993d7b7967c8\" failed" error="failed to destroy network for sandbox \"deb389a8cde9c78f119ddad7b44b9900de9441dbb8ede2f6cd31993d7b7967c8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:13:40.000555 kubelet[2471]: E0910 00:13:40.000471 2471 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"deb389a8cde9c78f119ddad7b44b9900de9441dbb8ede2f6cd31993d7b7967c8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="deb389a8cde9c78f119ddad7b44b9900de9441dbb8ede2f6cd31993d7b7967c8" Sep 10 00:13:40.000612 kubelet[2471]: E0910 00:13:40.000565 2471 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"deb389a8cde9c78f119ddad7b44b9900de9441dbb8ede2f6cd31993d7b7967c8"} Sep 10 00:13:40.000612 kubelet[2471]: E0910 00:13:40.000590 2471 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e21db204-019f-4e2a-9a77-0eef0c7e2f3d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"deb389a8cde9c78f119ddad7b44b9900de9441dbb8ede2f6cd31993d7b7967c8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 10 00:13:40.000680 kubelet[2471]: E0910 00:13:40.000524 2471 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2d43f127c1ddba66236343c437ddd27b49d78d0eaa7075097b8975be4eb06d4c\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2d43f127c1ddba66236343c437ddd27b49d78d0eaa7075097b8975be4eb06d4c" Sep 10 00:13:40.000680 kubelet[2471]: E0910 00:13:40.000662 2471 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2d43f127c1ddba66236343c437ddd27b49d78d0eaa7075097b8975be4eb06d4c"} Sep 10 00:13:40.000726 kubelet[2471]: E0910 00:13:40.000692 2471 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"34aa417d-a639-4656-862d-aac7f831a9b9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2d43f127c1ddba66236343c437ddd27b49d78d0eaa7075097b8975be4eb06d4c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 10 00:13:40.000726 kubelet[2471]: E0910 00:13:40.000610 2471 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e21db204-019f-4e2a-9a77-0eef0c7e2f3d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"deb389a8cde9c78f119ddad7b44b9900de9441dbb8ede2f6cd31993d7b7967c8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-78574f96f6-6lwj2" podUID="e21db204-019f-4e2a-9a77-0eef0c7e2f3d" Sep 10 00:13:40.000726 kubelet[2471]: E0910 00:13:40.000713 2471 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"34aa417d-a639-4656-862d-aac7f831a9b9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"2d43f127c1ddba66236343c437ddd27b49d78d0eaa7075097b8975be4eb06d4c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6bc6768489-gxbgn" podUID="34aa417d-a639-4656-862d-aac7f831a9b9" Sep 10 00:13:40.001674 containerd[1430]: time="2025-09-10T00:13:40.001643710Z" level=error msg="StopPodSandbox for \"d62eebdda38d4548d2e3f52b4f8523eec446425cf32213f102134b444b23ebf7\" failed" error="failed to destroy network for sandbox \"d62eebdda38d4548d2e3f52b4f8523eec446425cf32213f102134b444b23ebf7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:13:40.001792 kubelet[2471]: E0910 00:13:40.001768 2471 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d62eebdda38d4548d2e3f52b4f8523eec446425cf32213f102134b444b23ebf7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d62eebdda38d4548d2e3f52b4f8523eec446425cf32213f102134b444b23ebf7" Sep 10 00:13:40.001821 kubelet[2471]: E0910 00:13:40.001796 2471 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d62eebdda38d4548d2e3f52b4f8523eec446425cf32213f102134b444b23ebf7"} Sep 10 00:13:40.001847 kubelet[2471]: E0910 00:13:40.001820 2471 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"778ab3ef-74e3-4341-b35b-556c4e8acdd5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d62eebdda38d4548d2e3f52b4f8523eec446425cf32213f102134b444b23ebf7\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 10 00:13:40.001847 kubelet[2471]: E0910 00:13:40.001837 2471 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"778ab3ef-74e3-4341-b35b-556c4e8acdd5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d62eebdda38d4548d2e3f52b4f8523eec446425cf32213f102134b444b23ebf7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-fprdx" podUID="778ab3ef-74e3-4341-b35b-556c4e8acdd5" Sep 10 00:13:40.003383 containerd[1430]: time="2025-09-10T00:13:40.003346652Z" level=error msg="StopPodSandbox for \"1d818c7a41fd63b7d4631ae852b8897391bd867cf4a341f468a4a1dc715dec5d\" failed" error="failed to destroy network for sandbox \"1d818c7a41fd63b7d4631ae852b8897391bd867cf4a341f468a4a1dc715dec5d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:13:40.003515 kubelet[2471]: E0910 00:13:40.003488 2471 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1d818c7a41fd63b7d4631ae852b8897391bd867cf4a341f468a4a1dc715dec5d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1d818c7a41fd63b7d4631ae852b8897391bd867cf4a341f468a4a1dc715dec5d" Sep 10 00:13:40.003547 kubelet[2471]: E0910 00:13:40.003519 2471 kuberuntime_manager.go:1479] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"1d818c7a41fd63b7d4631ae852b8897391bd867cf4a341f468a4a1dc715dec5d"} Sep 10 00:13:40.003547 kubelet[2471]: E0910 00:13:40.003540 2471 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"138bcf10-bfb3-4e83-915f-37b2d9c80ead\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1d818c7a41fd63b7d4631ae852b8897391bd867cf4a341f468a4a1dc715dec5d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 10 00:13:40.003644 kubelet[2471]: E0910 00:13:40.003557 2471 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"138bcf10-bfb3-4e83-915f-37b2d9c80ead\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1d818c7a41fd63b7d4631ae852b8897391bd867cf4a341f468a4a1dc715dec5d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-9575c" podUID="138bcf10-bfb3-4e83-915f-37b2d9c80ead" Sep 10 00:13:40.278707 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d62eebdda38d4548d2e3f52b4f8523eec446425cf32213f102134b444b23ebf7-shm.mount: Deactivated successfully. Sep 10 00:13:40.278801 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2e115a12ea45474c3c9e17a2166489c5cf6672f00d5896ee6a3a6628774baf40-shm.mount: Deactivated successfully. Sep 10 00:13:40.278852 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1d818c7a41fd63b7d4631ae852b8897391bd867cf4a341f468a4a1dc715dec5d-shm.mount: Deactivated successfully. 
Sep 10 00:13:43.637787 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount949320934.mount: Deactivated successfully. Sep 10 00:13:43.868567 containerd[1430]: time="2025-09-10T00:13:43.868486680Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:13:43.869724 containerd[1430]: time="2025-09-10T00:13:43.869682493Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Sep 10 00:13:43.870975 containerd[1430]: time="2025-09-10T00:13:43.870947628Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:13:43.873621 containerd[1430]: time="2025-09-10T00:13:43.873435056Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:13:43.874074 containerd[1430]: time="2025-09-10T00:13:43.874049983Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 3.958846755s" Sep 10 00:13:43.874144 containerd[1430]: time="2025-09-10T00:13:43.874080464Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 10 00:13:43.881969 containerd[1430]: time="2025-09-10T00:13:43.881853872Z" level=info msg="CreateContainer within sandbox \"f67d5ed4e8e14837a63324d56dc3a0f1102c3e3da50b56aff3b3df1ac14e71c1\" for container 
&ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 10 00:13:43.903679 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3971383870.mount: Deactivated successfully. Sep 10 00:13:43.908359 containerd[1430]: time="2025-09-10T00:13:43.908304295Z" level=info msg="CreateContainer within sandbox \"f67d5ed4e8e14837a63324d56dc3a0f1102c3e3da50b56aff3b3df1ac14e71c1\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d46df2df61a23dcfef70df2e209be1b6af48f5eeb4dc071bdee5e864e8476135\"" Sep 10 00:13:43.908990 containerd[1430]: time="2025-09-10T00:13:43.908811420Z" level=info msg="StartContainer for \"d46df2df61a23dcfef70df2e209be1b6af48f5eeb4dc071bdee5e864e8476135\"" Sep 10 00:13:43.963693 systemd[1]: Started cri-containerd-d46df2df61a23dcfef70df2e209be1b6af48f5eeb4dc071bdee5e864e8476135.scope - libcontainer container d46df2df61a23dcfef70df2e209be1b6af48f5eeb4dc071bdee5e864e8476135. Sep 10 00:13:43.989888 containerd[1430]: time="2025-09-10T00:13:43.989836906Z" level=info msg="StartContainer for \"d46df2df61a23dcfef70df2e209be1b6af48f5eeb4dc071bdee5e864e8476135\" returns successfully" Sep 10 00:13:44.113781 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 10 00:13:44.113918 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 10 00:13:44.234122 containerd[1430]: time="2025-09-10T00:13:44.234002717Z" level=info msg="StopPodSandbox for \"deb389a8cde9c78f119ddad7b44b9900de9441dbb8ede2f6cd31993d7b7967c8\"" Sep 10 00:13:44.402877 containerd[1430]: 2025-09-10 00:13:44.322 [INFO][3772] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="deb389a8cde9c78f119ddad7b44b9900de9441dbb8ede2f6cd31993d7b7967c8" Sep 10 00:13:44.402877 containerd[1430]: 2025-09-10 00:13:44.326 [INFO][3772] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="deb389a8cde9c78f119ddad7b44b9900de9441dbb8ede2f6cd31993d7b7967c8" iface="eth0" netns="/var/run/netns/cni-5eedc313-08ce-3c81-9ead-c5bf1888e835" Sep 10 00:13:44.402877 containerd[1430]: 2025-09-10 00:13:44.327 [INFO][3772] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="deb389a8cde9c78f119ddad7b44b9900de9441dbb8ede2f6cd31993d7b7967c8" iface="eth0" netns="/var/run/netns/cni-5eedc313-08ce-3c81-9ead-c5bf1888e835" Sep 10 00:13:44.402877 containerd[1430]: 2025-09-10 00:13:44.327 [INFO][3772] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="deb389a8cde9c78f119ddad7b44b9900de9441dbb8ede2f6cd31993d7b7967c8" iface="eth0" netns="/var/run/netns/cni-5eedc313-08ce-3c81-9ead-c5bf1888e835" Sep 10 00:13:44.402877 containerd[1430]: 2025-09-10 00:13:44.327 [INFO][3772] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="deb389a8cde9c78f119ddad7b44b9900de9441dbb8ede2f6cd31993d7b7967c8" Sep 10 00:13:44.402877 containerd[1430]: 2025-09-10 00:13:44.328 [INFO][3772] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="deb389a8cde9c78f119ddad7b44b9900de9441dbb8ede2f6cd31993d7b7967c8" Sep 10 00:13:44.402877 containerd[1430]: 2025-09-10 00:13:44.386 [INFO][3783] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="deb389a8cde9c78f119ddad7b44b9900de9441dbb8ede2f6cd31993d7b7967c8" HandleID="k8s-pod-network.deb389a8cde9c78f119ddad7b44b9900de9441dbb8ede2f6cd31993d7b7967c8" Workload="localhost-k8s-whisker--78574f96f6--6lwj2-eth0" Sep 10 00:13:44.402877 containerd[1430]: 2025-09-10 00:13:44.386 [INFO][3783] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:13:44.402877 containerd[1430]: 2025-09-10 00:13:44.386 [INFO][3783] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:13:44.402877 containerd[1430]: 2025-09-10 00:13:44.396 [WARNING][3783] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="deb389a8cde9c78f119ddad7b44b9900de9441dbb8ede2f6cd31993d7b7967c8" HandleID="k8s-pod-network.deb389a8cde9c78f119ddad7b44b9900de9441dbb8ede2f6cd31993d7b7967c8" Workload="localhost-k8s-whisker--78574f96f6--6lwj2-eth0" Sep 10 00:13:44.402877 containerd[1430]: 2025-09-10 00:13:44.396 [INFO][3783] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="deb389a8cde9c78f119ddad7b44b9900de9441dbb8ede2f6cd31993d7b7967c8" HandleID="k8s-pod-network.deb389a8cde9c78f119ddad7b44b9900de9441dbb8ede2f6cd31993d7b7967c8" Workload="localhost-k8s-whisker--78574f96f6--6lwj2-eth0" Sep 10 00:13:44.402877 containerd[1430]: 2025-09-10 00:13:44.399 [INFO][3783] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:13:44.402877 containerd[1430]: 2025-09-10 00:13:44.401 [INFO][3772] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="deb389a8cde9c78f119ddad7b44b9900de9441dbb8ede2f6cd31993d7b7967c8" Sep 10 00:13:44.403370 containerd[1430]: time="2025-09-10T00:13:44.403009056Z" level=info msg="TearDown network for sandbox \"deb389a8cde9c78f119ddad7b44b9900de9441dbb8ede2f6cd31993d7b7967c8\" successfully" Sep 10 00:13:44.403370 containerd[1430]: time="2025-09-10T00:13:44.403036336Z" level=info msg="StopPodSandbox for \"deb389a8cde9c78f119ddad7b44b9900de9441dbb8ede2f6cd31993d7b7967c8\" returns successfully" Sep 10 00:13:44.419865 kubelet[2471]: I0910 00:13:44.419611 2471 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e21db204-019f-4e2a-9a77-0eef0c7e2f3d-whisker-backend-key-pair\") pod \"e21db204-019f-4e2a-9a77-0eef0c7e2f3d\" (UID: \"e21db204-019f-4e2a-9a77-0eef0c7e2f3d\") " Sep 10 00:13:44.420443 kubelet[2471]: I0910 00:13:44.419889 2471 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gt2mq\" (UniqueName: \"kubernetes.io/projected/e21db204-019f-4e2a-9a77-0eef0c7e2f3d-kube-api-access-gt2mq\") pod 
\"e21db204-019f-4e2a-9a77-0eef0c7e2f3d\" (UID: \"e21db204-019f-4e2a-9a77-0eef0c7e2f3d\") " Sep 10 00:13:44.420443 kubelet[2471]: I0910 00:13:44.419944 2471 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e21db204-019f-4e2a-9a77-0eef0c7e2f3d-whisker-ca-bundle\") pod \"e21db204-019f-4e2a-9a77-0eef0c7e2f3d\" (UID: \"e21db204-019f-4e2a-9a77-0eef0c7e2f3d\") " Sep 10 00:13:44.425604 kubelet[2471]: I0910 00:13:44.425569 2471 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e21db204-019f-4e2a-9a77-0eef0c7e2f3d-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "e21db204-019f-4e2a-9a77-0eef0c7e2f3d" (UID: "e21db204-019f-4e2a-9a77-0eef0c7e2f3d"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 10 00:13:44.427455 kubelet[2471]: I0910 00:13:44.427266 2471 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e21db204-019f-4e2a-9a77-0eef0c7e2f3d-kube-api-access-gt2mq" (OuterVolumeSpecName: "kube-api-access-gt2mq") pod "e21db204-019f-4e2a-9a77-0eef0c7e2f3d" (UID: "e21db204-019f-4e2a-9a77-0eef0c7e2f3d"). InnerVolumeSpecName "kube-api-access-gt2mq". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 10 00:13:44.427455 kubelet[2471]: I0910 00:13:44.427443 2471 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e21db204-019f-4e2a-9a77-0eef0c7e2f3d-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "e21db204-019f-4e2a-9a77-0eef0c7e2f3d" (UID: "e21db204-019f-4e2a-9a77-0eef0c7e2f3d"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 10 00:13:44.520861 kubelet[2471]: I0910 00:13:44.520751 2471 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e21db204-019f-4e2a-9a77-0eef0c7e2f3d-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 10 00:13:44.520861 kubelet[2471]: I0910 00:13:44.520788 2471 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e21db204-019f-4e2a-9a77-0eef0c7e2f3d-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 10 00:13:44.520861 kubelet[2471]: I0910 00:13:44.520799 2471 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gt2mq\" (UniqueName: \"kubernetes.io/projected/e21db204-019f-4e2a-9a77-0eef0c7e2f3d-kube-api-access-gt2mq\") on node \"localhost\" DevicePath \"\"" Sep 10 00:13:44.638950 systemd[1]: run-netns-cni\x2d5eedc313\x2d08ce\x2d3c81\x2d9ead\x2dc5bf1888e835.mount: Deactivated successfully. Sep 10 00:13:44.639040 systemd[1]: var-lib-kubelet-pods-e21db204\x2d019f\x2d4e2a\x2d9a77\x2d0eef0c7e2f3d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgt2mq.mount: Deactivated successfully. Sep 10 00:13:44.639101 systemd[1]: var-lib-kubelet-pods-e21db204\x2d019f\x2d4e2a\x2d9a77\x2d0eef0c7e2f3d-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 10 00:13:44.834048 systemd[1]: Removed slice kubepods-besteffort-pode21db204_019f_4e2a_9a77_0eef0c7e2f3d.slice - libcontainer container kubepods-besteffort-pode21db204_019f_4e2a_9a77_0eef0c7e2f3d.slice. 
Sep 10 00:13:44.963900 kubelet[2471]: I0910 00:13:44.962832 2471 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-lkls6" podStartSLOduration=2.312544448 podStartE2EDuration="13.962814974s" podCreationTimestamp="2025-09-10 00:13:31 +0000 UTC" firstStartedPulling="2025-09-10 00:13:32.224467665 +0000 UTC m=+17.481290398" lastFinishedPulling="2025-09-10 00:13:43.874738191 +0000 UTC m=+29.131560924" observedRunningTime="2025-09-10 00:13:44.962516251 +0000 UTC m=+30.219338984" watchObservedRunningTime="2025-09-10 00:13:44.962814974 +0000 UTC m=+30.219637707" Sep 10 00:13:45.035018 systemd[1]: Created slice kubepods-besteffort-pod7b761d6f_6ad4_4fb1_9733_285bf7cbfe63.slice - libcontainer container kubepods-besteffort-pod7b761d6f_6ad4_4fb1_9733_285bf7cbfe63.slice. Sep 10 00:13:45.124526 kubelet[2471]: I0910 00:13:45.124396 2471 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7b761d6f-6ad4-4fb1-9733-285bf7cbfe63-whisker-backend-key-pair\") pod \"whisker-784888f8c-qb2rw\" (UID: \"7b761d6f-6ad4-4fb1-9733-285bf7cbfe63\") " pod="calico-system/whisker-784888f8c-qb2rw" Sep 10 00:13:45.124734 kubelet[2471]: I0910 00:13:45.124706 2471 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmbh5\" (UniqueName: \"kubernetes.io/projected/7b761d6f-6ad4-4fb1-9733-285bf7cbfe63-kube-api-access-xmbh5\") pod \"whisker-784888f8c-qb2rw\" (UID: \"7b761d6f-6ad4-4fb1-9733-285bf7cbfe63\") " pod="calico-system/whisker-784888f8c-qb2rw" Sep 10 00:13:45.124772 kubelet[2471]: I0910 00:13:45.124743 2471 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b761d6f-6ad4-4fb1-9733-285bf7cbfe63-whisker-ca-bundle\") pod \"whisker-784888f8c-qb2rw\" (UID: 
\"7b761d6f-6ad4-4fb1-9733-285bf7cbfe63\") " pod="calico-system/whisker-784888f8c-qb2rw" Sep 10 00:13:45.340140 containerd[1430]: time="2025-09-10T00:13:45.340032067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-784888f8c-qb2rw,Uid:7b761d6f-6ad4-4fb1-9733-285bf7cbfe63,Namespace:calico-system,Attempt:0,}" Sep 10 00:13:45.446433 systemd-networkd[1370]: caliaf6bcf8c82d: Link UP Sep 10 00:13:45.446758 systemd-networkd[1370]: caliaf6bcf8c82d: Gained carrier Sep 10 00:13:45.460136 containerd[1430]: 2025-09-10 00:13:45.368 [INFO][3826] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 10 00:13:45.460136 containerd[1430]: 2025-09-10 00:13:45.381 [INFO][3826] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--784888f8c--qb2rw-eth0 whisker-784888f8c- calico-system 7b761d6f-6ad4-4fb1-9733-285bf7cbfe63 922 0 2025-09-10 00:13:44 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:784888f8c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-784888f8c-qb2rw eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] caliaf6bcf8c82d [] [] }} ContainerID="1166bdf2463e6c1bde385ea18b35d3d90e4cec67a1ac5d5118188099935f8a3c" Namespace="calico-system" Pod="whisker-784888f8c-qb2rw" WorkloadEndpoint="localhost-k8s-whisker--784888f8c--qb2rw-" Sep 10 00:13:45.460136 containerd[1430]: 2025-09-10 00:13:45.382 [INFO][3826] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1166bdf2463e6c1bde385ea18b35d3d90e4cec67a1ac5d5118188099935f8a3c" Namespace="calico-system" Pod="whisker-784888f8c-qb2rw" WorkloadEndpoint="localhost-k8s-whisker--784888f8c--qb2rw-eth0" Sep 10 00:13:45.460136 containerd[1430]: 2025-09-10 00:13:45.403 [INFO][3841] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="1166bdf2463e6c1bde385ea18b35d3d90e4cec67a1ac5d5118188099935f8a3c" HandleID="k8s-pod-network.1166bdf2463e6c1bde385ea18b35d3d90e4cec67a1ac5d5118188099935f8a3c" Workload="localhost-k8s-whisker--784888f8c--qb2rw-eth0" Sep 10 00:13:45.460136 containerd[1430]: 2025-09-10 00:13:45.403 [INFO][3841] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1166bdf2463e6c1bde385ea18b35d3d90e4cec67a1ac5d5118188099935f8a3c" HandleID="k8s-pod-network.1166bdf2463e6c1bde385ea18b35d3d90e4cec67a1ac5d5118188099935f8a3c" Workload="localhost-k8s-whisker--784888f8c--qb2rw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001374d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-784888f8c-qb2rw", "timestamp":"2025-09-10 00:13:45.40347522 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 00:13:45.460136 containerd[1430]: 2025-09-10 00:13:45.403 [INFO][3841] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:13:45.460136 containerd[1430]: 2025-09-10 00:13:45.403 [INFO][3841] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 00:13:45.460136 containerd[1430]: 2025-09-10 00:13:45.403 [INFO][3841] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 00:13:45.460136 containerd[1430]: 2025-09-10 00:13:45.414 [INFO][3841] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1166bdf2463e6c1bde385ea18b35d3d90e4cec67a1ac5d5118188099935f8a3c" host="localhost" Sep 10 00:13:45.460136 containerd[1430]: 2025-09-10 00:13:45.419 [INFO][3841] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 00:13:45.460136 containerd[1430]: 2025-09-10 00:13:45.424 [INFO][3841] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 00:13:45.460136 containerd[1430]: 2025-09-10 00:13:45.425 [INFO][3841] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 00:13:45.460136 containerd[1430]: 2025-09-10 00:13:45.428 [INFO][3841] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 00:13:45.460136 containerd[1430]: 2025-09-10 00:13:45.428 [INFO][3841] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1166bdf2463e6c1bde385ea18b35d3d90e4cec67a1ac5d5118188099935f8a3c" host="localhost" Sep 10 00:13:45.460136 containerd[1430]: 2025-09-10 00:13:45.429 [INFO][3841] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1166bdf2463e6c1bde385ea18b35d3d90e4cec67a1ac5d5118188099935f8a3c Sep 10 00:13:45.460136 containerd[1430]: 2025-09-10 00:13:45.434 [INFO][3841] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1166bdf2463e6c1bde385ea18b35d3d90e4cec67a1ac5d5118188099935f8a3c" host="localhost" Sep 10 00:13:45.460136 containerd[1430]: 2025-09-10 00:13:45.438 [INFO][3841] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.1166bdf2463e6c1bde385ea18b35d3d90e4cec67a1ac5d5118188099935f8a3c" host="localhost" Sep 10 00:13:45.460136 containerd[1430]: 2025-09-10 00:13:45.438 [INFO][3841] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.1166bdf2463e6c1bde385ea18b35d3d90e4cec67a1ac5d5118188099935f8a3c" host="localhost" Sep 10 00:13:45.460136 containerd[1430]: 2025-09-10 00:13:45.438 [INFO][3841] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:13:45.460136 containerd[1430]: 2025-09-10 00:13:45.438 [INFO][3841] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="1166bdf2463e6c1bde385ea18b35d3d90e4cec67a1ac5d5118188099935f8a3c" HandleID="k8s-pod-network.1166bdf2463e6c1bde385ea18b35d3d90e4cec67a1ac5d5118188099935f8a3c" Workload="localhost-k8s-whisker--784888f8c--qb2rw-eth0" Sep 10 00:13:45.461049 containerd[1430]: 2025-09-10 00:13:45.440 [INFO][3826] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1166bdf2463e6c1bde385ea18b35d3d90e4cec67a1ac5d5118188099935f8a3c" Namespace="calico-system" Pod="whisker-784888f8c-qb2rw" WorkloadEndpoint="localhost-k8s-whisker--784888f8c--qb2rw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--784888f8c--qb2rw-eth0", GenerateName:"whisker-784888f8c-", Namespace:"calico-system", SelfLink:"", UID:"7b761d6f-6ad4-4fb1-9733-285bf7cbfe63", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 13, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"784888f8c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-784888f8c-qb2rw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliaf6bcf8c82d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:13:45.461049 containerd[1430]: 2025-09-10 00:13:45.440 [INFO][3826] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="1166bdf2463e6c1bde385ea18b35d3d90e4cec67a1ac5d5118188099935f8a3c" Namespace="calico-system" Pod="whisker-784888f8c-qb2rw" WorkloadEndpoint="localhost-k8s-whisker--784888f8c--qb2rw-eth0" Sep 10 00:13:45.461049 containerd[1430]: 2025-09-10 00:13:45.440 [INFO][3826] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaf6bcf8c82d ContainerID="1166bdf2463e6c1bde385ea18b35d3d90e4cec67a1ac5d5118188099935f8a3c" Namespace="calico-system" Pod="whisker-784888f8c-qb2rw" WorkloadEndpoint="localhost-k8s-whisker--784888f8c--qb2rw-eth0" Sep 10 00:13:45.461049 containerd[1430]: 2025-09-10 00:13:45.447 [INFO][3826] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1166bdf2463e6c1bde385ea18b35d3d90e4cec67a1ac5d5118188099935f8a3c" Namespace="calico-system" Pod="whisker-784888f8c-qb2rw" WorkloadEndpoint="localhost-k8s-whisker--784888f8c--qb2rw-eth0" Sep 10 00:13:45.461049 containerd[1430]: 2025-09-10 00:13:45.447 [INFO][3826] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1166bdf2463e6c1bde385ea18b35d3d90e4cec67a1ac5d5118188099935f8a3c" Namespace="calico-system" Pod="whisker-784888f8c-qb2rw" 
WorkloadEndpoint="localhost-k8s-whisker--784888f8c--qb2rw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--784888f8c--qb2rw-eth0", GenerateName:"whisker-784888f8c-", Namespace:"calico-system", SelfLink:"", UID:"7b761d6f-6ad4-4fb1-9733-285bf7cbfe63", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 13, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"784888f8c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1166bdf2463e6c1bde385ea18b35d3d90e4cec67a1ac5d5118188099935f8a3c", Pod:"whisker-784888f8c-qb2rw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliaf6bcf8c82d", MAC:"d2:eb:5c:8d:80:7c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:13:45.461049 containerd[1430]: 2025-09-10 00:13:45.458 [INFO][3826] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1166bdf2463e6c1bde385ea18b35d3d90e4cec67a1ac5d5118188099935f8a3c" Namespace="calico-system" Pod="whisker-784888f8c-qb2rw" WorkloadEndpoint="localhost-k8s-whisker--784888f8c--qb2rw-eth0" Sep 10 00:13:45.474187 containerd[1430]: time="2025-09-10T00:13:45.473740164Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:13:45.474187 containerd[1430]: time="2025-09-10T00:13:45.474156889Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:13:45.474349 containerd[1430]: time="2025-09-10T00:13:45.474169849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:13:45.474349 containerd[1430]: time="2025-09-10T00:13:45.474251690Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:13:45.493784 systemd[1]: Started cri-containerd-1166bdf2463e6c1bde385ea18b35d3d90e4cec67a1ac5d5118188099935f8a3c.scope - libcontainer container 1166bdf2463e6c1bde385ea18b35d3d90e4cec67a1ac5d5118188099935f8a3c. Sep 10 00:13:45.512633 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 00:13:45.558113 containerd[1430]: time="2025-09-10T00:13:45.557749975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-784888f8c-qb2rw,Uid:7b761d6f-6ad4-4fb1-9733-285bf7cbfe63,Namespace:calico-system,Attempt:0,} returns sandbox id \"1166bdf2463e6c1bde385ea18b35d3d90e4cec67a1ac5d5118188099935f8a3c\"" Sep 10 00:13:45.563748 containerd[1430]: time="2025-09-10T00:13:45.562778868Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 10 00:13:46.702297 containerd[1430]: time="2025-09-10T00:13:46.698977568Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:13:46.702297 containerd[1430]: time="2025-09-10T00:13:46.699998938Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606" Sep 10 00:13:46.702297 
containerd[1430]: time="2025-09-10T00:13:46.700720026Z" level=info msg="ImageCreate event name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:13:46.732280 containerd[1430]: time="2025-09-10T00:13:46.732228668Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:13:46.733358 containerd[1430]: time="2025-09-10T00:13:46.733319919Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 1.169759843s" Sep 10 00:13:46.733358 containerd[1430]: time="2025-09-10T00:13:46.733356279Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Sep 10 00:13:46.738110 containerd[1430]: time="2025-09-10T00:13:46.738074128Z" level=info msg="CreateContainer within sandbox \"1166bdf2463e6c1bde385ea18b35d3d90e4cec67a1ac5d5118188099935f8a3c\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 10 00:13:46.831031 kubelet[2471]: I0910 00:13:46.830977 2471 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e21db204-019f-4e2a-9a77-0eef0c7e2f3d" path="/var/lib/kubelet/pods/e21db204-019f-4e2a-9a77-0eef0c7e2f3d/volumes" Sep 10 00:13:46.861989 containerd[1430]: time="2025-09-10T00:13:46.861943714Z" level=info msg="CreateContainer within sandbox \"1166bdf2463e6c1bde385ea18b35d3d90e4cec67a1ac5d5118188099935f8a3c\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id 
\"735ac036cdd1b87958833df5e48f336b6b65ec5f558afbe31f2bdce5282c1d4b\"" Sep 10 00:13:46.862600 containerd[1430]: time="2025-09-10T00:13:46.862576040Z" level=info msg="StartContainer for \"735ac036cdd1b87958833df5e48f336b6b65ec5f558afbe31f2bdce5282c1d4b\"" Sep 10 00:13:46.882204 systemd[1]: run-containerd-runc-k8s.io-735ac036cdd1b87958833df5e48f336b6b65ec5f558afbe31f2bdce5282c1d4b-runc.TpDrFy.mount: Deactivated successfully. Sep 10 00:13:46.894750 systemd[1]: Started cri-containerd-735ac036cdd1b87958833df5e48f336b6b65ec5f558afbe31f2bdce5282c1d4b.scope - libcontainer container 735ac036cdd1b87958833df5e48f336b6b65ec5f558afbe31f2bdce5282c1d4b. Sep 10 00:13:46.920072 containerd[1430]: time="2025-09-10T00:13:46.920009627Z" level=info msg="StartContainer for \"735ac036cdd1b87958833df5e48f336b6b65ec5f558afbe31f2bdce5282c1d4b\" returns successfully" Sep 10 00:13:46.922160 containerd[1430]: time="2025-09-10T00:13:46.922131969Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 10 00:13:47.190704 systemd-networkd[1370]: caliaf6bcf8c82d: Gained IPv6LL Sep 10 00:13:48.504841 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2423677108.mount: Deactivated successfully. 
Sep 10 00:13:48.554547 containerd[1430]: time="2025-09-10T00:13:48.554487246Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:13:48.555492 containerd[1430]: time="2025-09-10T00:13:48.555466055Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700" Sep 10 00:13:48.556238 containerd[1430]: time="2025-09-10T00:13:48.556216662Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:13:48.565297 containerd[1430]: time="2025-09-10T00:13:48.565259869Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:13:48.571348 containerd[1430]: time="2025-09-10T00:13:48.571300046Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 1.649134117s" Sep 10 00:13:48.571348 containerd[1430]: time="2025-09-10T00:13:48.571344607Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Sep 10 00:13:48.573757 containerd[1430]: time="2025-09-10T00:13:48.573725629Z" level=info msg="CreateContainer within sandbox \"1166bdf2463e6c1bde385ea18b35d3d90e4cec67a1ac5d5118188099935f8a3c\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 10 00:13:48.586295 
containerd[1430]: time="2025-09-10T00:13:48.586245069Z" level=info msg="CreateContainer within sandbox \"1166bdf2463e6c1bde385ea18b35d3d90e4cec67a1ac5d5118188099935f8a3c\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"8116cfc675a877ac2a5fb73b08077e50ff965eb6e270bb0c9da8d0272eb1bf69\"" Sep 10 00:13:48.586760 containerd[1430]: time="2025-09-10T00:13:48.586734193Z" level=info msg="StartContainer for \"8116cfc675a877ac2a5fb73b08077e50ff965eb6e270bb0c9da8d0272eb1bf69\"" Sep 10 00:13:48.620680 systemd[1]: Started cri-containerd-8116cfc675a877ac2a5fb73b08077e50ff965eb6e270bb0c9da8d0272eb1bf69.scope - libcontainer container 8116cfc675a877ac2a5fb73b08077e50ff965eb6e270bb0c9da8d0272eb1bf69. Sep 10 00:13:48.655870 containerd[1430]: time="2025-09-10T00:13:48.655745412Z" level=info msg="StartContainer for \"8116cfc675a877ac2a5fb73b08077e50ff965eb6e270bb0c9da8d0272eb1bf69\" returns successfully" Sep 10 00:13:48.974832 kubelet[2471]: I0910 00:13:48.974766 2471 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-784888f8c-qb2rw" podStartSLOduration=1.964601821 podStartE2EDuration="4.974750295s" podCreationTimestamp="2025-09-10 00:13:44 +0000 UTC" firstStartedPulling="2025-09-10 00:13:45.561795938 +0000 UTC m=+30.818618671" lastFinishedPulling="2025-09-10 00:13:48.571944452 +0000 UTC m=+33.828767145" observedRunningTime="2025-09-10 00:13:48.974487852 +0000 UTC m=+34.231310585" watchObservedRunningTime="2025-09-10 00:13:48.974750295 +0000 UTC m=+34.231573028" Sep 10 00:13:50.830680 containerd[1430]: time="2025-09-10T00:13:50.830556271Z" level=info msg="StopPodSandbox for \"89e42bdb4ebafba989d2b80772fba6857a23b7ca53958659e5e44d05fdab7658\"" Sep 10 00:13:50.927640 containerd[1430]: 2025-09-10 00:13:50.886 [INFO][4247] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="89e42bdb4ebafba989d2b80772fba6857a23b7ca53958659e5e44d05fdab7658" Sep 10 00:13:50.927640 containerd[1430]: 2025-09-10 00:13:50.886 
[INFO][4247] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="89e42bdb4ebafba989d2b80772fba6857a23b7ca53958659e5e44d05fdab7658" iface="eth0" netns="/var/run/netns/cni-baf005c5-a091-3e1e-7121-c9c47aab0300" Sep 10 00:13:50.927640 containerd[1430]: 2025-09-10 00:13:50.887 [INFO][4247] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="89e42bdb4ebafba989d2b80772fba6857a23b7ca53958659e5e44d05fdab7658" iface="eth0" netns="/var/run/netns/cni-baf005c5-a091-3e1e-7121-c9c47aab0300" Sep 10 00:13:50.927640 containerd[1430]: 2025-09-10 00:13:50.887 [INFO][4247] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="89e42bdb4ebafba989d2b80772fba6857a23b7ca53958659e5e44d05fdab7658" iface="eth0" netns="/var/run/netns/cni-baf005c5-a091-3e1e-7121-c9c47aab0300" Sep 10 00:13:50.927640 containerd[1430]: 2025-09-10 00:13:50.887 [INFO][4247] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="89e42bdb4ebafba989d2b80772fba6857a23b7ca53958659e5e44d05fdab7658" Sep 10 00:13:50.927640 containerd[1430]: 2025-09-10 00:13:50.887 [INFO][4247] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="89e42bdb4ebafba989d2b80772fba6857a23b7ca53958659e5e44d05fdab7658" Sep 10 00:13:50.927640 containerd[1430]: 2025-09-10 00:13:50.908 [INFO][4256] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="89e42bdb4ebafba989d2b80772fba6857a23b7ca53958659e5e44d05fdab7658" HandleID="k8s-pod-network.89e42bdb4ebafba989d2b80772fba6857a23b7ca53958659e5e44d05fdab7658" Workload="localhost-k8s-calico--apiserver--6bc6768489--w628d-eth0" Sep 10 00:13:50.927640 containerd[1430]: 2025-09-10 00:13:50.909 [INFO][4256] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:13:50.927640 containerd[1430]: 2025-09-10 00:13:50.909 [INFO][4256] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 00:13:50.927640 containerd[1430]: 2025-09-10 00:13:50.920 [WARNING][4256] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="89e42bdb4ebafba989d2b80772fba6857a23b7ca53958659e5e44d05fdab7658" HandleID="k8s-pod-network.89e42bdb4ebafba989d2b80772fba6857a23b7ca53958659e5e44d05fdab7658" Workload="localhost-k8s-calico--apiserver--6bc6768489--w628d-eth0" Sep 10 00:13:50.927640 containerd[1430]: 2025-09-10 00:13:50.920 [INFO][4256] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="89e42bdb4ebafba989d2b80772fba6857a23b7ca53958659e5e44d05fdab7658" HandleID="k8s-pod-network.89e42bdb4ebafba989d2b80772fba6857a23b7ca53958659e5e44d05fdab7658" Workload="localhost-k8s-calico--apiserver--6bc6768489--w628d-eth0" Sep 10 00:13:50.927640 containerd[1430]: 2025-09-10 00:13:50.923 [INFO][4256] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:13:50.927640 containerd[1430]: 2025-09-10 00:13:50.925 [INFO][4247] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="89e42bdb4ebafba989d2b80772fba6857a23b7ca53958659e5e44d05fdab7658" Sep 10 00:13:50.930288 containerd[1430]: time="2025-09-10T00:13:50.929848198Z" level=info msg="TearDown network for sandbox \"89e42bdb4ebafba989d2b80772fba6857a23b7ca53958659e5e44d05fdab7658\" successfully" Sep 10 00:13:50.930288 containerd[1430]: time="2025-09-10T00:13:50.929875999Z" level=info msg="StopPodSandbox for \"89e42bdb4ebafba989d2b80772fba6857a23b7ca53958659e5e44d05fdab7658\" returns successfully" Sep 10 00:13:50.929790 systemd[1]: run-netns-cni\x2dbaf005c5\x2da091\x2d3e1e\x2d7121\x2dc9c47aab0300.mount: Deactivated successfully. 
Sep 10 00:13:50.931542 containerd[1430]: time="2025-09-10T00:13:50.930750927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bc6768489-w628d,Uid:279a2bd1-e5f8-4ed5-bbcf-24ce06e302a6,Namespace:calico-apiserver,Attempt:1,}" Sep 10 00:13:51.074243 systemd-networkd[1370]: calid50230b23d2: Link UP Sep 10 00:13:51.074378 systemd-networkd[1370]: calid50230b23d2: Gained carrier Sep 10 00:13:51.095263 containerd[1430]: 2025-09-10 00:13:50.966 [INFO][4272] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 10 00:13:51.095263 containerd[1430]: 2025-09-10 00:13:50.988 [INFO][4272] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6bc6768489--w628d-eth0 calico-apiserver-6bc6768489- calico-apiserver 279a2bd1-e5f8-4ed5-bbcf-24ce06e302a6 956 0 2025-09-10 00:13:28 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6bc6768489 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6bc6768489-w628d eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid50230b23d2 [] [] }} ContainerID="cdd9848092fa05e43c969e6cffae700f727f78c089e3d39f4287d0f93fdad33a" Namespace="calico-apiserver" Pod="calico-apiserver-6bc6768489-w628d" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bc6768489--w628d-" Sep 10 00:13:51.095263 containerd[1430]: 2025-09-10 00:13:50.988 [INFO][4272] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cdd9848092fa05e43c969e6cffae700f727f78c089e3d39f4287d0f93fdad33a" Namespace="calico-apiserver" Pod="calico-apiserver-6bc6768489-w628d" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bc6768489--w628d-eth0" Sep 10 00:13:51.095263 containerd[1430]: 2025-09-10 00:13:51.017 [INFO][4288] 
ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cdd9848092fa05e43c969e6cffae700f727f78c089e3d39f4287d0f93fdad33a" HandleID="k8s-pod-network.cdd9848092fa05e43c969e6cffae700f727f78c089e3d39f4287d0f93fdad33a" Workload="localhost-k8s-calico--apiserver--6bc6768489--w628d-eth0" Sep 10 00:13:51.095263 containerd[1430]: 2025-09-10 00:13:51.017 [INFO][4288] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cdd9848092fa05e43c969e6cffae700f727f78c089e3d39f4287d0f93fdad33a" HandleID="k8s-pod-network.cdd9848092fa05e43c969e6cffae700f727f78c089e3d39f4287d0f93fdad33a" Workload="localhost-k8s-calico--apiserver--6bc6768489--w628d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004cd50), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6bc6768489-w628d", "timestamp":"2025-09-10 00:13:51.017076494 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 00:13:51.095263 containerd[1430]: 2025-09-10 00:13:51.017 [INFO][4288] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:13:51.095263 containerd[1430]: 2025-09-10 00:13:51.017 [INFO][4288] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 00:13:51.095263 containerd[1430]: 2025-09-10 00:13:51.017 [INFO][4288] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 00:13:51.095263 containerd[1430]: 2025-09-10 00:13:51.030 [INFO][4288] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cdd9848092fa05e43c969e6cffae700f727f78c089e3d39f4287d0f93fdad33a" host="localhost" Sep 10 00:13:51.095263 containerd[1430]: 2025-09-10 00:13:51.036 [INFO][4288] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 00:13:51.095263 containerd[1430]: 2025-09-10 00:13:51.043 [INFO][4288] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 00:13:51.095263 containerd[1430]: 2025-09-10 00:13:51.045 [INFO][4288] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 00:13:51.095263 containerd[1430]: 2025-09-10 00:13:51.048 [INFO][4288] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 00:13:51.095263 containerd[1430]: 2025-09-10 00:13:51.048 [INFO][4288] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cdd9848092fa05e43c969e6cffae700f727f78c089e3d39f4287d0f93fdad33a" host="localhost" Sep 10 00:13:51.095263 containerd[1430]: 2025-09-10 00:13:51.050 [INFO][4288] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.cdd9848092fa05e43c969e6cffae700f727f78c089e3d39f4287d0f93fdad33a Sep 10 00:13:51.095263 containerd[1430]: 2025-09-10 00:13:51.059 [INFO][4288] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cdd9848092fa05e43c969e6cffae700f727f78c089e3d39f4287d0f93fdad33a" host="localhost" Sep 10 00:13:51.095263 containerd[1430]: 2025-09-10 00:13:51.065 [INFO][4288] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.cdd9848092fa05e43c969e6cffae700f727f78c089e3d39f4287d0f93fdad33a" host="localhost" Sep 10 00:13:51.095263 containerd[1430]: 2025-09-10 00:13:51.065 [INFO][4288] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.cdd9848092fa05e43c969e6cffae700f727f78c089e3d39f4287d0f93fdad33a" host="localhost" Sep 10 00:13:51.095263 containerd[1430]: 2025-09-10 00:13:51.065 [INFO][4288] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:13:51.095263 containerd[1430]: 2025-09-10 00:13:51.065 [INFO][4288] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="cdd9848092fa05e43c969e6cffae700f727f78c089e3d39f4287d0f93fdad33a" HandleID="k8s-pod-network.cdd9848092fa05e43c969e6cffae700f727f78c089e3d39f4287d0f93fdad33a" Workload="localhost-k8s-calico--apiserver--6bc6768489--w628d-eth0" Sep 10 00:13:51.096045 containerd[1430]: 2025-09-10 00:13:51.070 [INFO][4272] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cdd9848092fa05e43c969e6cffae700f727f78c089e3d39f4287d0f93fdad33a" Namespace="calico-apiserver" Pod="calico-apiserver-6bc6768489-w628d" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bc6768489--w628d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6bc6768489--w628d-eth0", GenerateName:"calico-apiserver-6bc6768489-", Namespace:"calico-apiserver", SelfLink:"", UID:"279a2bd1-e5f8-4ed5-bbcf-24ce06e302a6", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 13, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bc6768489", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6bc6768489-w628d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid50230b23d2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:13:51.096045 containerd[1430]: 2025-09-10 00:13:51.070 [INFO][4272] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="cdd9848092fa05e43c969e6cffae700f727f78c089e3d39f4287d0f93fdad33a" Namespace="calico-apiserver" Pod="calico-apiserver-6bc6768489-w628d" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bc6768489--w628d-eth0" Sep 10 00:13:51.096045 containerd[1430]: 2025-09-10 00:13:51.070 [INFO][4272] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid50230b23d2 ContainerID="cdd9848092fa05e43c969e6cffae700f727f78c089e3d39f4287d0f93fdad33a" Namespace="calico-apiserver" Pod="calico-apiserver-6bc6768489-w628d" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bc6768489--w628d-eth0" Sep 10 00:13:51.096045 containerd[1430]: 2025-09-10 00:13:51.072 [INFO][4272] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cdd9848092fa05e43c969e6cffae700f727f78c089e3d39f4287d0f93fdad33a" Namespace="calico-apiserver" Pod="calico-apiserver-6bc6768489-w628d" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bc6768489--w628d-eth0" Sep 10 00:13:51.096045 containerd[1430]: 2025-09-10 00:13:51.072 [INFO][4272] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="cdd9848092fa05e43c969e6cffae700f727f78c089e3d39f4287d0f93fdad33a" Namespace="calico-apiserver" Pod="calico-apiserver-6bc6768489-w628d" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bc6768489--w628d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6bc6768489--w628d-eth0", GenerateName:"calico-apiserver-6bc6768489-", Namespace:"calico-apiserver", SelfLink:"", UID:"279a2bd1-e5f8-4ed5-bbcf-24ce06e302a6", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 13, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bc6768489", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cdd9848092fa05e43c969e6cffae700f727f78c089e3d39f4287d0f93fdad33a", Pod:"calico-apiserver-6bc6768489-w628d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid50230b23d2", MAC:"d6:2d:60:25:50:63", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:13:51.096045 containerd[1430]: 2025-09-10 00:13:51.092 [INFO][4272] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="cdd9848092fa05e43c969e6cffae700f727f78c089e3d39f4287d0f93fdad33a" Namespace="calico-apiserver" Pod="calico-apiserver-6bc6768489-w628d" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bc6768489--w628d-eth0" Sep 10 00:13:51.118324 containerd[1430]: time="2025-09-10T00:13:51.117781287Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:13:51.118324 containerd[1430]: time="2025-09-10T00:13:51.117983809Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:13:51.118324 containerd[1430]: time="2025-09-10T00:13:51.118013049Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:13:51.118324 containerd[1430]: time="2025-09-10T00:13:51.118177930Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:13:51.144685 systemd[1]: Started cri-containerd-cdd9848092fa05e43c969e6cffae700f727f78c089e3d39f4287d0f93fdad33a.scope - libcontainer container cdd9848092fa05e43c969e6cffae700f727f78c089e3d39f4287d0f93fdad33a. 
Sep 10 00:13:51.154633 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 00:13:51.179028 containerd[1430]: time="2025-09-10T00:13:51.178978537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bc6768489-w628d,Uid:279a2bd1-e5f8-4ed5-bbcf-24ce06e302a6,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"cdd9848092fa05e43c969e6cffae700f727f78c089e3d39f4287d0f93fdad33a\"" Sep 10 00:13:51.180647 containerd[1430]: time="2025-09-10T00:13:51.180617751Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 10 00:13:51.829368 containerd[1430]: time="2025-09-10T00:13:51.829316254Z" level=info msg="StopPodSandbox for \"472415d550a77d5a0800f54596aec97171fd5f8401eee0a00519943d393e801f\"" Sep 10 00:13:51.829937 containerd[1430]: time="2025-09-10T00:13:51.829690497Z" level=info msg="StopPodSandbox for \"2e115a12ea45474c3c9e17a2166489c5cf6672f00d5896ee6a3a6628774baf40\"" Sep 10 00:13:51.835715 containerd[1430]: time="2025-09-10T00:13:51.835679309Z" level=info msg="StopPodSandbox for \"1d818c7a41fd63b7d4631ae852b8897391bd867cf4a341f468a4a1dc715dec5d\"" Sep 10 00:13:51.941815 containerd[1430]: 2025-09-10 00:13:51.897 [INFO][4389] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="472415d550a77d5a0800f54596aec97171fd5f8401eee0a00519943d393e801f" Sep 10 00:13:51.941815 containerd[1430]: 2025-09-10 00:13:51.897 [INFO][4389] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="472415d550a77d5a0800f54596aec97171fd5f8401eee0a00519943d393e801f" iface="eth0" netns="/var/run/netns/cni-1a695bae-cfbf-2865-0646-4e8a79bccab9" Sep 10 00:13:51.941815 containerd[1430]: 2025-09-10 00:13:51.899 [INFO][4389] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="472415d550a77d5a0800f54596aec97171fd5f8401eee0a00519943d393e801f" iface="eth0" netns="/var/run/netns/cni-1a695bae-cfbf-2865-0646-4e8a79bccab9" Sep 10 00:13:51.941815 containerd[1430]: 2025-09-10 00:13:51.900 [INFO][4389] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="472415d550a77d5a0800f54596aec97171fd5f8401eee0a00519943d393e801f" iface="eth0" netns="/var/run/netns/cni-1a695bae-cfbf-2865-0646-4e8a79bccab9" Sep 10 00:13:51.941815 containerd[1430]: 2025-09-10 00:13:51.900 [INFO][4389] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="472415d550a77d5a0800f54596aec97171fd5f8401eee0a00519943d393e801f" Sep 10 00:13:51.941815 containerd[1430]: 2025-09-10 00:13:51.900 [INFO][4389] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="472415d550a77d5a0800f54596aec97171fd5f8401eee0a00519943d393e801f" Sep 10 00:13:51.941815 containerd[1430]: 2025-09-10 00:13:51.923 [INFO][4423] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="472415d550a77d5a0800f54596aec97171fd5f8401eee0a00519943d393e801f" HandleID="k8s-pod-network.472415d550a77d5a0800f54596aec97171fd5f8401eee0a00519943d393e801f" Workload="localhost-k8s-goldmane--7988f88666--9zjzp-eth0" Sep 10 00:13:51.941815 containerd[1430]: 2025-09-10 00:13:51.923 [INFO][4423] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:13:51.941815 containerd[1430]: 2025-09-10 00:13:51.923 [INFO][4423] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:13:51.941815 containerd[1430]: 2025-09-10 00:13:51.933 [WARNING][4423] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="472415d550a77d5a0800f54596aec97171fd5f8401eee0a00519943d393e801f" HandleID="k8s-pod-network.472415d550a77d5a0800f54596aec97171fd5f8401eee0a00519943d393e801f" Workload="localhost-k8s-goldmane--7988f88666--9zjzp-eth0" Sep 10 00:13:51.941815 containerd[1430]: 2025-09-10 00:13:51.933 [INFO][4423] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="472415d550a77d5a0800f54596aec97171fd5f8401eee0a00519943d393e801f" HandleID="k8s-pod-network.472415d550a77d5a0800f54596aec97171fd5f8401eee0a00519943d393e801f" Workload="localhost-k8s-goldmane--7988f88666--9zjzp-eth0" Sep 10 00:13:51.941815 containerd[1430]: 2025-09-10 00:13:51.936 [INFO][4423] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:13:51.941815 containerd[1430]: 2025-09-10 00:13:51.940 [INFO][4389] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="472415d550a77d5a0800f54596aec97171fd5f8401eee0a00519943d393e801f" Sep 10 00:13:51.942213 containerd[1430]: time="2025-09-10T00:13:51.942014670Z" level=info msg="TearDown network for sandbox \"472415d550a77d5a0800f54596aec97171fd5f8401eee0a00519943d393e801f\" successfully" Sep 10 00:13:51.942213 containerd[1430]: time="2025-09-10T00:13:51.942041911Z" level=info msg="StopPodSandbox for \"472415d550a77d5a0800f54596aec97171fd5f8401eee0a00519943d393e801f\" returns successfully" Sep 10 00:13:51.944457 systemd[1]: run-netns-cni\x2d1a695bae\x2dcfbf\x2d2865\x2d0646\x2d4e8a79bccab9.mount: Deactivated successfully. 
Sep 10 00:13:51.945355 containerd[1430]: time="2025-09-10T00:13:51.945151338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-9zjzp,Uid:d1a71852-d2fe-4382-8a25-a3d286247a75,Namespace:calico-system,Attempt:1,}" Sep 10 00:13:51.956262 containerd[1430]: 2025-09-10 00:13:51.900 [INFO][4388] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2e115a12ea45474c3c9e17a2166489c5cf6672f00d5896ee6a3a6628774baf40" Sep 10 00:13:51.956262 containerd[1430]: 2025-09-10 00:13:51.900 [INFO][4388] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2e115a12ea45474c3c9e17a2166489c5cf6672f00d5896ee6a3a6628774baf40" iface="eth0" netns="/var/run/netns/cni-c5831dfd-510b-8aeb-f5e1-68c0505122a6" Sep 10 00:13:51.956262 containerd[1430]: 2025-09-10 00:13:51.901 [INFO][4388] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2e115a12ea45474c3c9e17a2166489c5cf6672f00d5896ee6a3a6628774baf40" iface="eth0" netns="/var/run/netns/cni-c5831dfd-510b-8aeb-f5e1-68c0505122a6" Sep 10 00:13:51.956262 containerd[1430]: 2025-09-10 00:13:51.901 [INFO][4388] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="2e115a12ea45474c3c9e17a2166489c5cf6672f00d5896ee6a3a6628774baf40" iface="eth0" netns="/var/run/netns/cni-c5831dfd-510b-8aeb-f5e1-68c0505122a6" Sep 10 00:13:51.956262 containerd[1430]: 2025-09-10 00:13:51.901 [INFO][4388] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2e115a12ea45474c3c9e17a2166489c5cf6672f00d5896ee6a3a6628774baf40" Sep 10 00:13:51.956262 containerd[1430]: 2025-09-10 00:13:51.901 [INFO][4388] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2e115a12ea45474c3c9e17a2166489c5cf6672f00d5896ee6a3a6628774baf40" Sep 10 00:13:51.956262 containerd[1430]: 2025-09-10 00:13:51.924 [INFO][4427] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2e115a12ea45474c3c9e17a2166489c5cf6672f00d5896ee6a3a6628774baf40" HandleID="k8s-pod-network.2e115a12ea45474c3c9e17a2166489c5cf6672f00d5896ee6a3a6628774baf40" Workload="localhost-k8s-calico--kube--controllers--7f764b5b64--dd89z-eth0" Sep 10 00:13:51.956262 containerd[1430]: 2025-09-10 00:13:51.924 [INFO][4427] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:13:51.956262 containerd[1430]: 2025-09-10 00:13:51.936 [INFO][4427] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:13:51.956262 containerd[1430]: 2025-09-10 00:13:51.948 [WARNING][4427] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2e115a12ea45474c3c9e17a2166489c5cf6672f00d5896ee6a3a6628774baf40" HandleID="k8s-pod-network.2e115a12ea45474c3c9e17a2166489c5cf6672f00d5896ee6a3a6628774baf40" Workload="localhost-k8s-calico--kube--controllers--7f764b5b64--dd89z-eth0" Sep 10 00:13:51.956262 containerd[1430]: 2025-09-10 00:13:51.948 [INFO][4427] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2e115a12ea45474c3c9e17a2166489c5cf6672f00d5896ee6a3a6628774baf40" HandleID="k8s-pod-network.2e115a12ea45474c3c9e17a2166489c5cf6672f00d5896ee6a3a6628774baf40" Workload="localhost-k8s-calico--kube--controllers--7f764b5b64--dd89z-eth0" Sep 10 00:13:51.956262 containerd[1430]: 2025-09-10 00:13:51.950 [INFO][4427] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:13:51.956262 containerd[1430]: 2025-09-10 00:13:51.952 [INFO][4388] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2e115a12ea45474c3c9e17a2166489c5cf6672f00d5896ee6a3a6628774baf40" Sep 10 00:13:51.957617 containerd[1430]: time="2025-09-10T00:13:51.957583765Z" level=info msg="TearDown network for sandbox \"2e115a12ea45474c3c9e17a2166489c5cf6672f00d5896ee6a3a6628774baf40\" successfully" Sep 10 00:13:51.957662 containerd[1430]: time="2025-09-10T00:13:51.957618246Z" level=info msg="StopPodSandbox for \"2e115a12ea45474c3c9e17a2166489c5cf6672f00d5896ee6a3a6628774baf40\" returns successfully" Sep 10 00:13:51.958676 containerd[1430]: time="2025-09-10T00:13:51.958599574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f764b5b64-dd89z,Uid:2f58e97e-6e12-4135-b90c-0a1b0b407422,Namespace:calico-system,Attempt:1,}" Sep 10 00:13:51.959246 systemd[1]: run-netns-cni\x2dc5831dfd\x2d510b\x2d8aeb\x2df5e1\x2d68c0505122a6.mount: Deactivated successfully. 
Sep 10 00:13:51.982847 containerd[1430]: 2025-09-10 00:13:51.898 [INFO][4409] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1d818c7a41fd63b7d4631ae852b8897391bd867cf4a341f468a4a1dc715dec5d" Sep 10 00:13:51.982847 containerd[1430]: 2025-09-10 00:13:51.898 [INFO][4409] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1d818c7a41fd63b7d4631ae852b8897391bd867cf4a341f468a4a1dc715dec5d" iface="eth0" netns="/var/run/netns/cni-49702ec4-6458-ec48-1521-9d69bbba0677" Sep 10 00:13:51.982847 containerd[1430]: 2025-09-10 00:13:51.899 [INFO][4409] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1d818c7a41fd63b7d4631ae852b8897391bd867cf4a341f468a4a1dc715dec5d" iface="eth0" netns="/var/run/netns/cni-49702ec4-6458-ec48-1521-9d69bbba0677" Sep 10 00:13:51.982847 containerd[1430]: 2025-09-10 00:13:51.900 [INFO][4409] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1d818c7a41fd63b7d4631ae852b8897391bd867cf4a341f468a4a1dc715dec5d" iface="eth0" netns="/var/run/netns/cni-49702ec4-6458-ec48-1521-9d69bbba0677" Sep 10 00:13:51.982847 containerd[1430]: 2025-09-10 00:13:51.900 [INFO][4409] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1d818c7a41fd63b7d4631ae852b8897391bd867cf4a341f468a4a1dc715dec5d" Sep 10 00:13:51.982847 containerd[1430]: 2025-09-10 00:13:51.900 [INFO][4409] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1d818c7a41fd63b7d4631ae852b8897391bd867cf4a341f468a4a1dc715dec5d" Sep 10 00:13:51.982847 containerd[1430]: 2025-09-10 00:13:51.928 [INFO][4425] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1d818c7a41fd63b7d4631ae852b8897391bd867cf4a341f468a4a1dc715dec5d" HandleID="k8s-pod-network.1d818c7a41fd63b7d4631ae852b8897391bd867cf4a341f468a4a1dc715dec5d" Workload="localhost-k8s-coredns--7c65d6cfc9--9575c-eth0" Sep 10 00:13:51.982847 containerd[1430]: 2025-09-10 00:13:51.928 [INFO][4425] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:13:51.982847 containerd[1430]: 2025-09-10 00:13:51.951 [INFO][4425] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:13:51.982847 containerd[1430]: 2025-09-10 00:13:51.967 [WARNING][4425] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1d818c7a41fd63b7d4631ae852b8897391bd867cf4a341f468a4a1dc715dec5d" HandleID="k8s-pod-network.1d818c7a41fd63b7d4631ae852b8897391bd867cf4a341f468a4a1dc715dec5d" Workload="localhost-k8s-coredns--7c65d6cfc9--9575c-eth0" Sep 10 00:13:51.982847 containerd[1430]: 2025-09-10 00:13:51.967 [INFO][4425] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1d818c7a41fd63b7d4631ae852b8897391bd867cf4a341f468a4a1dc715dec5d" HandleID="k8s-pod-network.1d818c7a41fd63b7d4631ae852b8897391bd867cf4a341f468a4a1dc715dec5d" Workload="localhost-k8s-coredns--7c65d6cfc9--9575c-eth0" Sep 10 00:13:51.982847 containerd[1430]: 2025-09-10 00:13:51.970 [INFO][4425] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:13:51.982847 containerd[1430]: 2025-09-10 00:13:51.975 [INFO][4409] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="1d818c7a41fd63b7d4631ae852b8897391bd867cf4a341f468a4a1dc715dec5d" Sep 10 00:13:51.984463 containerd[1430]: time="2025-09-10T00:13:51.983384349Z" level=info msg="TearDown network for sandbox \"1d818c7a41fd63b7d4631ae852b8897391bd867cf4a341f468a4a1dc715dec5d\" successfully" Sep 10 00:13:51.984463 containerd[1430]: time="2025-09-10T00:13:51.983420029Z" level=info msg="StopPodSandbox for \"1d818c7a41fd63b7d4631ae852b8897391bd867cf4a341f468a4a1dc715dec5d\" returns successfully" Sep 10 00:13:51.984578 kubelet[2471]: E0910 00:13:51.983667 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:13:51.984822 containerd[1430]: time="2025-09-10T00:13:51.984477318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-9575c,Uid:138bcf10-bfb3-4e83-915f-37b2d9c80ead,Namespace:kube-system,Attempt:1,}" Sep 10 00:13:52.124784 systemd-networkd[1370]: calica46d2300cd: Link UP Sep 10 00:13:52.125714 systemd-networkd[1370]: calica46d2300cd: Gained carrier Sep 10 00:13:52.141137 containerd[1430]: 2025-09-10 00:13:51.992 [INFO][4448] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 10 00:13:52.141137 containerd[1430]: 2025-09-10 00:13:52.016 [INFO][4448] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7988f88666--9zjzp-eth0 goldmane-7988f88666- calico-system d1a71852-d2fe-4382-8a25-a3d286247a75 965 0 2025-09-10 00:13:31 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7988f88666-9zjzp eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calica46d2300cd [] [] }} 
ContainerID="e874f8a086ccd5e2aca1101fb74180dd7b1e6345e182eb750c32bcdfc7a47a4d" Namespace="calico-system" Pod="goldmane-7988f88666-9zjzp" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--9zjzp-" Sep 10 00:13:52.141137 containerd[1430]: 2025-09-10 00:13:52.016 [INFO][4448] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e874f8a086ccd5e2aca1101fb74180dd7b1e6345e182eb750c32bcdfc7a47a4d" Namespace="calico-system" Pod="goldmane-7988f88666-9zjzp" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--9zjzp-eth0" Sep 10 00:13:52.141137 containerd[1430]: 2025-09-10 00:13:52.074 [INFO][4496] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e874f8a086ccd5e2aca1101fb74180dd7b1e6345e182eb750c32bcdfc7a47a4d" HandleID="k8s-pod-network.e874f8a086ccd5e2aca1101fb74180dd7b1e6345e182eb750c32bcdfc7a47a4d" Workload="localhost-k8s-goldmane--7988f88666--9zjzp-eth0" Sep 10 00:13:52.141137 containerd[1430]: 2025-09-10 00:13:52.074 [INFO][4496] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e874f8a086ccd5e2aca1101fb74180dd7b1e6345e182eb750c32bcdfc7a47a4d" HandleID="k8s-pod-network.e874f8a086ccd5e2aca1101fb74180dd7b1e6345e182eb750c32bcdfc7a47a4d" Workload="localhost-k8s-goldmane--7988f88666--9zjzp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400035d730), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7988f88666-9zjzp", "timestamp":"2025-09-10 00:13:52.074211877 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 00:13:52.141137 containerd[1430]: 2025-09-10 00:13:52.074 [INFO][4496] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 10 00:13:52.141137 containerd[1430]: 2025-09-10 00:13:52.074 [INFO][4496] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:13:52.141137 containerd[1430]: 2025-09-10 00:13:52.074 [INFO][4496] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 00:13:52.141137 containerd[1430]: 2025-09-10 00:13:52.086 [INFO][4496] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e874f8a086ccd5e2aca1101fb74180dd7b1e6345e182eb750c32bcdfc7a47a4d" host="localhost" Sep 10 00:13:52.141137 containerd[1430]: 2025-09-10 00:13:52.090 [INFO][4496] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 00:13:52.141137 containerd[1430]: 2025-09-10 00:13:52.095 [INFO][4496] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 00:13:52.141137 containerd[1430]: 2025-09-10 00:13:52.098 [INFO][4496] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 00:13:52.141137 containerd[1430]: 2025-09-10 00:13:52.101 [INFO][4496] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 00:13:52.141137 containerd[1430]: 2025-09-10 00:13:52.101 [INFO][4496] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e874f8a086ccd5e2aca1101fb74180dd7b1e6345e182eb750c32bcdfc7a47a4d" host="localhost" Sep 10 00:13:52.141137 containerd[1430]: 2025-09-10 00:13:52.108 [INFO][4496] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e874f8a086ccd5e2aca1101fb74180dd7b1e6345e182eb750c32bcdfc7a47a4d Sep 10 00:13:52.141137 containerd[1430]: 2025-09-10 00:13:52.112 [INFO][4496] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e874f8a086ccd5e2aca1101fb74180dd7b1e6345e182eb750c32bcdfc7a47a4d" host="localhost" Sep 10 00:13:52.141137 containerd[1430]: 2025-09-10 00:13:52.118 [INFO][4496] 
ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.e874f8a086ccd5e2aca1101fb74180dd7b1e6345e182eb750c32bcdfc7a47a4d" host="localhost" Sep 10 00:13:52.141137 containerd[1430]: 2025-09-10 00:13:52.119 [INFO][4496] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.e874f8a086ccd5e2aca1101fb74180dd7b1e6345e182eb750c32bcdfc7a47a4d" host="localhost" Sep 10 00:13:52.141137 containerd[1430]: 2025-09-10 00:13:52.119 [INFO][4496] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:13:52.141137 containerd[1430]: 2025-09-10 00:13:52.119 [INFO][4496] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="e874f8a086ccd5e2aca1101fb74180dd7b1e6345e182eb750c32bcdfc7a47a4d" HandleID="k8s-pod-network.e874f8a086ccd5e2aca1101fb74180dd7b1e6345e182eb750c32bcdfc7a47a4d" Workload="localhost-k8s-goldmane--7988f88666--9zjzp-eth0" Sep 10 00:13:52.143437 containerd[1430]: 2025-09-10 00:13:52.122 [INFO][4448] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e874f8a086ccd5e2aca1101fb74180dd7b1e6345e182eb750c32bcdfc7a47a4d" Namespace="calico-system" Pod="goldmane-7988f88666-9zjzp" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--9zjzp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--9zjzp-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"d1a71852-d2fe-4382-8a25-a3d286247a75", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 13, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7988f88666-9zjzp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calica46d2300cd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:13:52.143437 containerd[1430]: 2025-09-10 00:13:52.122 [INFO][4448] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="e874f8a086ccd5e2aca1101fb74180dd7b1e6345e182eb750c32bcdfc7a47a4d" Namespace="calico-system" Pod="goldmane-7988f88666-9zjzp" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--9zjzp-eth0" Sep 10 00:13:52.143437 containerd[1430]: 2025-09-10 00:13:52.122 [INFO][4448] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calica46d2300cd ContainerID="e874f8a086ccd5e2aca1101fb74180dd7b1e6345e182eb750c32bcdfc7a47a4d" Namespace="calico-system" Pod="goldmane-7988f88666-9zjzp" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--9zjzp-eth0" Sep 10 00:13:52.143437 containerd[1430]: 2025-09-10 00:13:52.124 [INFO][4448] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e874f8a086ccd5e2aca1101fb74180dd7b1e6345e182eb750c32bcdfc7a47a4d" Namespace="calico-system" Pod="goldmane-7988f88666-9zjzp" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--9zjzp-eth0" Sep 10 00:13:52.143437 containerd[1430]: 2025-09-10 00:13:52.125 [INFO][4448] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="e874f8a086ccd5e2aca1101fb74180dd7b1e6345e182eb750c32bcdfc7a47a4d" Namespace="calico-system" Pod="goldmane-7988f88666-9zjzp" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--9zjzp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--9zjzp-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"d1a71852-d2fe-4382-8a25-a3d286247a75", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 13, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e874f8a086ccd5e2aca1101fb74180dd7b1e6345e182eb750c32bcdfc7a47a4d", Pod:"goldmane-7988f88666-9zjzp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calica46d2300cd", MAC:"42:47:2d:1a:ae:da", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:13:52.143437 containerd[1430]: 2025-09-10 00:13:52.137 [INFO][4448] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e874f8a086ccd5e2aca1101fb74180dd7b1e6345e182eb750c32bcdfc7a47a4d" Namespace="calico-system" Pod="goldmane-7988f88666-9zjzp" 
WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--9zjzp-eth0" Sep 10 00:13:52.171548 containerd[1430]: time="2025-09-10T00:13:52.171346135Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:13:52.171548 containerd[1430]: time="2025-09-10T00:13:52.171433615Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:13:52.171548 containerd[1430]: time="2025-09-10T00:13:52.171444655Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:13:52.171720 containerd[1430]: time="2025-09-10T00:13:52.171621617Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:13:52.202794 systemd[1]: Started cri-containerd-e874f8a086ccd5e2aca1101fb74180dd7b1e6345e182eb750c32bcdfc7a47a4d.scope - libcontainer container e874f8a086ccd5e2aca1101fb74180dd7b1e6345e182eb750c32bcdfc7a47a4d. 
Sep 10 00:13:52.225214 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 00:13:52.239870 systemd-networkd[1370]: calie40bc9eb9c4: Link UP Sep 10 00:13:52.242877 systemd-networkd[1370]: calie40bc9eb9c4: Gained carrier Sep 10 00:13:52.251168 containerd[1430]: time="2025-09-10T00:13:52.251028685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-9zjzp,Uid:d1a71852-d2fe-4382-8a25-a3d286247a75,Namespace:calico-system,Attempt:1,} returns sandbox id \"e874f8a086ccd5e2aca1101fb74180dd7b1e6345e182eb750c32bcdfc7a47a4d\"" Sep 10 00:13:52.261008 containerd[1430]: 2025-09-10 00:13:52.002 [INFO][4458] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 10 00:13:52.261008 containerd[1430]: 2025-09-10 00:13:52.022 [INFO][4458] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7f764b5b64--dd89z-eth0 calico-kube-controllers-7f764b5b64- calico-system 2f58e97e-6e12-4135-b90c-0a1b0b407422 966 0 2025-09-10 00:13:32 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7f764b5b64 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7f764b5b64-dd89z eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie40bc9eb9c4 [] [] }} ContainerID="0a27bced0f808e0bec550c849e07387a6f59054710e9802eb2a8367c7616872e" Namespace="calico-system" Pod="calico-kube-controllers-7f764b5b64-dd89z" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f764b5b64--dd89z-" Sep 10 00:13:52.261008 containerd[1430]: 2025-09-10 00:13:52.022 [INFO][4458] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="0a27bced0f808e0bec550c849e07387a6f59054710e9802eb2a8367c7616872e" Namespace="calico-system" Pod="calico-kube-controllers-7f764b5b64-dd89z" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f764b5b64--dd89z-eth0" Sep 10 00:13:52.261008 containerd[1430]: 2025-09-10 00:13:52.091 [INFO][4490] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0a27bced0f808e0bec550c849e07387a6f59054710e9802eb2a8367c7616872e" HandleID="k8s-pod-network.0a27bced0f808e0bec550c849e07387a6f59054710e9802eb2a8367c7616872e" Workload="localhost-k8s-calico--kube--controllers--7f764b5b64--dd89z-eth0" Sep 10 00:13:52.261008 containerd[1430]: 2025-09-10 00:13:52.091 [INFO][4490] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0a27bced0f808e0bec550c849e07387a6f59054710e9802eb2a8367c7616872e" HandleID="k8s-pod-network.0a27bced0f808e0bec550c849e07387a6f59054710e9802eb2a8367c7616872e" Workload="localhost-k8s-calico--kube--controllers--7f764b5b64--dd89z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d5a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7f764b5b64-dd89z", "timestamp":"2025-09-10 00:13:52.091056219 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 00:13:52.261008 containerd[1430]: 2025-09-10 00:13:52.091 [INFO][4490] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:13:52.261008 containerd[1430]: 2025-09-10 00:13:52.119 [INFO][4490] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 00:13:52.261008 containerd[1430]: 2025-09-10 00:13:52.119 [INFO][4490] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 00:13:52.261008 containerd[1430]: 2025-09-10 00:13:52.188 [INFO][4490] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0a27bced0f808e0bec550c849e07387a6f59054710e9802eb2a8367c7616872e" host="localhost" Sep 10 00:13:52.261008 containerd[1430]: 2025-09-10 00:13:52.197 [INFO][4490] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 00:13:52.261008 containerd[1430]: 2025-09-10 00:13:52.203 [INFO][4490] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 00:13:52.261008 containerd[1430]: 2025-09-10 00:13:52.206 [INFO][4490] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 00:13:52.261008 containerd[1430]: 2025-09-10 00:13:52.211 [INFO][4490] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 00:13:52.261008 containerd[1430]: 2025-09-10 00:13:52.211 [INFO][4490] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0a27bced0f808e0bec550c849e07387a6f59054710e9802eb2a8367c7616872e" host="localhost" Sep 10 00:13:52.261008 containerd[1430]: 2025-09-10 00:13:52.213 [INFO][4490] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0a27bced0f808e0bec550c849e07387a6f59054710e9802eb2a8367c7616872e Sep 10 00:13:52.261008 containerd[1430]: 2025-09-10 00:13:52.219 [INFO][4490] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0a27bced0f808e0bec550c849e07387a6f59054710e9802eb2a8367c7616872e" host="localhost" Sep 10 00:13:52.261008 containerd[1430]: 2025-09-10 00:13:52.228 [INFO][4490] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.0a27bced0f808e0bec550c849e07387a6f59054710e9802eb2a8367c7616872e" host="localhost" Sep 10 00:13:52.261008 containerd[1430]: 2025-09-10 00:13:52.228 [INFO][4490] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.0a27bced0f808e0bec550c849e07387a6f59054710e9802eb2a8367c7616872e" host="localhost" Sep 10 00:13:52.261008 containerd[1430]: 2025-09-10 00:13:52.228 [INFO][4490] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:13:52.261008 containerd[1430]: 2025-09-10 00:13:52.228 [INFO][4490] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="0a27bced0f808e0bec550c849e07387a6f59054710e9802eb2a8367c7616872e" HandleID="k8s-pod-network.0a27bced0f808e0bec550c849e07387a6f59054710e9802eb2a8367c7616872e" Workload="localhost-k8s-calico--kube--controllers--7f764b5b64--dd89z-eth0" Sep 10 00:13:52.261665 containerd[1430]: 2025-09-10 00:13:52.233 [INFO][4458] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0a27bced0f808e0bec550c849e07387a6f59054710e9802eb2a8367c7616872e" Namespace="calico-system" Pod="calico-kube-controllers-7f764b5b64-dd89z" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f764b5b64--dd89z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7f764b5b64--dd89z-eth0", GenerateName:"calico-kube-controllers-7f764b5b64-", Namespace:"calico-system", SelfLink:"", UID:"2f58e97e-6e12-4135-b90c-0a1b0b407422", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 13, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f764b5b64", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7f764b5b64-dd89z", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie40bc9eb9c4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:13:52.261665 containerd[1430]: 2025-09-10 00:13:52.233 [INFO][4458] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="0a27bced0f808e0bec550c849e07387a6f59054710e9802eb2a8367c7616872e" Namespace="calico-system" Pod="calico-kube-controllers-7f764b5b64-dd89z" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f764b5b64--dd89z-eth0" Sep 10 00:13:52.261665 containerd[1430]: 2025-09-10 00:13:52.233 [INFO][4458] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie40bc9eb9c4 ContainerID="0a27bced0f808e0bec550c849e07387a6f59054710e9802eb2a8367c7616872e" Namespace="calico-system" Pod="calico-kube-controllers-7f764b5b64-dd89z" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f764b5b64--dd89z-eth0" Sep 10 00:13:52.261665 containerd[1430]: 2025-09-10 00:13:52.244 [INFO][4458] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0a27bced0f808e0bec550c849e07387a6f59054710e9802eb2a8367c7616872e" Namespace="calico-system" Pod="calico-kube-controllers-7f764b5b64-dd89z" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f764b5b64--dd89z-eth0" Sep 10 00:13:52.261665 containerd[1430]: 
2025-09-10 00:13:52.246 [INFO][4458] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0a27bced0f808e0bec550c849e07387a6f59054710e9802eb2a8367c7616872e" Namespace="calico-system" Pod="calico-kube-controllers-7f764b5b64-dd89z" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f764b5b64--dd89z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7f764b5b64--dd89z-eth0", GenerateName:"calico-kube-controllers-7f764b5b64-", Namespace:"calico-system", SelfLink:"", UID:"2f58e97e-6e12-4135-b90c-0a1b0b407422", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 13, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f764b5b64", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0a27bced0f808e0bec550c849e07387a6f59054710e9802eb2a8367c7616872e", Pod:"calico-kube-controllers-7f764b5b64-dd89z", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie40bc9eb9c4", MAC:"da:b2:4c:b0:9f:70", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:13:52.261665 containerd[1430]: 
2025-09-10 00:13:52.259 [INFO][4458] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0a27bced0f808e0bec550c849e07387a6f59054710e9802eb2a8367c7616872e" Namespace="calico-system" Pod="calico-kube-controllers-7f764b5b64-dd89z" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f764b5b64--dd89z-eth0" Sep 10 00:13:52.279785 containerd[1430]: time="2025-09-10T00:13:52.279535165Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:13:52.279785 containerd[1430]: time="2025-09-10T00:13:52.279644206Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:13:52.279785 containerd[1430]: time="2025-09-10T00:13:52.279662006Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:13:52.280153 containerd[1430]: time="2025-09-10T00:13:52.279978288Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:13:52.299675 systemd[1]: Started cri-containerd-0a27bced0f808e0bec550c849e07387a6f59054710e9802eb2a8367c7616872e.scope - libcontainer container 0a27bced0f808e0bec550c849e07387a6f59054710e9802eb2a8367c7616872e. 
Sep 10 00:13:52.313434 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 00:13:52.338302 containerd[1430]: time="2025-09-10T00:13:52.338240578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f764b5b64-dd89z,Uid:2f58e97e-6e12-4135-b90c-0a1b0b407422,Namespace:calico-system,Attempt:1,} returns sandbox id \"0a27bced0f808e0bec550c849e07387a6f59054710e9802eb2a8367c7616872e\"" Sep 10 00:13:52.340800 systemd-networkd[1370]: calia0d3215865b: Link UP Sep 10 00:13:52.341214 systemd-networkd[1370]: calia0d3215865b: Gained carrier Sep 10 00:13:52.365271 containerd[1430]: 2025-09-10 00:13:52.034 [INFO][4475] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 10 00:13:52.365271 containerd[1430]: 2025-09-10 00:13:52.060 [INFO][4475] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--9575c-eth0 coredns-7c65d6cfc9- kube-system 138bcf10-bfb3-4e83-915f-37b2d9c80ead 967 0 2025-09-10 00:13:20 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-9575c eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia0d3215865b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="78a011b78a26092b5c8f33ef62e4f842e4057dd80a615447c08ae92476b92627" Namespace="kube-system" Pod="coredns-7c65d6cfc9-9575c" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--9575c-" Sep 10 00:13:52.365271 containerd[1430]: 2025-09-10 00:13:52.060 [INFO][4475] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="78a011b78a26092b5c8f33ef62e4f842e4057dd80a615447c08ae92476b92627" Namespace="kube-system" Pod="coredns-7c65d6cfc9-9575c" 
WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--9575c-eth0" Sep 10 00:13:52.365271 containerd[1430]: 2025-09-10 00:13:52.111 [INFO][4505] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="78a011b78a26092b5c8f33ef62e4f842e4057dd80a615447c08ae92476b92627" HandleID="k8s-pod-network.78a011b78a26092b5c8f33ef62e4f842e4057dd80a615447c08ae92476b92627" Workload="localhost-k8s-coredns--7c65d6cfc9--9575c-eth0" Sep 10 00:13:52.365271 containerd[1430]: 2025-09-10 00:13:52.111 [INFO][4505] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="78a011b78a26092b5c8f33ef62e4f842e4057dd80a615447c08ae92476b92627" HandleID="k8s-pod-network.78a011b78a26092b5c8f33ef62e4f842e4057dd80a615447c08ae92476b92627" Workload="localhost-k8s-coredns--7c65d6cfc9--9575c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000533950), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-9575c", "timestamp":"2025-09-10 00:13:52.111061827 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 00:13:52.365271 containerd[1430]: 2025-09-10 00:13:52.111 [INFO][4505] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:13:52.365271 containerd[1430]: 2025-09-10 00:13:52.229 [INFO][4505] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 00:13:52.365271 containerd[1430]: 2025-09-10 00:13:52.229 [INFO][4505] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 00:13:52.365271 containerd[1430]: 2025-09-10 00:13:52.288 [INFO][4505] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.78a011b78a26092b5c8f33ef62e4f842e4057dd80a615447c08ae92476b92627" host="localhost" Sep 10 00:13:52.365271 containerd[1430]: 2025-09-10 00:13:52.298 [INFO][4505] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 00:13:52.365271 containerd[1430]: 2025-09-10 00:13:52.306 [INFO][4505] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 00:13:52.365271 containerd[1430]: 2025-09-10 00:13:52.309 [INFO][4505] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 00:13:52.365271 containerd[1430]: 2025-09-10 00:13:52.313 [INFO][4505] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 00:13:52.365271 containerd[1430]: 2025-09-10 00:13:52.313 [INFO][4505] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.78a011b78a26092b5c8f33ef62e4f842e4057dd80a615447c08ae92476b92627" host="localhost" Sep 10 00:13:52.365271 containerd[1430]: 2025-09-10 00:13:52.315 [INFO][4505] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.78a011b78a26092b5c8f33ef62e4f842e4057dd80a615447c08ae92476b92627 Sep 10 00:13:52.365271 containerd[1430]: 2025-09-10 00:13:52.320 [INFO][4505] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.78a011b78a26092b5c8f33ef62e4f842e4057dd80a615447c08ae92476b92627" host="localhost" Sep 10 00:13:52.365271 containerd[1430]: 2025-09-10 00:13:52.332 [INFO][4505] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.78a011b78a26092b5c8f33ef62e4f842e4057dd80a615447c08ae92476b92627" host="localhost" Sep 10 00:13:52.365271 containerd[1430]: 2025-09-10 00:13:52.332 [INFO][4505] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.78a011b78a26092b5c8f33ef62e4f842e4057dd80a615447c08ae92476b92627" host="localhost" Sep 10 00:13:52.365271 containerd[1430]: 2025-09-10 00:13:52.332 [INFO][4505] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:13:52.365271 containerd[1430]: 2025-09-10 00:13:52.332 [INFO][4505] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="78a011b78a26092b5c8f33ef62e4f842e4057dd80a615447c08ae92476b92627" HandleID="k8s-pod-network.78a011b78a26092b5c8f33ef62e4f842e4057dd80a615447c08ae92476b92627" Workload="localhost-k8s-coredns--7c65d6cfc9--9575c-eth0" Sep 10 00:13:52.365975 containerd[1430]: 2025-09-10 00:13:52.335 [INFO][4475] cni-plugin/k8s.go 418: Populated endpoint ContainerID="78a011b78a26092b5c8f33ef62e4f842e4057dd80a615447c08ae92476b92627" Namespace="kube-system" Pod="coredns-7c65d6cfc9-9575c" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--9575c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--9575c-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"138bcf10-bfb3-4e83-915f-37b2d9c80ead", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 13, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-9575c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia0d3215865b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:13:52.365975 containerd[1430]: 2025-09-10 00:13:52.335 [INFO][4475] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="78a011b78a26092b5c8f33ef62e4f842e4057dd80a615447c08ae92476b92627" Namespace="kube-system" Pod="coredns-7c65d6cfc9-9575c" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--9575c-eth0" Sep 10 00:13:52.365975 containerd[1430]: 2025-09-10 00:13:52.335 [INFO][4475] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia0d3215865b ContainerID="78a011b78a26092b5c8f33ef62e4f842e4057dd80a615447c08ae92476b92627" Namespace="kube-system" Pod="coredns-7c65d6cfc9-9575c" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--9575c-eth0" Sep 10 00:13:52.365975 containerd[1430]: 2025-09-10 00:13:52.341 [INFO][4475] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="78a011b78a26092b5c8f33ef62e4f842e4057dd80a615447c08ae92476b92627" Namespace="kube-system" Pod="coredns-7c65d6cfc9-9575c" 
WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--9575c-eth0" Sep 10 00:13:52.365975 containerd[1430]: 2025-09-10 00:13:52.348 [INFO][4475] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="78a011b78a26092b5c8f33ef62e4f842e4057dd80a615447c08ae92476b92627" Namespace="kube-system" Pod="coredns-7c65d6cfc9-9575c" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--9575c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--9575c-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"138bcf10-bfb3-4e83-915f-37b2d9c80ead", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 13, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"78a011b78a26092b5c8f33ef62e4f842e4057dd80a615447c08ae92476b92627", Pod:"coredns-7c65d6cfc9-9575c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia0d3215865b", MAC:"be:28:e1:2e:da:41", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:13:52.365975 containerd[1430]: 2025-09-10 00:13:52.362 [INFO][4475] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="78a011b78a26092b5c8f33ef62e4f842e4057dd80a615447c08ae92476b92627" Namespace="kube-system" Pod="coredns-7c65d6cfc9-9575c" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--9575c-eth0" Sep 10 00:13:52.388433 containerd[1430]: time="2025-09-10T00:13:52.388214639Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:13:52.388433 containerd[1430]: time="2025-09-10T00:13:52.388282199Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:13:52.388433 containerd[1430]: time="2025-09-10T00:13:52.388294039Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:13:52.388433 containerd[1430]: time="2025-09-10T00:13:52.388380200Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:13:52.407670 systemd[1]: Started cri-containerd-78a011b78a26092b5c8f33ef62e4f842e4057dd80a615447c08ae92476b92627.scope - libcontainer container 78a011b78a26092b5c8f33ef62e4f842e4057dd80a615447c08ae92476b92627. 
Sep 10 00:13:52.418642 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 00:13:52.434397 containerd[1430]: time="2025-09-10T00:13:52.434362947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-9575c,Uid:138bcf10-bfb3-4e83-915f-37b2d9c80ead,Namespace:kube-system,Attempt:1,} returns sandbox id \"78a011b78a26092b5c8f33ef62e4f842e4057dd80a615447c08ae92476b92627\"" Sep 10 00:13:52.435410 kubelet[2471]: E0910 00:13:52.435302 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:13:52.438813 containerd[1430]: time="2025-09-10T00:13:52.438763304Z" level=info msg="CreateContainer within sandbox \"78a011b78a26092b5c8f33ef62e4f842e4057dd80a615447c08ae92476b92627\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 10 00:13:52.455161 containerd[1430]: time="2025-09-10T00:13:52.455062041Z" level=info msg="CreateContainer within sandbox \"78a011b78a26092b5c8f33ef62e4f842e4057dd80a615447c08ae92476b92627\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"54690193c197c1a4b680b9381c58fa7ddde20c9ba49271988d29ea78fb911b6e\"" Sep 10 00:13:52.455695 containerd[1430]: time="2025-09-10T00:13:52.455667406Z" level=info msg="StartContainer for \"54690193c197c1a4b680b9381c58fa7ddde20c9ba49271988d29ea78fb911b6e\"" Sep 10 00:13:52.481690 systemd[1]: Started cri-containerd-54690193c197c1a4b680b9381c58fa7ddde20c9ba49271988d29ea78fb911b6e.scope - libcontainer container 54690193c197c1a4b680b9381c58fa7ddde20c9ba49271988d29ea78fb911b6e. 
Sep 10 00:13:52.511372 containerd[1430]: time="2025-09-10T00:13:52.511108272Z" level=info msg="StartContainer for \"54690193c197c1a4b680b9381c58fa7ddde20c9ba49271988d29ea78fb911b6e\" returns successfully" Sep 10 00:13:52.630662 systemd-networkd[1370]: calid50230b23d2: Gained IPv6LL Sep 10 00:13:52.830007 containerd[1430]: time="2025-09-10T00:13:52.829600832Z" level=info msg="StopPodSandbox for \"d69084ff2923cc958b95e61a07b97aed2f0bbea5caddd3a838f8758611a00248\"" Sep 10 00:13:52.926011 containerd[1430]: 2025-09-10 00:13:52.879 [INFO][4737] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d69084ff2923cc958b95e61a07b97aed2f0bbea5caddd3a838f8758611a00248" Sep 10 00:13:52.926011 containerd[1430]: 2025-09-10 00:13:52.879 [INFO][4737] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d69084ff2923cc958b95e61a07b97aed2f0bbea5caddd3a838f8758611a00248" iface="eth0" netns="/var/run/netns/cni-4fc8c4b8-4468-24e9-d654-03fa0058947f" Sep 10 00:13:52.926011 containerd[1430]: 2025-09-10 00:13:52.879 [INFO][4737] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d69084ff2923cc958b95e61a07b97aed2f0bbea5caddd3a838f8758611a00248" iface="eth0" netns="/var/run/netns/cni-4fc8c4b8-4468-24e9-d654-03fa0058947f" Sep 10 00:13:52.926011 containerd[1430]: 2025-09-10 00:13:52.880 [INFO][4737] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d69084ff2923cc958b95e61a07b97aed2f0bbea5caddd3a838f8758611a00248" iface="eth0" netns="/var/run/netns/cni-4fc8c4b8-4468-24e9-d654-03fa0058947f" Sep 10 00:13:52.926011 containerd[1430]: 2025-09-10 00:13:52.880 [INFO][4737] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d69084ff2923cc958b95e61a07b97aed2f0bbea5caddd3a838f8758611a00248" Sep 10 00:13:52.926011 containerd[1430]: 2025-09-10 00:13:52.881 [INFO][4737] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d69084ff2923cc958b95e61a07b97aed2f0bbea5caddd3a838f8758611a00248" Sep 10 00:13:52.926011 containerd[1430]: 2025-09-10 00:13:52.910 [INFO][4746] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d69084ff2923cc958b95e61a07b97aed2f0bbea5caddd3a838f8758611a00248" HandleID="k8s-pod-network.d69084ff2923cc958b95e61a07b97aed2f0bbea5caddd3a838f8758611a00248" Workload="localhost-k8s-csi--node--driver--6btnb-eth0" Sep 10 00:13:52.926011 containerd[1430]: 2025-09-10 00:13:52.910 [INFO][4746] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:13:52.926011 containerd[1430]: 2025-09-10 00:13:52.910 [INFO][4746] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:13:52.926011 containerd[1430]: 2025-09-10 00:13:52.920 [WARNING][4746] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d69084ff2923cc958b95e61a07b97aed2f0bbea5caddd3a838f8758611a00248" HandleID="k8s-pod-network.d69084ff2923cc958b95e61a07b97aed2f0bbea5caddd3a838f8758611a00248" Workload="localhost-k8s-csi--node--driver--6btnb-eth0" Sep 10 00:13:52.926011 containerd[1430]: 2025-09-10 00:13:52.920 [INFO][4746] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d69084ff2923cc958b95e61a07b97aed2f0bbea5caddd3a838f8758611a00248" HandleID="k8s-pod-network.d69084ff2923cc958b95e61a07b97aed2f0bbea5caddd3a838f8758611a00248" Workload="localhost-k8s-csi--node--driver--6btnb-eth0" Sep 10 00:13:52.926011 containerd[1430]: 2025-09-10 00:13:52.922 [INFO][4746] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:13:52.926011 containerd[1430]: 2025-09-10 00:13:52.924 [INFO][4737] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d69084ff2923cc958b95e61a07b97aed2f0bbea5caddd3a838f8758611a00248" Sep 10 00:13:52.927182 containerd[1430]: time="2025-09-10T00:13:52.926115283Z" level=info msg="TearDown network for sandbox \"d69084ff2923cc958b95e61a07b97aed2f0bbea5caddd3a838f8758611a00248\" successfully" Sep 10 00:13:52.927182 containerd[1430]: time="2025-09-10T00:13:52.926150724Z" level=info msg="StopPodSandbox for \"d69084ff2923cc958b95e61a07b97aed2f0bbea5caddd3a838f8758611a00248\" returns successfully" Sep 10 00:13:52.927182 containerd[1430]: time="2025-09-10T00:13:52.926820729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6btnb,Uid:09c5ed53-4869-4b1c-8a65-b62ac3f88415,Namespace:calico-system,Attempt:1,}" Sep 10 00:13:52.954078 systemd[1]: run-netns-cni\x2d4fc8c4b8\x2d4468\x2d24e9\x2dd654\x2d03fa0058947f.mount: Deactivated successfully. Sep 10 00:13:52.954177 systemd[1]: run-netns-cni\x2d49702ec4\x2d6458\x2dec48\x2d1521\x2d9d69bbba0677.mount: Deactivated successfully. 
Sep 10 00:13:52.982517 kubelet[2471]: E0910 00:13:52.982338 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:13:53.017092 kubelet[2471]: I0910 00:13:53.016923 2471 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-9575c" podStartSLOduration=33.016902643 podStartE2EDuration="33.016902643s" podCreationTimestamp="2025-09-10 00:13:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:13:52.996816318 +0000 UTC m=+38.253639091" watchObservedRunningTime="2025-09-10 00:13:53.016902643 +0000 UTC m=+38.273725376" Sep 10 00:13:53.090404 systemd-networkd[1370]: cali808ac0ad1fd: Link UP Sep 10 00:13:53.093473 systemd-networkd[1370]: cali808ac0ad1fd: Gained carrier Sep 10 00:13:53.109468 containerd[1430]: 2025-09-10 00:13:52.965 [INFO][4754] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 10 00:13:53.109468 containerd[1430]: 2025-09-10 00:13:52.989 [INFO][4754] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--6btnb-eth0 csi-node-driver- calico-system 09c5ed53-4869-4b1c-8a65-b62ac3f88415 987 0 2025-09-10 00:13:32 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-6btnb eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali808ac0ad1fd [] [] }} ContainerID="4434cbe969e6287ab844dffe2ac371ff4460c05bd3d9a16caccb0b53e084b8c7" Namespace="calico-system" Pod="csi-node-driver-6btnb" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--6btnb-" Sep 10 00:13:53.109468 containerd[1430]: 2025-09-10 00:13:52.990 [INFO][4754] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4434cbe969e6287ab844dffe2ac371ff4460c05bd3d9a16caccb0b53e084b8c7" Namespace="calico-system" Pod="csi-node-driver-6btnb" WorkloadEndpoint="localhost-k8s-csi--node--driver--6btnb-eth0" Sep 10 00:13:53.109468 containerd[1430]: 2025-09-10 00:13:53.041 [INFO][4770] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4434cbe969e6287ab844dffe2ac371ff4460c05bd3d9a16caccb0b53e084b8c7" HandleID="k8s-pod-network.4434cbe969e6287ab844dffe2ac371ff4460c05bd3d9a16caccb0b53e084b8c7" Workload="localhost-k8s-csi--node--driver--6btnb-eth0" Sep 10 00:13:53.109468 containerd[1430]: 2025-09-10 00:13:53.041 [INFO][4770] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4434cbe969e6287ab844dffe2ac371ff4460c05bd3d9a16caccb0b53e084b8c7" HandleID="k8s-pod-network.4434cbe969e6287ab844dffe2ac371ff4460c05bd3d9a16caccb0b53e084b8c7" Workload="localhost-k8s-csi--node--driver--6btnb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400012e4f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-6btnb", "timestamp":"2025-09-10 00:13:53.041108481 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 00:13:53.109468 containerd[1430]: 2025-09-10 00:13:53.041 [INFO][4770] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:13:53.109468 containerd[1430]: 2025-09-10 00:13:53.041 [INFO][4770] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 00:13:53.109468 containerd[1430]: 2025-09-10 00:13:53.041 [INFO][4770] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 00:13:53.109468 containerd[1430]: 2025-09-10 00:13:53.052 [INFO][4770] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4434cbe969e6287ab844dffe2ac371ff4460c05bd3d9a16caccb0b53e084b8c7" host="localhost" Sep 10 00:13:53.109468 containerd[1430]: 2025-09-10 00:13:53.056 [INFO][4770] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 00:13:53.109468 containerd[1430]: 2025-09-10 00:13:53.061 [INFO][4770] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 00:13:53.109468 containerd[1430]: 2025-09-10 00:13:53.063 [INFO][4770] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 00:13:53.109468 containerd[1430]: 2025-09-10 00:13:53.065 [INFO][4770] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 00:13:53.109468 containerd[1430]: 2025-09-10 00:13:53.065 [INFO][4770] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4434cbe969e6287ab844dffe2ac371ff4460c05bd3d9a16caccb0b53e084b8c7" host="localhost" Sep 10 00:13:53.109468 containerd[1430]: 2025-09-10 00:13:53.067 [INFO][4770] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4434cbe969e6287ab844dffe2ac371ff4460c05bd3d9a16caccb0b53e084b8c7 Sep 10 00:13:53.109468 containerd[1430]: 2025-09-10 00:13:53.073 [INFO][4770] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4434cbe969e6287ab844dffe2ac371ff4460c05bd3d9a16caccb0b53e084b8c7" host="localhost" Sep 10 00:13:53.109468 containerd[1430]: 2025-09-10 00:13:53.083 [INFO][4770] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.4434cbe969e6287ab844dffe2ac371ff4460c05bd3d9a16caccb0b53e084b8c7" host="localhost" Sep 10 00:13:53.109468 containerd[1430]: 2025-09-10 00:13:53.084 [INFO][4770] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.4434cbe969e6287ab844dffe2ac371ff4460c05bd3d9a16caccb0b53e084b8c7" host="localhost" Sep 10 00:13:53.109468 containerd[1430]: 2025-09-10 00:13:53.084 [INFO][4770] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:13:53.109468 containerd[1430]: 2025-09-10 00:13:53.084 [INFO][4770] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="4434cbe969e6287ab844dffe2ac371ff4460c05bd3d9a16caccb0b53e084b8c7" HandleID="k8s-pod-network.4434cbe969e6287ab844dffe2ac371ff4460c05bd3d9a16caccb0b53e084b8c7" Workload="localhost-k8s-csi--node--driver--6btnb-eth0" Sep 10 00:13:53.110132 containerd[1430]: 2025-09-10 00:13:53.087 [INFO][4754] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4434cbe969e6287ab844dffe2ac371ff4460c05bd3d9a16caccb0b53e084b8c7" Namespace="calico-system" Pod="csi-node-driver-6btnb" WorkloadEndpoint="localhost-k8s-csi--node--driver--6btnb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--6btnb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"09c5ed53-4869-4b1c-8a65-b62ac3f88415", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 13, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-6btnb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali808ac0ad1fd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:13:53.110132 containerd[1430]: 2025-09-10 00:13:53.087 [INFO][4754] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="4434cbe969e6287ab844dffe2ac371ff4460c05bd3d9a16caccb0b53e084b8c7" Namespace="calico-system" Pod="csi-node-driver-6btnb" WorkloadEndpoint="localhost-k8s-csi--node--driver--6btnb-eth0" Sep 10 00:13:53.110132 containerd[1430]: 2025-09-10 00:13:53.087 [INFO][4754] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali808ac0ad1fd ContainerID="4434cbe969e6287ab844dffe2ac371ff4460c05bd3d9a16caccb0b53e084b8c7" Namespace="calico-system" Pod="csi-node-driver-6btnb" WorkloadEndpoint="localhost-k8s-csi--node--driver--6btnb-eth0" Sep 10 00:13:53.110132 containerd[1430]: 2025-09-10 00:13:53.093 [INFO][4754] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4434cbe969e6287ab844dffe2ac371ff4460c05bd3d9a16caccb0b53e084b8c7" Namespace="calico-system" Pod="csi-node-driver-6btnb" WorkloadEndpoint="localhost-k8s-csi--node--driver--6btnb-eth0" Sep 10 00:13:53.110132 containerd[1430]: 2025-09-10 00:13:53.094 [INFO][4754] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4434cbe969e6287ab844dffe2ac371ff4460c05bd3d9a16caccb0b53e084b8c7" 
Namespace="calico-system" Pod="csi-node-driver-6btnb" WorkloadEndpoint="localhost-k8s-csi--node--driver--6btnb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--6btnb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"09c5ed53-4869-4b1c-8a65-b62ac3f88415", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 13, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4434cbe969e6287ab844dffe2ac371ff4460c05bd3d9a16caccb0b53e084b8c7", Pod:"csi-node-driver-6btnb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali808ac0ad1fd", MAC:"5a:b5:9d:88:55:b8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:13:53.110132 containerd[1430]: 2025-09-10 00:13:53.106 [INFO][4754] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4434cbe969e6287ab844dffe2ac371ff4460c05bd3d9a16caccb0b53e084b8c7" Namespace="calico-system" Pod="csi-node-driver-6btnb" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--6btnb-eth0" Sep 10 00:13:53.188214 containerd[1430]: time="2025-09-10T00:13:53.187651799Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:13:53.188214 containerd[1430]: time="2025-09-10T00:13:53.187717319Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:13:53.188214 containerd[1430]: time="2025-09-10T00:13:53.187732719Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:13:53.188214 containerd[1430]: time="2025-09-10T00:13:53.187809120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:13:53.216629 systemd[1]: Started cri-containerd-4434cbe969e6287ab844dffe2ac371ff4460c05bd3d9a16caccb0b53e084b8c7.scope - libcontainer container 4434cbe969e6287ab844dffe2ac371ff4460c05bd3d9a16caccb0b53e084b8c7. 
Sep 10 00:13:53.228481 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 00:13:53.249481 containerd[1430]: time="2025-09-10T00:13:53.249410743Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:13:53.251036 containerd[1430]: time="2025-09-10T00:13:53.250925956Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807" Sep 10 00:13:53.251910 containerd[1430]: time="2025-09-10T00:13:53.251664002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6btnb,Uid:09c5ed53-4869-4b1c-8a65-b62ac3f88415,Namespace:calico-system,Attempt:1,} returns sandbox id \"4434cbe969e6287ab844dffe2ac371ff4460c05bd3d9a16caccb0b53e084b8c7\"" Sep 10 00:13:53.252534 containerd[1430]: time="2025-09-10T00:13:53.252387048Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:13:53.255710 containerd[1430]: time="2025-09-10T00:13:53.255651834Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:13:53.257029 containerd[1430]: time="2025-09-10T00:13:53.256987965Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 2.076337333s" Sep 10 00:13:53.257212 containerd[1430]: time="2025-09-10T00:13:53.257111406Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 10 00:13:53.259018 containerd[1430]: time="2025-09-10T00:13:53.258893421Z" level=info msg="CreateContainer within sandbox \"cdd9848092fa05e43c969e6cffae700f727f78c089e3d39f4287d0f93fdad33a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 10 00:13:53.259453 containerd[1430]: time="2025-09-10T00:13:53.259284304Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 10 00:13:53.271592 systemd-networkd[1370]: calie40bc9eb9c4: Gained IPv6LL Sep 10 00:13:53.275072 containerd[1430]: time="2025-09-10T00:13:53.274874552Z" level=info msg="CreateContainer within sandbox \"cdd9848092fa05e43c969e6cffae700f727f78c089e3d39f4287d0f93fdad33a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"80088e4f89e43f1217aa38c1136154de263c79553e48f4d6513348705e7683fc\"" Sep 10 00:13:53.276404 containerd[1430]: time="2025-09-10T00:13:53.275402596Z" level=info msg="StartContainer for \"80088e4f89e43f1217aa38c1136154de263c79553e48f4d6513348705e7683fc\"" Sep 10 00:13:53.305667 systemd[1]: Started cri-containerd-80088e4f89e43f1217aa38c1136154de263c79553e48f4d6513348705e7683fc.scope - libcontainer container 80088e4f89e43f1217aa38c1136154de263c79553e48f4d6513348705e7683fc. 
Sep 10 00:13:53.359386 containerd[1430]: time="2025-09-10T00:13:53.359213601Z" level=info msg="StartContainer for \"80088e4f89e43f1217aa38c1136154de263c79553e48f4d6513348705e7683fc\" returns successfully" Sep 10 00:13:53.526632 systemd-networkd[1370]: calica46d2300cd: Gained IPv6LL Sep 10 00:13:53.829972 containerd[1430]: time="2025-09-10T00:13:53.829704566Z" level=info msg="StopPodSandbox for \"d62eebdda38d4548d2e3f52b4f8523eec446425cf32213f102134b444b23ebf7\"" Sep 10 00:13:53.830355 containerd[1430]: time="2025-09-10T00:13:53.830126129Z" level=info msg="StopPodSandbox for \"2d43f127c1ddba66236343c437ddd27b49d78d0eaa7075097b8975be4eb06d4c\"" Sep 10 00:13:53.932354 containerd[1430]: 2025-09-10 00:13:53.883 [INFO][4922] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d62eebdda38d4548d2e3f52b4f8523eec446425cf32213f102134b444b23ebf7" Sep 10 00:13:53.932354 containerd[1430]: 2025-09-10 00:13:53.884 [INFO][4922] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d62eebdda38d4548d2e3f52b4f8523eec446425cf32213f102134b444b23ebf7" iface="eth0" netns="/var/run/netns/cni-1268a8f8-c376-ed49-9326-dd11a56f5be7" Sep 10 00:13:53.932354 containerd[1430]: 2025-09-10 00:13:53.885 [INFO][4922] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d62eebdda38d4548d2e3f52b4f8523eec446425cf32213f102134b444b23ebf7" iface="eth0" netns="/var/run/netns/cni-1268a8f8-c376-ed49-9326-dd11a56f5be7" Sep 10 00:13:53.932354 containerd[1430]: 2025-09-10 00:13:53.886 [INFO][4922] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d62eebdda38d4548d2e3f52b4f8523eec446425cf32213f102134b444b23ebf7" iface="eth0" netns="/var/run/netns/cni-1268a8f8-c376-ed49-9326-dd11a56f5be7" Sep 10 00:13:53.932354 containerd[1430]: 2025-09-10 00:13:53.886 [INFO][4922] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d62eebdda38d4548d2e3f52b4f8523eec446425cf32213f102134b444b23ebf7" Sep 10 00:13:53.932354 containerd[1430]: 2025-09-10 00:13:53.886 [INFO][4922] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d62eebdda38d4548d2e3f52b4f8523eec446425cf32213f102134b444b23ebf7" Sep 10 00:13:53.932354 containerd[1430]: 2025-09-10 00:13:53.911 [INFO][4939] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d62eebdda38d4548d2e3f52b4f8523eec446425cf32213f102134b444b23ebf7" HandleID="k8s-pod-network.d62eebdda38d4548d2e3f52b4f8523eec446425cf32213f102134b444b23ebf7" Workload="localhost-k8s-coredns--7c65d6cfc9--fprdx-eth0" Sep 10 00:13:53.932354 containerd[1430]: 2025-09-10 00:13:53.911 [INFO][4939] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:13:53.932354 containerd[1430]: 2025-09-10 00:13:53.911 [INFO][4939] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:13:53.932354 containerd[1430]: 2025-09-10 00:13:53.924 [WARNING][4939] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d62eebdda38d4548d2e3f52b4f8523eec446425cf32213f102134b444b23ebf7" HandleID="k8s-pod-network.d62eebdda38d4548d2e3f52b4f8523eec446425cf32213f102134b444b23ebf7" Workload="localhost-k8s-coredns--7c65d6cfc9--fprdx-eth0" Sep 10 00:13:53.932354 containerd[1430]: 2025-09-10 00:13:53.924 [INFO][4939] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d62eebdda38d4548d2e3f52b4f8523eec446425cf32213f102134b444b23ebf7" HandleID="k8s-pod-network.d62eebdda38d4548d2e3f52b4f8523eec446425cf32213f102134b444b23ebf7" Workload="localhost-k8s-coredns--7c65d6cfc9--fprdx-eth0" Sep 10 00:13:53.932354 containerd[1430]: 2025-09-10 00:13:53.927 [INFO][4939] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:13:53.932354 containerd[1430]: 2025-09-10 00:13:53.930 [INFO][4922] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d62eebdda38d4548d2e3f52b4f8523eec446425cf32213f102134b444b23ebf7" Sep 10 00:13:53.932941 containerd[1430]: time="2025-09-10T00:13:53.932466926Z" level=info msg="TearDown network for sandbox \"d62eebdda38d4548d2e3f52b4f8523eec446425cf32213f102134b444b23ebf7\" successfully" Sep 10 00:13:53.932941 containerd[1430]: time="2025-09-10T00:13:53.932490486Z" level=info msg="StopPodSandbox for \"d62eebdda38d4548d2e3f52b4f8523eec446425cf32213f102134b444b23ebf7\" returns successfully" Sep 10 00:13:53.933846 kubelet[2471]: E0910 00:13:53.933346 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:13:53.935185 containerd[1430]: time="2025-09-10T00:13:53.934857385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-fprdx,Uid:778ab3ef-74e3-4341-b35b-556c4e8acdd5,Namespace:kube-system,Attempt:1,}" Sep 10 00:13:53.943741 containerd[1430]: 2025-09-10 00:13:53.898 [INFO][4927] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="2d43f127c1ddba66236343c437ddd27b49d78d0eaa7075097b8975be4eb06d4c" Sep 10 00:13:53.943741 containerd[1430]: 2025-09-10 00:13:53.899 [INFO][4927] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2d43f127c1ddba66236343c437ddd27b49d78d0eaa7075097b8975be4eb06d4c" iface="eth0" netns="/var/run/netns/cni-1eb1511b-7f50-4b38-3a23-6582f6d0b48f" Sep 10 00:13:53.943741 containerd[1430]: 2025-09-10 00:13:53.899 [INFO][4927] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2d43f127c1ddba66236343c437ddd27b49d78d0eaa7075097b8975be4eb06d4c" iface="eth0" netns="/var/run/netns/cni-1eb1511b-7f50-4b38-3a23-6582f6d0b48f" Sep 10 00:13:53.943741 containerd[1430]: 2025-09-10 00:13:53.899 [INFO][4927] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2d43f127c1ddba66236343c437ddd27b49d78d0eaa7075097b8975be4eb06d4c" iface="eth0" netns="/var/run/netns/cni-1eb1511b-7f50-4b38-3a23-6582f6d0b48f" Sep 10 00:13:53.943741 containerd[1430]: 2025-09-10 00:13:53.899 [INFO][4927] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2d43f127c1ddba66236343c437ddd27b49d78d0eaa7075097b8975be4eb06d4c" Sep 10 00:13:53.943741 containerd[1430]: 2025-09-10 00:13:53.899 [INFO][4927] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2d43f127c1ddba66236343c437ddd27b49d78d0eaa7075097b8975be4eb06d4c" Sep 10 00:13:53.943741 containerd[1430]: 2025-09-10 00:13:53.922 [INFO][4945] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2d43f127c1ddba66236343c437ddd27b49d78d0eaa7075097b8975be4eb06d4c" HandleID="k8s-pod-network.2d43f127c1ddba66236343c437ddd27b49d78d0eaa7075097b8975be4eb06d4c" Workload="localhost-k8s-calico--apiserver--6bc6768489--gxbgn-eth0" Sep 10 00:13:53.943741 containerd[1430]: 2025-09-10 00:13:53.922 [INFO][4945] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 10 00:13:53.943741 containerd[1430]: 2025-09-10 00:13:53.927 [INFO][4945] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:13:53.943741 containerd[1430]: 2025-09-10 00:13:53.936 [WARNING][4945] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2d43f127c1ddba66236343c437ddd27b49d78d0eaa7075097b8975be4eb06d4c" HandleID="k8s-pod-network.2d43f127c1ddba66236343c437ddd27b49d78d0eaa7075097b8975be4eb06d4c" Workload="localhost-k8s-calico--apiserver--6bc6768489--gxbgn-eth0" Sep 10 00:13:53.943741 containerd[1430]: 2025-09-10 00:13:53.936 [INFO][4945] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2d43f127c1ddba66236343c437ddd27b49d78d0eaa7075097b8975be4eb06d4c" HandleID="k8s-pod-network.2d43f127c1ddba66236343c437ddd27b49d78d0eaa7075097b8975be4eb06d4c" Workload="localhost-k8s-calico--apiserver--6bc6768489--gxbgn-eth0" Sep 10 00:13:53.943741 containerd[1430]: 2025-09-10 00:13:53.938 [INFO][4945] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:13:53.943741 containerd[1430]: 2025-09-10 00:13:53.940 [INFO][4927] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="2d43f127c1ddba66236343c437ddd27b49d78d0eaa7075097b8975be4eb06d4c" Sep 10 00:13:53.944418 containerd[1430]: time="2025-09-10T00:13:53.943884539Z" level=info msg="TearDown network for sandbox \"2d43f127c1ddba66236343c437ddd27b49d78d0eaa7075097b8975be4eb06d4c\" successfully" Sep 10 00:13:53.944418 containerd[1430]: time="2025-09-10T00:13:53.943903259Z" level=info msg="StopPodSandbox for \"2d43f127c1ddba66236343c437ddd27b49d78d0eaa7075097b8975be4eb06d4c\" returns successfully" Sep 10 00:13:53.944966 containerd[1430]: time="2025-09-10T00:13:53.944796826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bc6768489-gxbgn,Uid:34aa417d-a639-4656-862d-aac7f831a9b9,Namespace:calico-apiserver,Attempt:1,}" Sep 10 00:13:53.947359 systemd[1]: run-netns-cni\x2d1eb1511b\x2d7f50\x2d4b38\x2d3a23\x2d6582f6d0b48f.mount: Deactivated successfully. Sep 10 00:13:53.947460 systemd[1]: run-netns-cni\x2d1268a8f8\x2dc376\x2ded49\x2d9326\x2ddd11a56f5be7.mount: Deactivated successfully. 
Sep 10 00:13:53.998832 kubelet[2471]: E0910 00:13:53.998794 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:13:54.097261 systemd-networkd[1370]: cali4276c933052: Link UP Sep 10 00:13:54.097395 systemd-networkd[1370]: cali4276c933052: Gained carrier Sep 10 00:13:54.110719 kubelet[2471]: I0910 00:13:54.110632 2471 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6bc6768489-w628d" podStartSLOduration=24.033062614 podStartE2EDuration="26.110611877s" podCreationTimestamp="2025-09-10 00:13:28 +0000 UTC" firstStartedPulling="2025-09-10 00:13:51.180175308 +0000 UTC m=+36.436998041" lastFinishedPulling="2025-09-10 00:13:53.257724571 +0000 UTC m=+38.514547304" observedRunningTime="2025-09-10 00:13:54.01030732 +0000 UTC m=+39.267130053" watchObservedRunningTime="2025-09-10 00:13:54.110611877 +0000 UTC m=+39.367434570" Sep 10 00:13:54.113568 containerd[1430]: 2025-09-10 00:13:53.982 [INFO][4955] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 10 00:13:54.113568 containerd[1430]: 2025-09-10 00:13:54.003 [INFO][4955] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--fprdx-eth0 coredns-7c65d6cfc9- kube-system 778ab3ef-74e3-4341-b35b-556c4e8acdd5 1009 0 2025-09-10 00:13:20 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-fprdx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4276c933052 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="aeb3435b992ce3412be85016e50f6acc18819bc90799234f47f77ff6818d8c7a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-fprdx" 
WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--fprdx-" Sep 10 00:13:54.113568 containerd[1430]: 2025-09-10 00:13:54.004 [INFO][4955] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="aeb3435b992ce3412be85016e50f6acc18819bc90799234f47f77ff6818d8c7a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-fprdx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--fprdx-eth0" Sep 10 00:13:54.113568 containerd[1430]: 2025-09-10 00:13:54.041 [INFO][4987] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aeb3435b992ce3412be85016e50f6acc18819bc90799234f47f77ff6818d8c7a" HandleID="k8s-pod-network.aeb3435b992ce3412be85016e50f6acc18819bc90799234f47f77ff6818d8c7a" Workload="localhost-k8s-coredns--7c65d6cfc9--fprdx-eth0" Sep 10 00:13:54.113568 containerd[1430]: 2025-09-10 00:13:54.041 [INFO][4987] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="aeb3435b992ce3412be85016e50f6acc18819bc90799234f47f77ff6818d8c7a" HandleID="k8s-pod-network.aeb3435b992ce3412be85016e50f6acc18819bc90799234f47f77ff6818d8c7a" Workload="localhost-k8s-coredns--7c65d6cfc9--fprdx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000140e70), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-fprdx", "timestamp":"2025-09-10 00:13:54.04181009 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 00:13:54.113568 containerd[1430]: 2025-09-10 00:13:54.042 [INFO][4987] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:13:54.113568 containerd[1430]: 2025-09-10 00:13:54.042 [INFO][4987] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 00:13:54.113568 containerd[1430]: 2025-09-10 00:13:54.042 [INFO][4987] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 00:13:54.113568 containerd[1430]: 2025-09-10 00:13:54.055 [INFO][4987] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.aeb3435b992ce3412be85016e50f6acc18819bc90799234f47f77ff6818d8c7a" host="localhost" Sep 10 00:13:54.113568 containerd[1430]: 2025-09-10 00:13:54.064 [INFO][4987] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 00:13:54.113568 containerd[1430]: 2025-09-10 00:13:54.070 [INFO][4987] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 00:13:54.113568 containerd[1430]: 2025-09-10 00:13:54.073 [INFO][4987] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 00:13:54.113568 containerd[1430]: 2025-09-10 00:13:54.075 [INFO][4987] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 00:13:54.113568 containerd[1430]: 2025-09-10 00:13:54.075 [INFO][4987] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.aeb3435b992ce3412be85016e50f6acc18819bc90799234f47f77ff6818d8c7a" host="localhost" Sep 10 00:13:54.113568 containerd[1430]: 2025-09-10 00:13:54.078 [INFO][4987] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.aeb3435b992ce3412be85016e50f6acc18819bc90799234f47f77ff6818d8c7a Sep 10 00:13:54.113568 containerd[1430]: 2025-09-10 00:13:54.082 [INFO][4987] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.aeb3435b992ce3412be85016e50f6acc18819bc90799234f47f77ff6818d8c7a" host="localhost" Sep 10 00:13:54.113568 containerd[1430]: 2025-09-10 00:13:54.088 [INFO][4987] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.aeb3435b992ce3412be85016e50f6acc18819bc90799234f47f77ff6818d8c7a" host="localhost" Sep 10 00:13:54.113568 containerd[1430]: 2025-09-10 00:13:54.088 [INFO][4987] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.aeb3435b992ce3412be85016e50f6acc18819bc90799234f47f77ff6818d8c7a" host="localhost" Sep 10 00:13:54.113568 containerd[1430]: 2025-09-10 00:13:54.089 [INFO][4987] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:13:54.113568 containerd[1430]: 2025-09-10 00:13:54.089 [INFO][4987] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="aeb3435b992ce3412be85016e50f6acc18819bc90799234f47f77ff6818d8c7a" HandleID="k8s-pod-network.aeb3435b992ce3412be85016e50f6acc18819bc90799234f47f77ff6818d8c7a" Workload="localhost-k8s-coredns--7c65d6cfc9--fprdx-eth0" Sep 10 00:13:54.114137 containerd[1430]: 2025-09-10 00:13:54.092 [INFO][4955] cni-plugin/k8s.go 418: Populated endpoint ContainerID="aeb3435b992ce3412be85016e50f6acc18819bc90799234f47f77ff6818d8c7a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-fprdx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--fprdx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--fprdx-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"778ab3ef-74e3-4341-b35b-556c4e8acdd5", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 13, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-fprdx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4276c933052", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:13:54.114137 containerd[1430]: 2025-09-10 00:13:54.094 [INFO][4955] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="aeb3435b992ce3412be85016e50f6acc18819bc90799234f47f77ff6818d8c7a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-fprdx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--fprdx-eth0" Sep 10 00:13:54.114137 containerd[1430]: 2025-09-10 00:13:54.094 [INFO][4955] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4276c933052 ContainerID="aeb3435b992ce3412be85016e50f6acc18819bc90799234f47f77ff6818d8c7a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-fprdx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--fprdx-eth0" Sep 10 00:13:54.114137 containerd[1430]: 2025-09-10 00:13:54.099 [INFO][4955] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aeb3435b992ce3412be85016e50f6acc18819bc90799234f47f77ff6818d8c7a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-fprdx" 
WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--fprdx-eth0" Sep 10 00:13:54.114137 containerd[1430]: 2025-09-10 00:13:54.099 [INFO][4955] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="aeb3435b992ce3412be85016e50f6acc18819bc90799234f47f77ff6818d8c7a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-fprdx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--fprdx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--fprdx-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"778ab3ef-74e3-4341-b35b-556c4e8acdd5", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 13, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"aeb3435b992ce3412be85016e50f6acc18819bc90799234f47f77ff6818d8c7a", Pod:"coredns-7c65d6cfc9-fprdx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4276c933052", MAC:"52:b5:82:b8:5f:c1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:13:54.114137 containerd[1430]: 2025-09-10 00:13:54.111 [INFO][4955] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="aeb3435b992ce3412be85016e50f6acc18819bc90799234f47f77ff6818d8c7a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-fprdx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--fprdx-eth0" Sep 10 00:13:54.137379 containerd[1430]: time="2025-09-10T00:13:54.137298169Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:13:54.137379 containerd[1430]: time="2025-09-10T00:13:54.137344649Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:13:54.137379 containerd[1430]: time="2025-09-10T00:13:54.137356209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:13:54.137663 containerd[1430]: time="2025-09-10T00:13:54.137417650Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:13:54.159016 systemd[1]: Started cri-containerd-aeb3435b992ce3412be85016e50f6acc18819bc90799234f47f77ff6818d8c7a.scope - libcontainer container aeb3435b992ce3412be85016e50f6acc18819bc90799234f47f77ff6818d8c7a. 
Sep 10 00:13:54.166629 systemd-networkd[1370]: calia0d3215865b: Gained IPv6LL Sep 10 00:13:54.167088 systemd-networkd[1370]: cali808ac0ad1fd: Gained IPv6LL Sep 10 00:13:54.172598 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 00:13:54.195329 containerd[1430]: time="2025-09-10T00:13:54.195291510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-fprdx,Uid:778ab3ef-74e3-4341-b35b-556c4e8acdd5,Namespace:kube-system,Attempt:1,} returns sandbox id \"aeb3435b992ce3412be85016e50f6acc18819bc90799234f47f77ff6818d8c7a\"" Sep 10 00:13:54.196222 kubelet[2471]: E0910 00:13:54.196161 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:13:54.200694 containerd[1430]: time="2025-09-10T00:13:54.200539672Z" level=info msg="CreateContainer within sandbox \"aeb3435b992ce3412be85016e50f6acc18819bc90799234f47f77ff6818d8c7a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 10 00:13:54.218864 systemd-networkd[1370]: cali5b2a78a0105: Link UP Sep 10 00:13:54.219557 systemd-networkd[1370]: cali5b2a78a0105: Gained carrier Sep 10 00:13:54.234769 containerd[1430]: time="2025-09-10T00:13:54.234029738Z" level=info msg="CreateContainer within sandbox \"aeb3435b992ce3412be85016e50f6acc18819bc90799234f47f77ff6818d8c7a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ae61c4c649a2dc4f8be7fdbaf343cf8842d8983e918434afba0f1ef7186896c4\"" Sep 10 00:13:54.236600 containerd[1430]: time="2025-09-10T00:13:54.235956433Z" level=info msg="StartContainer for \"ae61c4c649a2dc4f8be7fdbaf343cf8842d8983e918434afba0f1ef7186896c4\"" Sep 10 00:13:54.241891 containerd[1430]: 2025-09-10 00:13:54.000 [INFO][4967] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 10 00:13:54.241891 containerd[1430]: 2025-09-10 00:13:54.021 
[INFO][4967] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6bc6768489--gxbgn-eth0 calico-apiserver-6bc6768489- calico-apiserver 34aa417d-a639-4656-862d-aac7f831a9b9 1010 0 2025-09-10 00:13:28 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6bc6768489 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6bc6768489-gxbgn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5b2a78a0105 [] [] }} ContainerID="d524110e64c55aa32146be1fcf958ecf06169f0170e7e32e1837750e2164a58e" Namespace="calico-apiserver" Pod="calico-apiserver-6bc6768489-gxbgn" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bc6768489--gxbgn-" Sep 10 00:13:54.241891 containerd[1430]: 2025-09-10 00:13:54.021 [INFO][4967] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d524110e64c55aa32146be1fcf958ecf06169f0170e7e32e1837750e2164a58e" Namespace="calico-apiserver" Pod="calico-apiserver-6bc6768489-gxbgn" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bc6768489--gxbgn-eth0" Sep 10 00:13:54.241891 containerd[1430]: 2025-09-10 00:13:54.055 [INFO][4995] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d524110e64c55aa32146be1fcf958ecf06169f0170e7e32e1837750e2164a58e" HandleID="k8s-pod-network.d524110e64c55aa32146be1fcf958ecf06169f0170e7e32e1837750e2164a58e" Workload="localhost-k8s-calico--apiserver--6bc6768489--gxbgn-eth0" Sep 10 00:13:54.241891 containerd[1430]: 2025-09-10 00:13:54.055 [INFO][4995] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d524110e64c55aa32146be1fcf958ecf06169f0170e7e32e1837750e2164a58e" HandleID="k8s-pod-network.d524110e64c55aa32146be1fcf958ecf06169f0170e7e32e1837750e2164a58e" 
Workload="localhost-k8s-calico--apiserver--6bc6768489--gxbgn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c2fe0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6bc6768489-gxbgn", "timestamp":"2025-09-10 00:13:54.055534919 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 00:13:54.241891 containerd[1430]: 2025-09-10 00:13:54.055 [INFO][4995] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:13:54.241891 containerd[1430]: 2025-09-10 00:13:54.089 [INFO][4995] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:13:54.241891 containerd[1430]: 2025-09-10 00:13:54.089 [INFO][4995] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 00:13:54.241891 containerd[1430]: 2025-09-10 00:13:54.156 [INFO][4995] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d524110e64c55aa32146be1fcf958ecf06169f0170e7e32e1837750e2164a58e" host="localhost" Sep 10 00:13:54.241891 containerd[1430]: 2025-09-10 00:13:54.165 [INFO][4995] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 00:13:54.241891 containerd[1430]: 2025-09-10 00:13:54.176 [INFO][4995] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 00:13:54.241891 containerd[1430]: 2025-09-10 00:13:54.178 [INFO][4995] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 00:13:54.241891 containerd[1430]: 2025-09-10 00:13:54.182 [INFO][4995] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 00:13:54.241891 containerd[1430]: 2025-09-10 00:13:54.182 [INFO][4995] ipam/ipam.go 1220: Attempting to assign 1 addresses from 
block block=192.168.88.128/26 handle="k8s-pod-network.d524110e64c55aa32146be1fcf958ecf06169f0170e7e32e1837750e2164a58e" host="localhost" Sep 10 00:13:54.241891 containerd[1430]: 2025-09-10 00:13:54.187 [INFO][4995] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d524110e64c55aa32146be1fcf958ecf06169f0170e7e32e1837750e2164a58e Sep 10 00:13:54.241891 containerd[1430]: 2025-09-10 00:13:54.192 [INFO][4995] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d524110e64c55aa32146be1fcf958ecf06169f0170e7e32e1837750e2164a58e" host="localhost" Sep 10 00:13:54.241891 containerd[1430]: 2025-09-10 00:13:54.206 [INFO][4995] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.d524110e64c55aa32146be1fcf958ecf06169f0170e7e32e1837750e2164a58e" host="localhost" Sep 10 00:13:54.241891 containerd[1430]: 2025-09-10 00:13:54.206 [INFO][4995] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.d524110e64c55aa32146be1fcf958ecf06169f0170e7e32e1837750e2164a58e" host="localhost" Sep 10 00:13:54.241891 containerd[1430]: 2025-09-10 00:13:54.206 [INFO][4995] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 10 00:13:54.241891 containerd[1430]: 2025-09-10 00:13:54.206 [INFO][4995] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="d524110e64c55aa32146be1fcf958ecf06169f0170e7e32e1837750e2164a58e" HandleID="k8s-pod-network.d524110e64c55aa32146be1fcf958ecf06169f0170e7e32e1837750e2164a58e" Workload="localhost-k8s-calico--apiserver--6bc6768489--gxbgn-eth0" Sep 10 00:13:54.242403 containerd[1430]: 2025-09-10 00:13:54.211 [INFO][4967] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d524110e64c55aa32146be1fcf958ecf06169f0170e7e32e1837750e2164a58e" Namespace="calico-apiserver" Pod="calico-apiserver-6bc6768489-gxbgn" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bc6768489--gxbgn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6bc6768489--gxbgn-eth0", GenerateName:"calico-apiserver-6bc6768489-", Namespace:"calico-apiserver", SelfLink:"", UID:"34aa417d-a639-4656-862d-aac7f831a9b9", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 13, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bc6768489", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6bc6768489-gxbgn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5b2a78a0105", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:13:54.242403 containerd[1430]: 2025-09-10 00:13:54.213 [INFO][4967] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="d524110e64c55aa32146be1fcf958ecf06169f0170e7e32e1837750e2164a58e" Namespace="calico-apiserver" Pod="calico-apiserver-6bc6768489-gxbgn" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bc6768489--gxbgn-eth0" Sep 10 00:13:54.242403 containerd[1430]: 2025-09-10 00:13:54.213 [INFO][4967] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5b2a78a0105 ContainerID="d524110e64c55aa32146be1fcf958ecf06169f0170e7e32e1837750e2164a58e" Namespace="calico-apiserver" Pod="calico-apiserver-6bc6768489-gxbgn" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bc6768489--gxbgn-eth0" Sep 10 00:13:54.242403 containerd[1430]: 2025-09-10 00:13:54.219 [INFO][4967] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d524110e64c55aa32146be1fcf958ecf06169f0170e7e32e1837750e2164a58e" Namespace="calico-apiserver" Pod="calico-apiserver-6bc6768489-gxbgn" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bc6768489--gxbgn-eth0" Sep 10 00:13:54.242403 containerd[1430]: 2025-09-10 00:13:54.220 [INFO][4967] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d524110e64c55aa32146be1fcf958ecf06169f0170e7e32e1837750e2164a58e" Namespace="calico-apiserver" Pod="calico-apiserver-6bc6768489-gxbgn" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bc6768489--gxbgn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6bc6768489--gxbgn-eth0", 
GenerateName:"calico-apiserver-6bc6768489-", Namespace:"calico-apiserver", SelfLink:"", UID:"34aa417d-a639-4656-862d-aac7f831a9b9", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 13, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bc6768489", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d524110e64c55aa32146be1fcf958ecf06169f0170e7e32e1837750e2164a58e", Pod:"calico-apiserver-6bc6768489-gxbgn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5b2a78a0105", MAC:"5a:a7:f8:b8:68:20", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:13:54.242403 containerd[1430]: 2025-09-10 00:13:54.237 [INFO][4967] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d524110e64c55aa32146be1fcf958ecf06169f0170e7e32e1837750e2164a58e" Namespace="calico-apiserver" Pod="calico-apiserver-6bc6768489-gxbgn" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bc6768489--gxbgn-eth0" Sep 10 00:13:54.261540 containerd[1430]: time="2025-09-10T00:13:54.261438876Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:13:54.261540 containerd[1430]: time="2025-09-10T00:13:54.261539076Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:13:54.261540 containerd[1430]: time="2025-09-10T00:13:54.261557557Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:13:54.261738 containerd[1430]: time="2025-09-10T00:13:54.261632077Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:13:54.268647 systemd[1]: Started cri-containerd-ae61c4c649a2dc4f8be7fdbaf343cf8842d8983e918434afba0f1ef7186896c4.scope - libcontainer container ae61c4c649a2dc4f8be7fdbaf343cf8842d8983e918434afba0f1ef7186896c4. Sep 10 00:13:54.280095 systemd[1]: Started cri-containerd-d524110e64c55aa32146be1fcf958ecf06169f0170e7e32e1837750e2164a58e.scope - libcontainer container d524110e64c55aa32146be1fcf958ecf06169f0170e7e32e1837750e2164a58e. 
Sep 10 00:13:54.296777 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 00:13:54.301071 containerd[1430]: time="2025-09-10T00:13:54.301036190Z" level=info msg="StartContainer for \"ae61c4c649a2dc4f8be7fdbaf343cf8842d8983e918434afba0f1ef7186896c4\" returns successfully" Sep 10 00:13:54.332345 containerd[1430]: time="2025-09-10T00:13:54.332274999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bc6768489-gxbgn,Uid:34aa417d-a639-4656-862d-aac7f831a9b9,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"d524110e64c55aa32146be1fcf958ecf06169f0170e7e32e1837750e2164a58e\"" Sep 10 00:13:54.337095 containerd[1430]: time="2025-09-10T00:13:54.336673474Z" level=info msg="CreateContainer within sandbox \"d524110e64c55aa32146be1fcf958ecf06169f0170e7e32e1837750e2164a58e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 10 00:13:54.358491 containerd[1430]: time="2025-09-10T00:13:54.356296550Z" level=info msg="CreateContainer within sandbox \"d524110e64c55aa32146be1fcf958ecf06169f0170e7e32e1837750e2164a58e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ca9995734352bbffcc3eb9ca143932a990364c3c85489f45a95fa8b8de2d012b\"" Sep 10 00:13:54.362540 containerd[1430]: time="2025-09-10T00:13:54.362372038Z" level=info msg="StartContainer for \"ca9995734352bbffcc3eb9ca143932a990364c3c85489f45a95fa8b8de2d012b\"" Sep 10 00:13:54.394669 systemd[1]: Started cri-containerd-ca9995734352bbffcc3eb9ca143932a990364c3c85489f45a95fa8b8de2d012b.scope - libcontainer container ca9995734352bbffcc3eb9ca143932a990364c3c85489f45a95fa8b8de2d012b. 
Sep 10 00:13:54.459255 containerd[1430]: time="2025-09-10T00:13:54.457442673Z" level=info msg="StartContainer for \"ca9995734352bbffcc3eb9ca143932a990364c3c85489f45a95fa8b8de2d012b\" returns successfully" Sep 10 00:13:55.011588 kubelet[2471]: E0910 00:13:55.011188 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:13:55.020215 kubelet[2471]: I0910 00:13:55.020184 2471 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 10 00:13:55.020881 kubelet[2471]: E0910 00:13:55.020856 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:13:55.136794 kubelet[2471]: I0910 00:13:55.136740 2471 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-fprdx" podStartSLOduration=35.136720324 podStartE2EDuration="35.136720324s" podCreationTimestamp="2025-09-10 00:13:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:13:55.052337231 +0000 UTC m=+40.309159964" watchObservedRunningTime="2025-09-10 00:13:55.136720324 +0000 UTC m=+40.393543057" Sep 10 00:13:55.165734 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3624526397.mount: Deactivated successfully. 
Sep 10 00:13:55.171497 kubelet[2471]: I0910 00:13:55.171440 2471 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6bc6768489-gxbgn" podStartSLOduration=27.171421352 podStartE2EDuration="27.171421352s" podCreationTimestamp="2025-09-10 00:13:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:13:55.13748405 +0000 UTC m=+40.394306783" watchObservedRunningTime="2025-09-10 00:13:55.171421352 +0000 UTC m=+40.428244045" Sep 10 00:13:55.632886 containerd[1430]: time="2025-09-10T00:13:55.632805322Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:13:55.634214 containerd[1430]: time="2025-09-10T00:13:55.633659249Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332" Sep 10 00:13:55.634613 containerd[1430]: time="2025-09-10T00:13:55.634581336Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:13:55.638907 containerd[1430]: time="2025-09-10T00:13:55.638874209Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:13:55.640690 containerd[1430]: time="2025-09-10T00:13:55.640191179Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 2.380874595s" Sep 10 
00:13:55.640690 containerd[1430]: time="2025-09-10T00:13:55.640226700Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\"" Sep 10 00:13:55.642695 containerd[1430]: time="2025-09-10T00:13:55.642668959Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 10 00:13:55.643530 containerd[1430]: time="2025-09-10T00:13:55.643495325Z" level=info msg="CreateContainer within sandbox \"e874f8a086ccd5e2aca1101fb74180dd7b1e6345e182eb750c32bcdfc7a47a4d\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 10 00:13:55.657366 containerd[1430]: time="2025-09-10T00:13:55.657201631Z" level=info msg="CreateContainer within sandbox \"e874f8a086ccd5e2aca1101fb74180dd7b1e6345e182eb750c32bcdfc7a47a4d\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"8819ba53af93d7e2d814b2b3cc7378a2ac12a35bd020df5f98b77cec8ee35bb4\"" Sep 10 00:13:55.659336 containerd[1430]: time="2025-09-10T00:13:55.658974725Z" level=info msg="StartContainer for \"8819ba53af93d7e2d814b2b3cc7378a2ac12a35bd020df5f98b77cec8ee35bb4\"" Sep 10 00:13:55.687863 systemd[1]: Started cri-containerd-8819ba53af93d7e2d814b2b3cc7378a2ac12a35bd020df5f98b77cec8ee35bb4.scope - libcontainer container 8819ba53af93d7e2d814b2b3cc7378a2ac12a35bd020df5f98b77cec8ee35bb4. 
Sep 10 00:13:55.736634 containerd[1430]: time="2025-09-10T00:13:55.736595565Z" level=info msg="StartContainer for \"8819ba53af93d7e2d814b2b3cc7378a2ac12a35bd020df5f98b77cec8ee35bb4\" returns successfully" Sep 10 00:13:55.830638 systemd-networkd[1370]: cali5b2a78a0105: Gained IPv6LL Sep 10 00:13:56.022628 systemd-networkd[1370]: cali4276c933052: Gained IPv6LL Sep 10 00:13:56.024173 kubelet[2471]: E0910 00:13:56.024055 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:13:56.024655 kubelet[2471]: I0910 00:13:56.024515 2471 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 10 00:13:56.035093 kubelet[2471]: I0910 00:13:56.035040 2471 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-9zjzp" podStartSLOduration=21.646780222 podStartE2EDuration="25.035022828s" podCreationTimestamp="2025-09-10 00:13:31 +0000 UTC" firstStartedPulling="2025-09-10 00:13:52.252905061 +0000 UTC m=+37.509727794" lastFinishedPulling="2025-09-10 00:13:55.641147667 +0000 UTC m=+40.897970400" observedRunningTime="2025-09-10 00:13:56.034710066 +0000 UTC m=+41.291532839" watchObservedRunningTime="2025-09-10 00:13:56.035022828 +0000 UTC m=+41.291845561" Sep 10 00:13:56.151231 kubelet[2471]: I0910 00:13:56.151183 2471 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 10 00:13:56.151616 kubelet[2471]: E0910 00:13:56.151561 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:13:56.478402 systemd[1]: Started sshd@7-10.0.0.106:22-10.0.0.1:54496.service - OpenSSH per-connection server daemon (10.0.0.1:54496). 
Sep 10 00:13:56.548540 sshd[5288]: Accepted publickey for core from 10.0.0.1 port 54496 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o Sep 10 00:13:56.550160 sshd[5288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:13:56.555697 systemd-logind[1417]: New session 8 of user core. Sep 10 00:13:56.565669 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 10 00:13:56.856673 sshd[5288]: pam_unix(sshd:session): session closed for user core Sep 10 00:13:56.860633 systemd[1]: sshd@7-10.0.0.106:22-10.0.0.1:54496.service: Deactivated successfully. Sep 10 00:13:56.862244 systemd[1]: session-8.scope: Deactivated successfully. Sep 10 00:13:56.863651 systemd-logind[1417]: Session 8 logged out. Waiting for processes to exit. Sep 10 00:13:56.864962 systemd-logind[1417]: Removed session 8. Sep 10 00:13:57.005570 kernel: bpftool[5364]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 10 00:13:57.025236 kubelet[2471]: I0910 00:13:57.025163 2471 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 10 00:13:57.025901 kubelet[2471]: E0910 00:13:57.025423 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:13:57.025901 kubelet[2471]: E0910 00:13:57.025833 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:13:57.227323 systemd-networkd[1370]: vxlan.calico: Link UP Sep 10 00:13:57.227330 systemd-networkd[1370]: vxlan.calico: Gained carrier Sep 10 00:13:58.838951 systemd-networkd[1370]: vxlan.calico: Gained IPv6LL Sep 10 00:13:59.185401 containerd[1430]: time="2025-09-10T00:13:59.185115662Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:13:59.186386 containerd[1430]: time="2025-09-10T00:13:59.186352310Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957" Sep 10 00:13:59.187400 containerd[1430]: time="2025-09-10T00:13:59.187367478Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:13:59.189461 containerd[1430]: time="2025-09-10T00:13:59.189425172Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:13:59.190105 containerd[1430]: time="2025-09-10T00:13:59.190074537Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 3.547377418s" Sep 10 00:13:59.190144 containerd[1430]: time="2025-09-10T00:13:59.190109577Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\"" Sep 10 00:13:59.191712 containerd[1430]: time="2025-09-10T00:13:59.191659908Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 10 00:13:59.200388 containerd[1430]: time="2025-09-10T00:13:59.200357249Z" level=info msg="CreateContainer within sandbox \"0a27bced0f808e0bec550c849e07387a6f59054710e9802eb2a8367c7616872e\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 10 00:13:59.213161 
containerd[1430]: time="2025-09-10T00:13:59.213123138Z" level=info msg="CreateContainer within sandbox \"0a27bced0f808e0bec550c849e07387a6f59054710e9802eb2a8367c7616872e\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"8650bc5878d3ea43aa44ea0285680a8e8887440430dc0d565bc0fdee60aefd69\"" Sep 10 00:13:59.213696 containerd[1430]: time="2025-09-10T00:13:59.213658902Z" level=info msg="StartContainer for \"8650bc5878d3ea43aa44ea0285680a8e8887440430dc0d565bc0fdee60aefd69\"" Sep 10 00:13:59.247695 systemd[1]: Started cri-containerd-8650bc5878d3ea43aa44ea0285680a8e8887440430dc0d565bc0fdee60aefd69.scope - libcontainer container 8650bc5878d3ea43aa44ea0285680a8e8887440430dc0d565bc0fdee60aefd69. Sep 10 00:13:59.355228 containerd[1430]: time="2025-09-10T00:13:59.355180216Z" level=info msg="StartContainer for \"8650bc5878d3ea43aa44ea0285680a8e8887440430dc0d565bc0fdee60aefd69\" returns successfully" Sep 10 00:14:00.058369 kubelet[2471]: I0910 00:14:00.058277 2471 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7f764b5b64-dd89z" podStartSLOduration=21.242350567 podStartE2EDuration="28.057832619s" podCreationTimestamp="2025-09-10 00:13:32 +0000 UTC" firstStartedPulling="2025-09-10 00:13:52.375447291 +0000 UTC m=+37.632270024" lastFinishedPulling="2025-09-10 00:13:59.190929343 +0000 UTC m=+44.447752076" observedRunningTime="2025-09-10 00:14:00.057806179 +0000 UTC m=+45.314628912" watchObservedRunningTime="2025-09-10 00:14:00.057832619 +0000 UTC m=+45.314655352" Sep 10 00:14:00.343545 containerd[1430]: time="2025-09-10T00:14:00.343320100Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:14:00.344113 containerd[1430]: time="2025-09-10T00:14:00.344056545Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489" Sep 10 00:14:00.344879 
containerd[1430]: time="2025-09-10T00:14:00.344851190Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:14:00.347193 containerd[1430]: time="2025-09-10T00:14:00.347071286Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:14:00.347768 containerd[1430]: time="2025-09-10T00:14:00.347737610Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 1.156045142s" Sep 10 00:14:00.347817 containerd[1430]: time="2025-09-10T00:14:00.347776530Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\"" Sep 10 00:14:00.349885 containerd[1430]: time="2025-09-10T00:14:00.349856745Z" level=info msg="CreateContainer within sandbox \"4434cbe969e6287ab844dffe2ac371ff4460c05bd3d9a16caccb0b53e084b8c7\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 10 00:14:00.367411 containerd[1430]: time="2025-09-10T00:14:00.367364905Z" level=info msg="CreateContainer within sandbox \"4434cbe969e6287ab844dffe2ac371ff4460c05bd3d9a16caccb0b53e084b8c7\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"09dd521cac886b6087509ab0acf353c0615c998b20a08fbbbb09198fb2ff686d\"" Sep 10 00:14:00.368074 containerd[1430]: time="2025-09-10T00:14:00.368047190Z" level=info msg="StartContainer for \"09dd521cac886b6087509ab0acf353c0615c998b20a08fbbbb09198fb2ff686d\"" Sep 10 00:14:00.399677 
systemd[1]: Started cri-containerd-09dd521cac886b6087509ab0acf353c0615c998b20a08fbbbb09198fb2ff686d.scope - libcontainer container 09dd521cac886b6087509ab0acf353c0615c998b20a08fbbbb09198fb2ff686d. Sep 10 00:14:00.426679 containerd[1430]: time="2025-09-10T00:14:00.425922427Z" level=info msg="StartContainer for \"09dd521cac886b6087509ab0acf353c0615c998b20a08fbbbb09198fb2ff686d\" returns successfully" Sep 10 00:14:00.427903 containerd[1430]: time="2025-09-10T00:14:00.427871040Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 10 00:14:01.051557 kubelet[2471]: I0910 00:14:01.051527 2471 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 10 00:14:01.805455 containerd[1430]: time="2025-09-10T00:14:01.805404105Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:14:01.806565 containerd[1430]: time="2025-09-10T00:14:01.806522393Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208" Sep 10 00:14:01.808530 containerd[1430]: time="2025-09-10T00:14:01.807535360Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:14:01.809559 containerd[1430]: time="2025-09-10T00:14:01.809527133Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:14:01.810373 containerd[1430]: time="2025-09-10T00:14:01.810333099Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 1.382425298s" Sep 10 00:14:01.810429 containerd[1430]: time="2025-09-10T00:14:01.810371739Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\"" Sep 10 00:14:01.812324 containerd[1430]: time="2025-09-10T00:14:01.812200071Z" level=info msg="CreateContainer within sandbox \"4434cbe969e6287ab844dffe2ac371ff4460c05bd3d9a16caccb0b53e084b8c7\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 10 00:14:01.847536 containerd[1430]: time="2025-09-10T00:14:01.847376468Z" level=info msg="CreateContainer within sandbox \"4434cbe969e6287ab844dffe2ac371ff4460c05bd3d9a16caccb0b53e084b8c7\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"3c1ff3c9a8ddfff8baa7083e8ff57af1c6f98cd7f09cb4c974b6fb106a9ef66c\"" Sep 10 00:14:01.848917 containerd[1430]: time="2025-09-10T00:14:01.848519315Z" level=info msg="StartContainer for \"3c1ff3c9a8ddfff8baa7083e8ff57af1c6f98cd7f09cb4c974b6fb106a9ef66c\"" Sep 10 00:14:01.890684 systemd[1]: Started cri-containerd-3c1ff3c9a8ddfff8baa7083e8ff57af1c6f98cd7f09cb4c974b6fb106a9ef66c.scope - libcontainer container 3c1ff3c9a8ddfff8baa7083e8ff57af1c6f98cd7f09cb4c974b6fb106a9ef66c. Sep 10 00:14:01.892802 systemd[1]: Started sshd@8-10.0.0.106:22-10.0.0.1:50688.service - OpenSSH per-connection server daemon (10.0.0.1:50688). 
Sep 10 00:14:01.952390 containerd[1430]: time="2025-09-10T00:14:01.952335333Z" level=info msg="StartContainer for \"3c1ff3c9a8ddfff8baa7083e8ff57af1c6f98cd7f09cb4c974b6fb106a9ef66c\" returns successfully" Sep 10 00:14:01.986098 sshd[5570]: Accepted publickey for core from 10.0.0.1 port 50688 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o Sep 10 00:14:01.987904 sshd[5570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:14:01.993662 systemd-logind[1417]: New session 9 of user core. Sep 10 00:14:01.999632 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 10 00:14:02.068794 kubelet[2471]: I0910 00:14:02.068641 2471 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-6btnb" podStartSLOduration=21.512049508 podStartE2EDuration="30.068613906s" podCreationTimestamp="2025-09-10 00:13:32 +0000 UTC" firstStartedPulling="2025-09-10 00:13:53.254447465 +0000 UTC m=+38.511270198" lastFinishedPulling="2025-09-10 00:14:01.811011863 +0000 UTC m=+47.067834596" observedRunningTime="2025-09-10 00:14:02.066789694 +0000 UTC m=+47.323612467" watchObservedRunningTime="2025-09-10 00:14:02.068613906 +0000 UTC m=+47.325436639" Sep 10 00:14:02.279782 sshd[5570]: pam_unix(sshd:session): session closed for user core Sep 10 00:14:02.283226 systemd[1]: sshd@8-10.0.0.106:22-10.0.0.1:50688.service: Deactivated successfully. Sep 10 00:14:02.284927 systemd[1]: session-9.scope: Deactivated successfully. Sep 10 00:14:02.285518 systemd-logind[1417]: Session 9 logged out. Waiting for processes to exit. Sep 10 00:14:02.286416 systemd-logind[1417]: Removed session 9. 
Sep 10 00:14:02.912493 kubelet[2471]: I0910 00:14:02.912440 2471 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 10 00:14:02.914457 kubelet[2471]: I0910 00:14:02.914424 2471 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 10 00:14:05.529428 kubelet[2471]: I0910 00:14:05.529371 2471 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 10 00:14:05.549489 systemd[1]: run-containerd-runc-k8s.io-8819ba53af93d7e2d814b2b3cc7378a2ac12a35bd020df5f98b77cec8ee35bb4-runc.RozUuM.mount: Deactivated successfully. Sep 10 00:14:07.290268 systemd[1]: Started sshd@9-10.0.0.106:22-10.0.0.1:50700.service - OpenSSH per-connection server daemon (10.0.0.1:50700). Sep 10 00:14:07.334986 sshd[5679]: Accepted publickey for core from 10.0.0.1 port 50700 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o Sep 10 00:14:07.336292 sshd[5679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:14:07.339968 systemd-logind[1417]: New session 10 of user core. Sep 10 00:14:07.349684 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 10 00:14:07.557600 sshd[5679]: pam_unix(sshd:session): session closed for user core Sep 10 00:14:07.566145 systemd[1]: sshd@9-10.0.0.106:22-10.0.0.1:50700.service: Deactivated successfully. Sep 10 00:14:07.568915 systemd[1]: session-10.scope: Deactivated successfully. Sep 10 00:14:07.570208 systemd-logind[1417]: Session 10 logged out. Waiting for processes to exit. Sep 10 00:14:07.578825 systemd[1]: Started sshd@10-10.0.0.106:22-10.0.0.1:50708.service - OpenSSH per-connection server daemon (10.0.0.1:50708). Sep 10 00:14:07.579894 systemd-logind[1417]: Removed session 10. 
Sep 10 00:14:07.612839 sshd[5703]: Accepted publickey for core from 10.0.0.1 port 50708 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o Sep 10 00:14:07.614944 sshd[5703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:14:07.619986 systemd-logind[1417]: New session 11 of user core. Sep 10 00:14:07.629703 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 10 00:14:07.847173 sshd[5703]: pam_unix(sshd:session): session closed for user core Sep 10 00:14:07.858854 systemd[1]: sshd@10-10.0.0.106:22-10.0.0.1:50708.service: Deactivated successfully. Sep 10 00:14:07.861315 systemd[1]: session-11.scope: Deactivated successfully. Sep 10 00:14:07.866353 systemd-logind[1417]: Session 11 logged out. Waiting for processes to exit. Sep 10 00:14:07.879880 systemd[1]: Started sshd@11-10.0.0.106:22-10.0.0.1:50722.service - OpenSSH per-connection server daemon (10.0.0.1:50722). Sep 10 00:14:07.880737 systemd-logind[1417]: Removed session 11. Sep 10 00:14:07.916899 sshd[5720]: Accepted publickey for core from 10.0.0.1 port 50722 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o Sep 10 00:14:07.919050 sshd[5720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:14:07.925603 systemd-logind[1417]: New session 12 of user core. Sep 10 00:14:07.934718 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 10 00:14:08.100286 sshd[5720]: pam_unix(sshd:session): session closed for user core Sep 10 00:14:08.104029 systemd[1]: sshd@11-10.0.0.106:22-10.0.0.1:50722.service: Deactivated successfully. Sep 10 00:14:08.105923 systemd[1]: session-12.scope: Deactivated successfully. Sep 10 00:14:08.106585 systemd-logind[1417]: Session 12 logged out. Waiting for processes to exit. Sep 10 00:14:08.107542 systemd-logind[1417]: Removed session 12. 
Sep 10 00:14:09.176457 kubelet[2471]: I0910 00:14:09.176338 2471 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 10 00:14:13.114235 systemd[1]: Started sshd@12-10.0.0.106:22-10.0.0.1:41136.service - OpenSSH per-connection server daemon (10.0.0.1:41136). Sep 10 00:14:13.156463 sshd[5775]: Accepted publickey for core from 10.0.0.1 port 41136 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o Sep 10 00:14:13.157678 sshd[5775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:14:13.161539 systemd-logind[1417]: New session 13 of user core. Sep 10 00:14:13.168654 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 10 00:14:13.314904 sshd[5775]: pam_unix(sshd:session): session closed for user core Sep 10 00:14:13.330263 systemd[1]: sshd@12-10.0.0.106:22-10.0.0.1:41136.service: Deactivated successfully. Sep 10 00:14:13.331929 systemd[1]: session-13.scope: Deactivated successfully. Sep 10 00:14:13.333679 systemd-logind[1417]: Session 13 logged out. Waiting for processes to exit. Sep 10 00:14:13.340769 systemd[1]: Started sshd@13-10.0.0.106:22-10.0.0.1:41146.service - OpenSSH per-connection server daemon (10.0.0.1:41146). Sep 10 00:14:13.341860 systemd-logind[1417]: Removed session 13. Sep 10 00:14:13.375305 sshd[5790]: Accepted publickey for core from 10.0.0.1 port 41146 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o Sep 10 00:14:13.376438 sshd[5790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:14:13.380142 systemd-logind[1417]: New session 14 of user core. Sep 10 00:14:13.387630 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 10 00:14:13.581088 sshd[5790]: pam_unix(sshd:session): session closed for user core Sep 10 00:14:13.590080 systemd[1]: sshd@13-10.0.0.106:22-10.0.0.1:41146.service: Deactivated successfully. Sep 10 00:14:13.591785 systemd[1]: session-14.scope: Deactivated successfully. 
Sep 10 00:14:13.592959 systemd-logind[1417]: Session 14 logged out. Waiting for processes to exit. Sep 10 00:14:13.602328 systemd[1]: Started sshd@14-10.0.0.106:22-10.0.0.1:41162.service - OpenSSH per-connection server daemon (10.0.0.1:41162). Sep 10 00:14:13.603582 systemd-logind[1417]: Removed session 14. Sep 10 00:14:13.638432 sshd[5802]: Accepted publickey for core from 10.0.0.1 port 41162 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o Sep 10 00:14:13.639638 sshd[5802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:14:13.643060 systemd-logind[1417]: New session 15 of user core. Sep 10 00:14:13.652646 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 10 00:14:14.831220 containerd[1430]: time="2025-09-10T00:14:14.830998660Z" level=info msg="StopPodSandbox for \"89e42bdb4ebafba989d2b80772fba6857a23b7ca53958659e5e44d05fdab7658\"" Sep 10 00:14:14.954886 containerd[1430]: 2025-09-10 00:14:14.909 [WARNING][5829] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="89e42bdb4ebafba989d2b80772fba6857a23b7ca53958659e5e44d05fdab7658" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6bc6768489--w628d-eth0", GenerateName:"calico-apiserver-6bc6768489-", Namespace:"calico-apiserver", SelfLink:"", UID:"279a2bd1-e5f8-4ed5-bbcf-24ce06e302a6", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 13, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bc6768489", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cdd9848092fa05e43c969e6cffae700f727f78c089e3d39f4287d0f93fdad33a", Pod:"calico-apiserver-6bc6768489-w628d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid50230b23d2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:14:14.954886 containerd[1430]: 2025-09-10 00:14:14.914 [INFO][5829] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="89e42bdb4ebafba989d2b80772fba6857a23b7ca53958659e5e44d05fdab7658" Sep 10 00:14:14.954886 containerd[1430]: 2025-09-10 00:14:14.914 [INFO][5829] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="89e42bdb4ebafba989d2b80772fba6857a23b7ca53958659e5e44d05fdab7658" iface="eth0" netns="" Sep 10 00:14:14.954886 containerd[1430]: 2025-09-10 00:14:14.914 [INFO][5829] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="89e42bdb4ebafba989d2b80772fba6857a23b7ca53958659e5e44d05fdab7658" Sep 10 00:14:14.954886 containerd[1430]: 2025-09-10 00:14:14.914 [INFO][5829] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="89e42bdb4ebafba989d2b80772fba6857a23b7ca53958659e5e44d05fdab7658" Sep 10 00:14:14.954886 containerd[1430]: 2025-09-10 00:14:14.937 [INFO][5838] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="89e42bdb4ebafba989d2b80772fba6857a23b7ca53958659e5e44d05fdab7658" HandleID="k8s-pod-network.89e42bdb4ebafba989d2b80772fba6857a23b7ca53958659e5e44d05fdab7658" Workload="localhost-k8s-calico--apiserver--6bc6768489--w628d-eth0" Sep 10 00:14:14.954886 containerd[1430]: 2025-09-10 00:14:14.938 [INFO][5838] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:14:14.954886 containerd[1430]: 2025-09-10 00:14:14.938 [INFO][5838] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:14:14.954886 containerd[1430]: 2025-09-10 00:14:14.947 [WARNING][5838] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="89e42bdb4ebafba989d2b80772fba6857a23b7ca53958659e5e44d05fdab7658" HandleID="k8s-pod-network.89e42bdb4ebafba989d2b80772fba6857a23b7ca53958659e5e44d05fdab7658" Workload="localhost-k8s-calico--apiserver--6bc6768489--w628d-eth0" Sep 10 00:14:14.954886 containerd[1430]: 2025-09-10 00:14:14.947 [INFO][5838] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="89e42bdb4ebafba989d2b80772fba6857a23b7ca53958659e5e44d05fdab7658" HandleID="k8s-pod-network.89e42bdb4ebafba989d2b80772fba6857a23b7ca53958659e5e44d05fdab7658" Workload="localhost-k8s-calico--apiserver--6bc6768489--w628d-eth0" Sep 10 00:14:14.954886 containerd[1430]: 2025-09-10 00:14:14.950 [INFO][5838] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:14:14.954886 containerd[1430]: 2025-09-10 00:14:14.953 [INFO][5829] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="89e42bdb4ebafba989d2b80772fba6857a23b7ca53958659e5e44d05fdab7658" Sep 10 00:14:14.955618 containerd[1430]: time="2025-09-10T00:14:14.955578306Z" level=info msg="TearDown network for sandbox \"89e42bdb4ebafba989d2b80772fba6857a23b7ca53958659e5e44d05fdab7658\" successfully" Sep 10 00:14:14.955703 containerd[1430]: time="2025-09-10T00:14:14.955617707Z" level=info msg="StopPodSandbox for \"89e42bdb4ebafba989d2b80772fba6857a23b7ca53958659e5e44d05fdab7658\" returns successfully" Sep 10 00:14:14.956222 containerd[1430]: time="2025-09-10T00:14:14.956192510Z" level=info msg="RemovePodSandbox for \"89e42bdb4ebafba989d2b80772fba6857a23b7ca53958659e5e44d05fdab7658\"" Sep 10 00:14:14.962984 containerd[1430]: time="2025-09-10T00:14:14.962935267Z" level=info msg="Forcibly stopping sandbox \"89e42bdb4ebafba989d2b80772fba6857a23b7ca53958659e5e44d05fdab7658\"" Sep 10 00:14:15.047917 containerd[1430]: 2025-09-10 00:14:15.005 [WARNING][5855] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="89e42bdb4ebafba989d2b80772fba6857a23b7ca53958659e5e44d05fdab7658" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6bc6768489--w628d-eth0", GenerateName:"calico-apiserver-6bc6768489-", Namespace:"calico-apiserver", SelfLink:"", UID:"279a2bd1-e5f8-4ed5-bbcf-24ce06e302a6", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 13, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bc6768489", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cdd9848092fa05e43c969e6cffae700f727f78c089e3d39f4287d0f93fdad33a", Pod:"calico-apiserver-6bc6768489-w628d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid50230b23d2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:14:15.047917 containerd[1430]: 2025-09-10 00:14:15.006 [INFO][5855] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="89e42bdb4ebafba989d2b80772fba6857a23b7ca53958659e5e44d05fdab7658" Sep 10 00:14:15.047917 containerd[1430]: 2025-09-10 00:14:15.006 [INFO][5855] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="89e42bdb4ebafba989d2b80772fba6857a23b7ca53958659e5e44d05fdab7658" iface="eth0" netns="" Sep 10 00:14:15.047917 containerd[1430]: 2025-09-10 00:14:15.006 [INFO][5855] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="89e42bdb4ebafba989d2b80772fba6857a23b7ca53958659e5e44d05fdab7658" Sep 10 00:14:15.047917 containerd[1430]: 2025-09-10 00:14:15.006 [INFO][5855] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="89e42bdb4ebafba989d2b80772fba6857a23b7ca53958659e5e44d05fdab7658" Sep 10 00:14:15.047917 containerd[1430]: 2025-09-10 00:14:15.026 [INFO][5864] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="89e42bdb4ebafba989d2b80772fba6857a23b7ca53958659e5e44d05fdab7658" HandleID="k8s-pod-network.89e42bdb4ebafba989d2b80772fba6857a23b7ca53958659e5e44d05fdab7658" Workload="localhost-k8s-calico--apiserver--6bc6768489--w628d-eth0" Sep 10 00:14:15.047917 containerd[1430]: 2025-09-10 00:14:15.026 [INFO][5864] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:14:15.047917 containerd[1430]: 2025-09-10 00:14:15.026 [INFO][5864] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:14:15.047917 containerd[1430]: 2025-09-10 00:14:15.038 [WARNING][5864] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="89e42bdb4ebafba989d2b80772fba6857a23b7ca53958659e5e44d05fdab7658" HandleID="k8s-pod-network.89e42bdb4ebafba989d2b80772fba6857a23b7ca53958659e5e44d05fdab7658" Workload="localhost-k8s-calico--apiserver--6bc6768489--w628d-eth0" Sep 10 00:14:15.047917 containerd[1430]: 2025-09-10 00:14:15.038 [INFO][5864] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="89e42bdb4ebafba989d2b80772fba6857a23b7ca53958659e5e44d05fdab7658" HandleID="k8s-pod-network.89e42bdb4ebafba989d2b80772fba6857a23b7ca53958659e5e44d05fdab7658" Workload="localhost-k8s-calico--apiserver--6bc6768489--w628d-eth0" Sep 10 00:14:15.047917 containerd[1430]: 2025-09-10 00:14:15.042 [INFO][5864] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:14:15.047917 containerd[1430]: 2025-09-10 00:14:15.046 [INFO][5855] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="89e42bdb4ebafba989d2b80772fba6857a23b7ca53958659e5e44d05fdab7658" Sep 10 00:14:15.048297 containerd[1430]: time="2025-09-10T00:14:15.047951332Z" level=info msg="TearDown network for sandbox \"89e42bdb4ebafba989d2b80772fba6857a23b7ca53958659e5e44d05fdab7658\" successfully" Sep 10 00:14:15.076538 containerd[1430]: time="2025-09-10T00:14:15.076236087Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"89e42bdb4ebafba989d2b80772fba6857a23b7ca53958659e5e44d05fdab7658\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 10 00:14:15.076538 containerd[1430]: time="2025-09-10T00:14:15.076329607Z" level=info msg="RemovePodSandbox \"89e42bdb4ebafba989d2b80772fba6857a23b7ca53958659e5e44d05fdab7658\" returns successfully" Sep 10 00:14:15.077415 containerd[1430]: time="2025-09-10T00:14:15.077390173Z" level=info msg="StopPodSandbox for \"472415d550a77d5a0800f54596aec97171fd5f8401eee0a00519943d393e801f\"" Sep 10 00:14:15.136909 sshd[5802]: pam_unix(sshd:session): session closed for user core Sep 10 00:14:15.150083 systemd[1]: sshd@14-10.0.0.106:22-10.0.0.1:41162.service: Deactivated successfully. Sep 10 00:14:15.155149 systemd[1]: session-15.scope: Deactivated successfully. Sep 10 00:14:15.158169 systemd-logind[1417]: Session 15 logged out. Waiting for processes to exit. Sep 10 00:14:15.167833 systemd[1]: Started sshd@15-10.0.0.106:22-10.0.0.1:41178.service - OpenSSH per-connection server daemon (10.0.0.1:41178). Sep 10 00:14:15.172539 systemd-logind[1417]: Removed session 15. Sep 10 00:14:15.184350 containerd[1430]: 2025-09-10 00:14:15.133 [WARNING][5883] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="472415d550a77d5a0800f54596aec97171fd5f8401eee0a00519943d393e801f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--9zjzp-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"d1a71852-d2fe-4382-8a25-a3d286247a75", ResourceVersion:"1168", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 13, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e874f8a086ccd5e2aca1101fb74180dd7b1e6345e182eb750c32bcdfc7a47a4d", Pod:"goldmane-7988f88666-9zjzp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calica46d2300cd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:14:15.184350 containerd[1430]: 2025-09-10 00:14:15.134 [INFO][5883] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="472415d550a77d5a0800f54596aec97171fd5f8401eee0a00519943d393e801f" Sep 10 00:14:15.184350 containerd[1430]: 2025-09-10 00:14:15.134 [INFO][5883] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="472415d550a77d5a0800f54596aec97171fd5f8401eee0a00519943d393e801f" iface="eth0" netns="" Sep 10 00:14:15.184350 containerd[1430]: 2025-09-10 00:14:15.134 [INFO][5883] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="472415d550a77d5a0800f54596aec97171fd5f8401eee0a00519943d393e801f" Sep 10 00:14:15.184350 containerd[1430]: 2025-09-10 00:14:15.134 [INFO][5883] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="472415d550a77d5a0800f54596aec97171fd5f8401eee0a00519943d393e801f" Sep 10 00:14:15.184350 containerd[1430]: 2025-09-10 00:14:15.168 [INFO][5894] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="472415d550a77d5a0800f54596aec97171fd5f8401eee0a00519943d393e801f" HandleID="k8s-pod-network.472415d550a77d5a0800f54596aec97171fd5f8401eee0a00519943d393e801f" Workload="localhost-k8s-goldmane--7988f88666--9zjzp-eth0" Sep 10 00:14:15.184350 containerd[1430]: 2025-09-10 00:14:15.168 [INFO][5894] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:14:15.184350 containerd[1430]: 2025-09-10 00:14:15.168 [INFO][5894] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:14:15.184350 containerd[1430]: 2025-09-10 00:14:15.178 [WARNING][5894] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="472415d550a77d5a0800f54596aec97171fd5f8401eee0a00519943d393e801f" HandleID="k8s-pod-network.472415d550a77d5a0800f54596aec97171fd5f8401eee0a00519943d393e801f" Workload="localhost-k8s-goldmane--7988f88666--9zjzp-eth0" Sep 10 00:14:15.184350 containerd[1430]: 2025-09-10 00:14:15.178 [INFO][5894] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="472415d550a77d5a0800f54596aec97171fd5f8401eee0a00519943d393e801f" HandleID="k8s-pod-network.472415d550a77d5a0800f54596aec97171fd5f8401eee0a00519943d393e801f" Workload="localhost-k8s-goldmane--7988f88666--9zjzp-eth0" Sep 10 00:14:15.184350 containerd[1430]: 2025-09-10 00:14:15.180 [INFO][5894] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:14:15.184350 containerd[1430]: 2025-09-10 00:14:15.182 [INFO][5883] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="472415d550a77d5a0800f54596aec97171fd5f8401eee0a00519943d393e801f" Sep 10 00:14:15.184892 containerd[1430]: time="2025-09-10T00:14:15.184381476Z" level=info msg="TearDown network for sandbox \"472415d550a77d5a0800f54596aec97171fd5f8401eee0a00519943d393e801f\" successfully" Sep 10 00:14:15.184892 containerd[1430]: time="2025-09-10T00:14:15.184406316Z" level=info msg="StopPodSandbox for \"472415d550a77d5a0800f54596aec97171fd5f8401eee0a00519943d393e801f\" returns successfully" Sep 10 00:14:15.184892 containerd[1430]: time="2025-09-10T00:14:15.184850999Z" level=info msg="RemovePodSandbox for \"472415d550a77d5a0800f54596aec97171fd5f8401eee0a00519943d393e801f\"" Sep 10 00:14:15.184892 containerd[1430]: time="2025-09-10T00:14:15.184877199Z" level=info msg="Forcibly stopping sandbox \"472415d550a77d5a0800f54596aec97171fd5f8401eee0a00519943d393e801f\"" Sep 10 00:14:15.204462 sshd[5904]: Accepted publickey for core from 10.0.0.1 port 41178 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o Sep 10 00:14:15.205840 sshd[5904]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) 
Sep 10 00:14:15.210765 systemd-logind[1417]: New session 16 of user core. Sep 10 00:14:15.220054 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 10 00:14:15.254997 containerd[1430]: 2025-09-10 00:14:15.224 [WARNING][5918] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="472415d550a77d5a0800f54596aec97171fd5f8401eee0a00519943d393e801f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--9zjzp-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"d1a71852-d2fe-4382-8a25-a3d286247a75", ResourceVersion:"1168", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 13, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e874f8a086ccd5e2aca1101fb74180dd7b1e6345e182eb750c32bcdfc7a47a4d", Pod:"goldmane-7988f88666-9zjzp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calica46d2300cd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:14:15.254997 containerd[1430]: 2025-09-10 00:14:15.224 [INFO][5918] cni-plugin/k8s.go 640: 
Cleaning up netns ContainerID="472415d550a77d5a0800f54596aec97171fd5f8401eee0a00519943d393e801f" Sep 10 00:14:15.254997 containerd[1430]: 2025-09-10 00:14:15.224 [INFO][5918] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="472415d550a77d5a0800f54596aec97171fd5f8401eee0a00519943d393e801f" iface="eth0" netns="" Sep 10 00:14:15.254997 containerd[1430]: 2025-09-10 00:14:15.224 [INFO][5918] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="472415d550a77d5a0800f54596aec97171fd5f8401eee0a00519943d393e801f" Sep 10 00:14:15.254997 containerd[1430]: 2025-09-10 00:14:15.224 [INFO][5918] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="472415d550a77d5a0800f54596aec97171fd5f8401eee0a00519943d393e801f" Sep 10 00:14:15.254997 containerd[1430]: 2025-09-10 00:14:15.242 [INFO][5928] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="472415d550a77d5a0800f54596aec97171fd5f8401eee0a00519943d393e801f" HandleID="k8s-pod-network.472415d550a77d5a0800f54596aec97171fd5f8401eee0a00519943d393e801f" Workload="localhost-k8s-goldmane--7988f88666--9zjzp-eth0" Sep 10 00:14:15.254997 containerd[1430]: 2025-09-10 00:14:15.242 [INFO][5928] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:14:15.254997 containerd[1430]: 2025-09-10 00:14:15.242 [INFO][5928] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:14:15.254997 containerd[1430]: 2025-09-10 00:14:15.250 [WARNING][5928] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="472415d550a77d5a0800f54596aec97171fd5f8401eee0a00519943d393e801f" HandleID="k8s-pod-network.472415d550a77d5a0800f54596aec97171fd5f8401eee0a00519943d393e801f" Workload="localhost-k8s-goldmane--7988f88666--9zjzp-eth0" Sep 10 00:14:15.254997 containerd[1430]: 2025-09-10 00:14:15.250 [INFO][5928] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="472415d550a77d5a0800f54596aec97171fd5f8401eee0a00519943d393e801f" HandleID="k8s-pod-network.472415d550a77d5a0800f54596aec97171fd5f8401eee0a00519943d393e801f" Workload="localhost-k8s-goldmane--7988f88666--9zjzp-eth0" Sep 10 00:14:15.254997 containerd[1430]: 2025-09-10 00:14:15.251 [INFO][5928] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:14:15.254997 containerd[1430]: 2025-09-10 00:14:15.253 [INFO][5918] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="472415d550a77d5a0800f54596aec97171fd5f8401eee0a00519943d393e801f" Sep 10 00:14:15.254997 containerd[1430]: time="2025-09-10T00:14:15.255077981Z" level=info msg="TearDown network for sandbox \"472415d550a77d5a0800f54596aec97171fd5f8401eee0a00519943d393e801f\" successfully" Sep 10 00:14:15.258690 containerd[1430]: time="2025-09-10T00:14:15.258618161Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"472415d550a77d5a0800f54596aec97171fd5f8401eee0a00519943d393e801f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 10 00:14:15.258967 containerd[1430]: time="2025-09-10T00:14:15.258867362Z" level=info msg="RemovePodSandbox \"472415d550a77d5a0800f54596aec97171fd5f8401eee0a00519943d393e801f\" returns successfully" Sep 10 00:14:15.259443 containerd[1430]: time="2025-09-10T00:14:15.259401365Z" level=info msg="StopPodSandbox for \"1d818c7a41fd63b7d4631ae852b8897391bd867cf4a341f468a4a1dc715dec5d\"" Sep 10 00:14:15.324429 containerd[1430]: 2025-09-10 00:14:15.292 [WARNING][5945] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1d818c7a41fd63b7d4631ae852b8897391bd867cf4a341f468a4a1dc715dec5d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--9575c-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"138bcf10-bfb3-4e83-915f-37b2d9c80ead", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 13, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"78a011b78a26092b5c8f33ef62e4f842e4057dd80a615447c08ae92476b92627", Pod:"coredns-7c65d6cfc9-9575c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia0d3215865b", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:14:15.324429 containerd[1430]: 2025-09-10 00:14:15.292 [INFO][5945] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1d818c7a41fd63b7d4631ae852b8897391bd867cf4a341f468a4a1dc715dec5d" Sep 10 00:14:15.324429 containerd[1430]: 2025-09-10 00:14:15.292 [INFO][5945] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1d818c7a41fd63b7d4631ae852b8897391bd867cf4a341f468a4a1dc715dec5d" iface="eth0" netns="" Sep 10 00:14:15.324429 containerd[1430]: 2025-09-10 00:14:15.292 [INFO][5945] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1d818c7a41fd63b7d4631ae852b8897391bd867cf4a341f468a4a1dc715dec5d" Sep 10 00:14:15.324429 containerd[1430]: 2025-09-10 00:14:15.292 [INFO][5945] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1d818c7a41fd63b7d4631ae852b8897391bd867cf4a341f468a4a1dc715dec5d" Sep 10 00:14:15.324429 containerd[1430]: 2025-09-10 00:14:15.310 [INFO][5954] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1d818c7a41fd63b7d4631ae852b8897391bd867cf4a341f468a4a1dc715dec5d" HandleID="k8s-pod-network.1d818c7a41fd63b7d4631ae852b8897391bd867cf4a341f468a4a1dc715dec5d" Workload="localhost-k8s-coredns--7c65d6cfc9--9575c-eth0" Sep 10 00:14:15.324429 containerd[1430]: 2025-09-10 00:14:15.310 [INFO][5954] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 10 00:14:15.324429 containerd[1430]: 2025-09-10 00:14:15.310 [INFO][5954] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:14:15.324429 containerd[1430]: 2025-09-10 00:14:15.319 [WARNING][5954] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1d818c7a41fd63b7d4631ae852b8897391bd867cf4a341f468a4a1dc715dec5d" HandleID="k8s-pod-network.1d818c7a41fd63b7d4631ae852b8897391bd867cf4a341f468a4a1dc715dec5d" Workload="localhost-k8s-coredns--7c65d6cfc9--9575c-eth0" Sep 10 00:14:15.324429 containerd[1430]: 2025-09-10 00:14:15.319 [INFO][5954] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1d818c7a41fd63b7d4631ae852b8897391bd867cf4a341f468a4a1dc715dec5d" HandleID="k8s-pod-network.1d818c7a41fd63b7d4631ae852b8897391bd867cf4a341f468a4a1dc715dec5d" Workload="localhost-k8s-coredns--7c65d6cfc9--9575c-eth0" Sep 10 00:14:15.324429 containerd[1430]: 2025-09-10 00:14:15.320 [INFO][5954] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:14:15.324429 containerd[1430]: 2025-09-10 00:14:15.322 [INFO][5945] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="1d818c7a41fd63b7d4631ae852b8897391bd867cf4a341f468a4a1dc715dec5d" Sep 10 00:14:15.325487 containerd[1430]: time="2025-09-10T00:14:15.324465199Z" level=info msg="TearDown network for sandbox \"1d818c7a41fd63b7d4631ae852b8897391bd867cf4a341f468a4a1dc715dec5d\" successfully" Sep 10 00:14:15.325487 containerd[1430]: time="2025-09-10T00:14:15.324488960Z" level=info msg="StopPodSandbox for \"1d818c7a41fd63b7d4631ae852b8897391bd867cf4a341f468a4a1dc715dec5d\" returns successfully" Sep 10 00:14:15.325487 containerd[1430]: time="2025-09-10T00:14:15.324947882Z" level=info msg="RemovePodSandbox for \"1d818c7a41fd63b7d4631ae852b8897391bd867cf4a341f468a4a1dc715dec5d\"" Sep 10 00:14:15.325487 containerd[1430]: time="2025-09-10T00:14:15.324978562Z" level=info msg="Forcibly stopping sandbox \"1d818c7a41fd63b7d4631ae852b8897391bd867cf4a341f468a4a1dc715dec5d\"" Sep 10 00:14:15.401141 containerd[1430]: 2025-09-10 00:14:15.365 [WARNING][5977] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1d818c7a41fd63b7d4631ae852b8897391bd867cf4a341f468a4a1dc715dec5d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--9575c-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"138bcf10-bfb3-4e83-915f-37b2d9c80ead", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 13, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"78a011b78a26092b5c8f33ef62e4f842e4057dd80a615447c08ae92476b92627", Pod:"coredns-7c65d6cfc9-9575c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia0d3215865b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:14:15.401141 containerd[1430]: 2025-09-10 00:14:15.365 [INFO][5977] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1d818c7a41fd63b7d4631ae852b8897391bd867cf4a341f468a4a1dc715dec5d" Sep 10 00:14:15.401141 containerd[1430]: 2025-09-10 00:14:15.365 [INFO][5977] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1d818c7a41fd63b7d4631ae852b8897391bd867cf4a341f468a4a1dc715dec5d" iface="eth0" netns="" Sep 10 00:14:15.401141 containerd[1430]: 2025-09-10 00:14:15.365 [INFO][5977] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1d818c7a41fd63b7d4631ae852b8897391bd867cf4a341f468a4a1dc715dec5d" Sep 10 00:14:15.401141 containerd[1430]: 2025-09-10 00:14:15.365 [INFO][5977] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1d818c7a41fd63b7d4631ae852b8897391bd867cf4a341f468a4a1dc715dec5d" Sep 10 00:14:15.401141 containerd[1430]: 2025-09-10 00:14:15.384 [INFO][5987] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1d818c7a41fd63b7d4631ae852b8897391bd867cf4a341f468a4a1dc715dec5d" HandleID="k8s-pod-network.1d818c7a41fd63b7d4631ae852b8897391bd867cf4a341f468a4a1dc715dec5d" Workload="localhost-k8s-coredns--7c65d6cfc9--9575c-eth0" Sep 10 00:14:15.401141 containerd[1430]: 2025-09-10 00:14:15.384 [INFO][5987] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:14:15.401141 containerd[1430]: 2025-09-10 00:14:15.384 [INFO][5987] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:14:15.401141 containerd[1430]: 2025-09-10 00:14:15.395 [WARNING][5987] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1d818c7a41fd63b7d4631ae852b8897391bd867cf4a341f468a4a1dc715dec5d" HandleID="k8s-pod-network.1d818c7a41fd63b7d4631ae852b8897391bd867cf4a341f468a4a1dc715dec5d" Workload="localhost-k8s-coredns--7c65d6cfc9--9575c-eth0" Sep 10 00:14:15.401141 containerd[1430]: 2025-09-10 00:14:15.395 [INFO][5987] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1d818c7a41fd63b7d4631ae852b8897391bd867cf4a341f468a4a1dc715dec5d" HandleID="k8s-pod-network.1d818c7a41fd63b7d4631ae852b8897391bd867cf4a341f468a4a1dc715dec5d" Workload="localhost-k8s-coredns--7c65d6cfc9--9575c-eth0" Sep 10 00:14:15.401141 containerd[1430]: 2025-09-10 00:14:15.397 [INFO][5987] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:14:15.401141 containerd[1430]: 2025-09-10 00:14:15.399 [INFO][5977] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1d818c7a41fd63b7d4631ae852b8897391bd867cf4a341f468a4a1dc715dec5d" Sep 10 00:14:15.401141 containerd[1430]: time="2025-09-10T00:14:15.401120617Z" level=info msg="TearDown network for sandbox \"1d818c7a41fd63b7d4631ae852b8897391bd867cf4a341f468a4a1dc715dec5d\" successfully" Sep 10 00:14:15.406157 containerd[1430]: time="2025-09-10T00:14:15.405968164Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1d818c7a41fd63b7d4631ae852b8897391bd867cf4a341f468a4a1dc715dec5d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 10 00:14:15.406157 containerd[1430]: time="2025-09-10T00:14:15.406050364Z" level=info msg="RemovePodSandbox \"1d818c7a41fd63b7d4631ae852b8897391bd867cf4a341f468a4a1dc715dec5d\" returns successfully" Sep 10 00:14:15.406928 containerd[1430]: time="2025-09-10T00:14:15.406894809Z" level=info msg="StopPodSandbox for \"deb389a8cde9c78f119ddad7b44b9900de9441dbb8ede2f6cd31993d7b7967c8\"" Sep 10 00:14:15.476735 containerd[1430]: 2025-09-10 00:14:15.438 [WARNING][6006] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="deb389a8cde9c78f119ddad7b44b9900de9441dbb8ede2f6cd31993d7b7967c8" WorkloadEndpoint="localhost-k8s-whisker--78574f96f6--6lwj2-eth0" Sep 10 00:14:15.476735 containerd[1430]: 2025-09-10 00:14:15.438 [INFO][6006] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="deb389a8cde9c78f119ddad7b44b9900de9441dbb8ede2f6cd31993d7b7967c8" Sep 10 00:14:15.476735 containerd[1430]: 2025-09-10 00:14:15.438 [INFO][6006] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="deb389a8cde9c78f119ddad7b44b9900de9441dbb8ede2f6cd31993d7b7967c8" iface="eth0" netns="" Sep 10 00:14:15.476735 containerd[1430]: 2025-09-10 00:14:15.438 [INFO][6006] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="deb389a8cde9c78f119ddad7b44b9900de9441dbb8ede2f6cd31993d7b7967c8" Sep 10 00:14:15.476735 containerd[1430]: 2025-09-10 00:14:15.438 [INFO][6006] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="deb389a8cde9c78f119ddad7b44b9900de9441dbb8ede2f6cd31993d7b7967c8" Sep 10 00:14:15.476735 containerd[1430]: 2025-09-10 00:14:15.460 [INFO][6015] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="deb389a8cde9c78f119ddad7b44b9900de9441dbb8ede2f6cd31993d7b7967c8" HandleID="k8s-pod-network.deb389a8cde9c78f119ddad7b44b9900de9441dbb8ede2f6cd31993d7b7967c8" Workload="localhost-k8s-whisker--78574f96f6--6lwj2-eth0" Sep 10 00:14:15.476735 containerd[1430]: 2025-09-10 00:14:15.460 [INFO][6015] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:14:15.476735 containerd[1430]: 2025-09-10 00:14:15.460 [INFO][6015] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:14:15.476735 containerd[1430]: 2025-09-10 00:14:15.469 [WARNING][6015] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="deb389a8cde9c78f119ddad7b44b9900de9441dbb8ede2f6cd31993d7b7967c8" HandleID="k8s-pod-network.deb389a8cde9c78f119ddad7b44b9900de9441dbb8ede2f6cd31993d7b7967c8" Workload="localhost-k8s-whisker--78574f96f6--6lwj2-eth0" Sep 10 00:14:15.476735 containerd[1430]: 2025-09-10 00:14:15.469 [INFO][6015] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="deb389a8cde9c78f119ddad7b44b9900de9441dbb8ede2f6cd31993d7b7967c8" HandleID="k8s-pod-network.deb389a8cde9c78f119ddad7b44b9900de9441dbb8ede2f6cd31993d7b7967c8" Workload="localhost-k8s-whisker--78574f96f6--6lwj2-eth0" Sep 10 00:14:15.476735 containerd[1430]: 2025-09-10 00:14:15.470 [INFO][6015] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:14:15.476735 containerd[1430]: 2025-09-10 00:14:15.472 [INFO][6006] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="deb389a8cde9c78f119ddad7b44b9900de9441dbb8ede2f6cd31993d7b7967c8" Sep 10 00:14:15.477088 containerd[1430]: time="2025-09-10T00:14:15.476773110Z" level=info msg="TearDown network for sandbox \"deb389a8cde9c78f119ddad7b44b9900de9441dbb8ede2f6cd31993d7b7967c8\" successfully" Sep 10 00:14:15.477088 containerd[1430]: time="2025-09-10T00:14:15.476798230Z" level=info msg="StopPodSandbox for \"deb389a8cde9c78f119ddad7b44b9900de9441dbb8ede2f6cd31993d7b7967c8\" returns successfully" Sep 10 00:14:15.477658 containerd[1430]: time="2025-09-10T00:14:15.477351873Z" level=info msg="RemovePodSandbox for \"deb389a8cde9c78f119ddad7b44b9900de9441dbb8ede2f6cd31993d7b7967c8\"" Sep 10 00:14:15.477658 containerd[1430]: time="2025-09-10T00:14:15.477383513Z" level=info msg="Forcibly stopping sandbox \"deb389a8cde9c78f119ddad7b44b9900de9441dbb8ede2f6cd31993d7b7967c8\"" Sep 10 00:14:15.567898 containerd[1430]: 2025-09-10 00:14:15.524 [WARNING][6032] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="deb389a8cde9c78f119ddad7b44b9900de9441dbb8ede2f6cd31993d7b7967c8" WorkloadEndpoint="localhost-k8s-whisker--78574f96f6--6lwj2-eth0" Sep 10 00:14:15.567898 containerd[1430]: 2025-09-10 00:14:15.524 [INFO][6032] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="deb389a8cde9c78f119ddad7b44b9900de9441dbb8ede2f6cd31993d7b7967c8" Sep 10 00:14:15.567898 containerd[1430]: 2025-09-10 00:14:15.524 [INFO][6032] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="deb389a8cde9c78f119ddad7b44b9900de9441dbb8ede2f6cd31993d7b7967c8" iface="eth0" netns="" Sep 10 00:14:15.567898 containerd[1430]: 2025-09-10 00:14:15.524 [INFO][6032] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="deb389a8cde9c78f119ddad7b44b9900de9441dbb8ede2f6cd31993d7b7967c8" Sep 10 00:14:15.567898 containerd[1430]: 2025-09-10 00:14:15.524 [INFO][6032] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="deb389a8cde9c78f119ddad7b44b9900de9441dbb8ede2f6cd31993d7b7967c8" Sep 10 00:14:15.567898 containerd[1430]: 2025-09-10 00:14:15.546 [INFO][6040] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="deb389a8cde9c78f119ddad7b44b9900de9441dbb8ede2f6cd31993d7b7967c8" HandleID="k8s-pod-network.deb389a8cde9c78f119ddad7b44b9900de9441dbb8ede2f6cd31993d7b7967c8" Workload="localhost-k8s-whisker--78574f96f6--6lwj2-eth0" Sep 10 00:14:15.567898 containerd[1430]: 2025-09-10 00:14:15.547 [INFO][6040] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:14:15.567898 containerd[1430]: 2025-09-10 00:14:15.547 [INFO][6040] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:14:15.567898 containerd[1430]: 2025-09-10 00:14:15.561 [WARNING][6040] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="deb389a8cde9c78f119ddad7b44b9900de9441dbb8ede2f6cd31993d7b7967c8" HandleID="k8s-pod-network.deb389a8cde9c78f119ddad7b44b9900de9441dbb8ede2f6cd31993d7b7967c8" Workload="localhost-k8s-whisker--78574f96f6--6lwj2-eth0" Sep 10 00:14:15.567898 containerd[1430]: 2025-09-10 00:14:15.561 [INFO][6040] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="deb389a8cde9c78f119ddad7b44b9900de9441dbb8ede2f6cd31993d7b7967c8" HandleID="k8s-pod-network.deb389a8cde9c78f119ddad7b44b9900de9441dbb8ede2f6cd31993d7b7967c8" Workload="localhost-k8s-whisker--78574f96f6--6lwj2-eth0" Sep 10 00:14:15.567898 containerd[1430]: 2025-09-10 00:14:15.563 [INFO][6040] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:14:15.567898 containerd[1430]: 2025-09-10 00:14:15.565 [INFO][6032] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="deb389a8cde9c78f119ddad7b44b9900de9441dbb8ede2f6cd31993d7b7967c8" Sep 10 00:14:15.568245 containerd[1430]: time="2025-09-10T00:14:15.567943206Z" level=info msg="TearDown network for sandbox \"deb389a8cde9c78f119ddad7b44b9900de9441dbb8ede2f6cd31993d7b7967c8\" successfully" Sep 10 00:14:15.611278 containerd[1430]: time="2025-09-10T00:14:15.611227242Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"deb389a8cde9c78f119ddad7b44b9900de9441dbb8ede2f6cd31993d7b7967c8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 10 00:14:15.611415 containerd[1430]: time="2025-09-10T00:14:15.611300403Z" level=info msg="RemovePodSandbox \"deb389a8cde9c78f119ddad7b44b9900de9441dbb8ede2f6cd31993d7b7967c8\" returns successfully" Sep 10 00:14:15.611798 containerd[1430]: time="2025-09-10T00:14:15.611768845Z" level=info msg="StopPodSandbox for \"2e115a12ea45474c3c9e17a2166489c5cf6672f00d5896ee6a3a6628774baf40\"" Sep 10 00:14:15.654072 sshd[5904]: pam_unix(sshd:session): session closed for user core Sep 10 00:14:15.664337 systemd[1]: sshd@15-10.0.0.106:22-10.0.0.1:41178.service: Deactivated successfully. Sep 10 00:14:15.666412 systemd[1]: session-16.scope: Deactivated successfully. Sep 10 00:14:15.668150 systemd-logind[1417]: Session 16 logged out. Waiting for processes to exit. Sep 10 00:14:15.674084 systemd[1]: Started sshd@16-10.0.0.106:22-10.0.0.1:41186.service - OpenSSH per-connection server daemon (10.0.0.1:41186). Sep 10 00:14:15.675928 systemd-logind[1417]: Removed session 16. Sep 10 00:14:15.708789 containerd[1430]: 2025-09-10 00:14:15.665 [WARNING][6057] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2e115a12ea45474c3c9e17a2166489c5cf6672f00d5896ee6a3a6628774baf40" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7f764b5b64--dd89z-eth0", GenerateName:"calico-kube-controllers-7f764b5b64-", Namespace:"calico-system", SelfLink:"", UID:"2f58e97e-6e12-4135-b90c-0a1b0b407422", ResourceVersion:"1215", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 13, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f764b5b64", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0a27bced0f808e0bec550c849e07387a6f59054710e9802eb2a8367c7616872e", Pod:"calico-kube-controllers-7f764b5b64-dd89z", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie40bc9eb9c4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:14:15.708789 containerd[1430]: 2025-09-10 00:14:15.665 [INFO][6057] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2e115a12ea45474c3c9e17a2166489c5cf6672f00d5896ee6a3a6628774baf40" Sep 10 00:14:15.708789 containerd[1430]: 2025-09-10 00:14:15.665 [INFO][6057] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="2e115a12ea45474c3c9e17a2166489c5cf6672f00d5896ee6a3a6628774baf40" iface="eth0" netns="" Sep 10 00:14:15.708789 containerd[1430]: 2025-09-10 00:14:15.665 [INFO][6057] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2e115a12ea45474c3c9e17a2166489c5cf6672f00d5896ee6a3a6628774baf40" Sep 10 00:14:15.708789 containerd[1430]: 2025-09-10 00:14:15.665 [INFO][6057] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2e115a12ea45474c3c9e17a2166489c5cf6672f00d5896ee6a3a6628774baf40" Sep 10 00:14:15.708789 containerd[1430]: 2025-09-10 00:14:15.692 [INFO][6068] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2e115a12ea45474c3c9e17a2166489c5cf6672f00d5896ee6a3a6628774baf40" HandleID="k8s-pod-network.2e115a12ea45474c3c9e17a2166489c5cf6672f00d5896ee6a3a6628774baf40" Workload="localhost-k8s-calico--kube--controllers--7f764b5b64--dd89z-eth0" Sep 10 00:14:15.708789 containerd[1430]: 2025-09-10 00:14:15.692 [INFO][6068] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:14:15.708789 containerd[1430]: 2025-09-10 00:14:15.692 [INFO][6068] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:14:15.708789 containerd[1430]: 2025-09-10 00:14:15.704 [WARNING][6068] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2e115a12ea45474c3c9e17a2166489c5cf6672f00d5896ee6a3a6628774baf40" HandleID="k8s-pod-network.2e115a12ea45474c3c9e17a2166489c5cf6672f00d5896ee6a3a6628774baf40" Workload="localhost-k8s-calico--kube--controllers--7f764b5b64--dd89z-eth0" Sep 10 00:14:15.708789 containerd[1430]: 2025-09-10 00:14:15.704 [INFO][6068] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2e115a12ea45474c3c9e17a2166489c5cf6672f00d5896ee6a3a6628774baf40" HandleID="k8s-pod-network.2e115a12ea45474c3c9e17a2166489c5cf6672f00d5896ee6a3a6628774baf40" Workload="localhost-k8s-calico--kube--controllers--7f764b5b64--dd89z-eth0" Sep 10 00:14:15.708789 containerd[1430]: 2025-09-10 00:14:15.705 [INFO][6068] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:14:15.708789 containerd[1430]: 2025-09-10 00:14:15.707 [INFO][6057] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2e115a12ea45474c3c9e17a2166489c5cf6672f00d5896ee6a3a6628774baf40" Sep 10 00:14:15.709484 containerd[1430]: time="2025-09-10T00:14:15.709223736Z" level=info msg="TearDown network for sandbox \"2e115a12ea45474c3c9e17a2166489c5cf6672f00d5896ee6a3a6628774baf40\" successfully" Sep 10 00:14:15.709484 containerd[1430]: time="2025-09-10T00:14:15.709270457Z" level=info msg="StopPodSandbox for \"2e115a12ea45474c3c9e17a2166489c5cf6672f00d5896ee6a3a6628774baf40\" returns successfully" Sep 10 00:14:15.709835 containerd[1430]: time="2025-09-10T00:14:15.709808540Z" level=info msg="RemovePodSandbox for \"2e115a12ea45474c3c9e17a2166489c5cf6672f00d5896ee6a3a6628774baf40\"" Sep 10 00:14:15.709942 containerd[1430]: time="2025-09-10T00:14:15.709841540Z" level=info msg="Forcibly stopping sandbox \"2e115a12ea45474c3c9e17a2166489c5cf6672f00d5896ee6a3a6628774baf40\"" Sep 10 00:14:15.717978 sshd[6074]: Accepted publickey for core from 10.0.0.1 port 41186 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o Sep 10 00:14:15.719452 sshd[6074]: pam_unix(sshd:session): session opened for 
user core(uid=500) by core(uid=0) Sep 10 00:14:15.724092 systemd-logind[1417]: New session 17 of user core. Sep 10 00:14:15.728757 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 10 00:14:15.776625 containerd[1430]: 2025-09-10 00:14:15.744 [WARNING][6088] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2e115a12ea45474c3c9e17a2166489c5cf6672f00d5896ee6a3a6628774baf40" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7f764b5b64--dd89z-eth0", GenerateName:"calico-kube-controllers-7f764b5b64-", Namespace:"calico-system", SelfLink:"", UID:"2f58e97e-6e12-4135-b90c-0a1b0b407422", ResourceVersion:"1215", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 13, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f764b5b64", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0a27bced0f808e0bec550c849e07387a6f59054710e9802eb2a8367c7616872e", Pod:"calico-kube-controllers-7f764b5b64-dd89z", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie40bc9eb9c4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:14:15.776625 containerd[1430]: 2025-09-10 00:14:15.744 [INFO][6088] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2e115a12ea45474c3c9e17a2166489c5cf6672f00d5896ee6a3a6628774baf40" Sep 10 00:14:15.776625 containerd[1430]: 2025-09-10 00:14:15.744 [INFO][6088] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2e115a12ea45474c3c9e17a2166489c5cf6672f00d5896ee6a3a6628774baf40" iface="eth0" netns="" Sep 10 00:14:15.776625 containerd[1430]: 2025-09-10 00:14:15.744 [INFO][6088] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2e115a12ea45474c3c9e17a2166489c5cf6672f00d5896ee6a3a6628774baf40" Sep 10 00:14:15.776625 containerd[1430]: 2025-09-10 00:14:15.744 [INFO][6088] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2e115a12ea45474c3c9e17a2166489c5cf6672f00d5896ee6a3a6628774baf40" Sep 10 00:14:15.776625 containerd[1430]: 2025-09-10 00:14:15.762 [INFO][6098] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2e115a12ea45474c3c9e17a2166489c5cf6672f00d5896ee6a3a6628774baf40" HandleID="k8s-pod-network.2e115a12ea45474c3c9e17a2166489c5cf6672f00d5896ee6a3a6628774baf40" Workload="localhost-k8s-calico--kube--controllers--7f764b5b64--dd89z-eth0" Sep 10 00:14:15.776625 containerd[1430]: 2025-09-10 00:14:15.762 [INFO][6098] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:14:15.776625 containerd[1430]: 2025-09-10 00:14:15.762 [INFO][6098] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:14:15.776625 containerd[1430]: 2025-09-10 00:14:15.771 [WARNING][6098] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2e115a12ea45474c3c9e17a2166489c5cf6672f00d5896ee6a3a6628774baf40" HandleID="k8s-pod-network.2e115a12ea45474c3c9e17a2166489c5cf6672f00d5896ee6a3a6628774baf40" Workload="localhost-k8s-calico--kube--controllers--7f764b5b64--dd89z-eth0" Sep 10 00:14:15.776625 containerd[1430]: 2025-09-10 00:14:15.771 [INFO][6098] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2e115a12ea45474c3c9e17a2166489c5cf6672f00d5896ee6a3a6628774baf40" HandleID="k8s-pod-network.2e115a12ea45474c3c9e17a2166489c5cf6672f00d5896ee6a3a6628774baf40" Workload="localhost-k8s-calico--kube--controllers--7f764b5b64--dd89z-eth0" Sep 10 00:14:15.776625 containerd[1430]: 2025-09-10 00:14:15.772 [INFO][6098] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:14:15.776625 containerd[1430]: 2025-09-10 00:14:15.774 [INFO][6088] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2e115a12ea45474c3c9e17a2166489c5cf6672f00d5896ee6a3a6628774baf40" Sep 10 00:14:15.777075 containerd[1430]: time="2025-09-10T00:14:15.776664744Z" level=info msg="TearDown network for sandbox \"2e115a12ea45474c3c9e17a2166489c5cf6672f00d5896ee6a3a6628774baf40\" successfully" Sep 10 00:14:15.779608 containerd[1430]: time="2025-09-10T00:14:15.779558560Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2e115a12ea45474c3c9e17a2166489c5cf6672f00d5896ee6a3a6628774baf40\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 10 00:14:15.779706 containerd[1430]: time="2025-09-10T00:14:15.779660400Z" level=info msg="RemovePodSandbox \"2e115a12ea45474c3c9e17a2166489c5cf6672f00d5896ee6a3a6628774baf40\" returns successfully" Sep 10 00:14:15.780562 containerd[1430]: time="2025-09-10T00:14:15.780245323Z" level=info msg="StopPodSandbox for \"2d43f127c1ddba66236343c437ddd27b49d78d0eaa7075097b8975be4eb06d4c\"" Sep 10 00:14:15.852209 containerd[1430]: 2025-09-10 00:14:15.816 [WARNING][6124] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2d43f127c1ddba66236343c437ddd27b49d78d0eaa7075097b8975be4eb06d4c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6bc6768489--gxbgn-eth0", GenerateName:"calico-apiserver-6bc6768489-", Namespace:"calico-apiserver", SelfLink:"", UID:"34aa417d-a639-4656-862d-aac7f831a9b9", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 13, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bc6768489", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d524110e64c55aa32146be1fcf958ecf06169f0170e7e32e1837750e2164a58e", Pod:"calico-apiserver-6bc6768489-gxbgn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5b2a78a0105", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:14:15.852209 containerd[1430]: 2025-09-10 00:14:15.816 [INFO][6124] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2d43f127c1ddba66236343c437ddd27b49d78d0eaa7075097b8975be4eb06d4c" Sep 10 00:14:15.852209 containerd[1430]: 2025-09-10 00:14:15.816 [INFO][6124] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2d43f127c1ddba66236343c437ddd27b49d78d0eaa7075097b8975be4eb06d4c" iface="eth0" netns="" Sep 10 00:14:15.852209 containerd[1430]: 2025-09-10 00:14:15.816 [INFO][6124] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2d43f127c1ddba66236343c437ddd27b49d78d0eaa7075097b8975be4eb06d4c" Sep 10 00:14:15.852209 containerd[1430]: 2025-09-10 00:14:15.816 [INFO][6124] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2d43f127c1ddba66236343c437ddd27b49d78d0eaa7075097b8975be4eb06d4c" Sep 10 00:14:15.852209 containerd[1430]: 2025-09-10 00:14:15.838 [INFO][6134] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2d43f127c1ddba66236343c437ddd27b49d78d0eaa7075097b8975be4eb06d4c" HandleID="k8s-pod-network.2d43f127c1ddba66236343c437ddd27b49d78d0eaa7075097b8975be4eb06d4c" Workload="localhost-k8s-calico--apiserver--6bc6768489--gxbgn-eth0" Sep 10 00:14:15.852209 containerd[1430]: 2025-09-10 00:14:15.838 [INFO][6134] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:14:15.852209 containerd[1430]: 2025-09-10 00:14:15.838 [INFO][6134] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:14:15.852209 containerd[1430]: 2025-09-10 00:14:15.847 [WARNING][6134] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2d43f127c1ddba66236343c437ddd27b49d78d0eaa7075097b8975be4eb06d4c" HandleID="k8s-pod-network.2d43f127c1ddba66236343c437ddd27b49d78d0eaa7075097b8975be4eb06d4c" Workload="localhost-k8s-calico--apiserver--6bc6768489--gxbgn-eth0" Sep 10 00:14:15.852209 containerd[1430]: 2025-09-10 00:14:15.847 [INFO][6134] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2d43f127c1ddba66236343c437ddd27b49d78d0eaa7075097b8975be4eb06d4c" HandleID="k8s-pod-network.2d43f127c1ddba66236343c437ddd27b49d78d0eaa7075097b8975be4eb06d4c" Workload="localhost-k8s-calico--apiserver--6bc6768489--gxbgn-eth0" Sep 10 00:14:15.852209 containerd[1430]: 2025-09-10 00:14:15.848 [INFO][6134] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:14:15.852209 containerd[1430]: 2025-09-10 00:14:15.850 [INFO][6124] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2d43f127c1ddba66236343c437ddd27b49d78d0eaa7075097b8975be4eb06d4c" Sep 10 00:14:15.852958 containerd[1430]: time="2025-09-10T00:14:15.852246196Z" level=info msg="TearDown network for sandbox \"2d43f127c1ddba66236343c437ddd27b49d78d0eaa7075097b8975be4eb06d4c\" successfully" Sep 10 00:14:15.852958 containerd[1430]: time="2025-09-10T00:14:15.852270076Z" level=info msg="StopPodSandbox for \"2d43f127c1ddba66236343c437ddd27b49d78d0eaa7075097b8975be4eb06d4c\" returns successfully" Sep 10 00:14:15.853281 containerd[1430]: time="2025-09-10T00:14:15.853258761Z" level=info msg="RemovePodSandbox for \"2d43f127c1ddba66236343c437ddd27b49d78d0eaa7075097b8975be4eb06d4c\"" Sep 10 00:14:15.853329 containerd[1430]: time="2025-09-10T00:14:15.853294242Z" level=info msg="Forcibly stopping sandbox \"2d43f127c1ddba66236343c437ddd27b49d78d0eaa7075097b8975be4eb06d4c\"" Sep 10 00:14:15.870390 sshd[6074]: pam_unix(sshd:session): session closed for user core Sep 10 00:14:15.874773 systemd[1]: sshd@16-10.0.0.106:22-10.0.0.1:41186.service: Deactivated successfully. 
Sep 10 00:14:15.876895 systemd[1]: session-17.scope: Deactivated successfully. Sep 10 00:14:15.877566 systemd-logind[1417]: Session 17 logged out. Waiting for processes to exit. Sep 10 00:14:15.878419 systemd-logind[1417]: Removed session 17. Sep 10 00:14:15.924288 containerd[1430]: 2025-09-10 00:14:15.890 [WARNING][6152] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2d43f127c1ddba66236343c437ddd27b49d78d0eaa7075097b8975be4eb06d4c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6bc6768489--gxbgn-eth0", GenerateName:"calico-apiserver-6bc6768489-", Namespace:"calico-apiserver", SelfLink:"", UID:"34aa417d-a639-4656-862d-aac7f831a9b9", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 13, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bc6768489", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d524110e64c55aa32146be1fcf958ecf06169f0170e7e32e1837750e2164a58e", Pod:"calico-apiserver-6bc6768489-gxbgn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5b2a78a0105", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:14:15.924288 containerd[1430]: 2025-09-10 00:14:15.890 [INFO][6152] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2d43f127c1ddba66236343c437ddd27b49d78d0eaa7075097b8975be4eb06d4c" Sep 10 00:14:15.924288 containerd[1430]: 2025-09-10 00:14:15.890 [INFO][6152] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2d43f127c1ddba66236343c437ddd27b49d78d0eaa7075097b8975be4eb06d4c" iface="eth0" netns="" Sep 10 00:14:15.924288 containerd[1430]: 2025-09-10 00:14:15.890 [INFO][6152] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2d43f127c1ddba66236343c437ddd27b49d78d0eaa7075097b8975be4eb06d4c" Sep 10 00:14:15.924288 containerd[1430]: 2025-09-10 00:14:15.890 [INFO][6152] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2d43f127c1ddba66236343c437ddd27b49d78d0eaa7075097b8975be4eb06d4c" Sep 10 00:14:15.924288 containerd[1430]: 2025-09-10 00:14:15.908 [INFO][6165] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2d43f127c1ddba66236343c437ddd27b49d78d0eaa7075097b8975be4eb06d4c" HandleID="k8s-pod-network.2d43f127c1ddba66236343c437ddd27b49d78d0eaa7075097b8975be4eb06d4c" Workload="localhost-k8s-calico--apiserver--6bc6768489--gxbgn-eth0" Sep 10 00:14:15.924288 containerd[1430]: 2025-09-10 00:14:15.908 [INFO][6165] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:14:15.924288 containerd[1430]: 2025-09-10 00:14:15.908 [INFO][6165] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:14:15.924288 containerd[1430]: 2025-09-10 00:14:15.917 [WARNING][6165] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2d43f127c1ddba66236343c437ddd27b49d78d0eaa7075097b8975be4eb06d4c" HandleID="k8s-pod-network.2d43f127c1ddba66236343c437ddd27b49d78d0eaa7075097b8975be4eb06d4c" Workload="localhost-k8s-calico--apiserver--6bc6768489--gxbgn-eth0" Sep 10 00:14:15.924288 containerd[1430]: 2025-09-10 00:14:15.917 [INFO][6165] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2d43f127c1ddba66236343c437ddd27b49d78d0eaa7075097b8975be4eb06d4c" HandleID="k8s-pod-network.2d43f127c1ddba66236343c437ddd27b49d78d0eaa7075097b8975be4eb06d4c" Workload="localhost-k8s-calico--apiserver--6bc6768489--gxbgn-eth0" Sep 10 00:14:15.924288 containerd[1430]: 2025-09-10 00:14:15.918 [INFO][6165] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:14:15.924288 containerd[1430]: 2025-09-10 00:14:15.920 [INFO][6152] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2d43f127c1ddba66236343c437ddd27b49d78d0eaa7075097b8975be4eb06d4c" Sep 10 00:14:15.924288 containerd[1430]: time="2025-09-10T00:14:15.922528659Z" level=info msg="TearDown network for sandbox \"2d43f127c1ddba66236343c437ddd27b49d78d0eaa7075097b8975be4eb06d4c\" successfully" Sep 10 00:14:15.926908 containerd[1430]: time="2025-09-10T00:14:15.926597001Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2d43f127c1ddba66236343c437ddd27b49d78d0eaa7075097b8975be4eb06d4c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 10 00:14:15.926908 containerd[1430]: time="2025-09-10T00:14:15.926659321Z" level=info msg="RemovePodSandbox \"2d43f127c1ddba66236343c437ddd27b49d78d0eaa7075097b8975be4eb06d4c\" returns successfully" Sep 10 00:14:15.928193 containerd[1430]: time="2025-09-10T00:14:15.927644887Z" level=info msg="StopPodSandbox for \"d62eebdda38d4548d2e3f52b4f8523eec446425cf32213f102134b444b23ebf7\"" Sep 10 00:14:15.993556 containerd[1430]: 2025-09-10 00:14:15.958 [WARNING][6183] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d62eebdda38d4548d2e3f52b4f8523eec446425cf32213f102134b444b23ebf7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--fprdx-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"778ab3ef-74e3-4341-b35b-556c4e8acdd5", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 13, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"aeb3435b992ce3412be85016e50f6acc18819bc90799234f47f77ff6818d8c7a", Pod:"coredns-7c65d6cfc9-fprdx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4276c933052", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:14:15.993556 containerd[1430]: 2025-09-10 00:14:15.959 [INFO][6183] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d62eebdda38d4548d2e3f52b4f8523eec446425cf32213f102134b444b23ebf7" Sep 10 00:14:15.993556 containerd[1430]: 2025-09-10 00:14:15.959 [INFO][6183] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d62eebdda38d4548d2e3f52b4f8523eec446425cf32213f102134b444b23ebf7" iface="eth0" netns="" Sep 10 00:14:15.993556 containerd[1430]: 2025-09-10 00:14:15.959 [INFO][6183] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d62eebdda38d4548d2e3f52b4f8523eec446425cf32213f102134b444b23ebf7" Sep 10 00:14:15.993556 containerd[1430]: 2025-09-10 00:14:15.959 [INFO][6183] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d62eebdda38d4548d2e3f52b4f8523eec446425cf32213f102134b444b23ebf7" Sep 10 00:14:15.993556 containerd[1430]: 2025-09-10 00:14:15.979 [INFO][6192] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d62eebdda38d4548d2e3f52b4f8523eec446425cf32213f102134b444b23ebf7" HandleID="k8s-pod-network.d62eebdda38d4548d2e3f52b4f8523eec446425cf32213f102134b444b23ebf7" Workload="localhost-k8s-coredns--7c65d6cfc9--fprdx-eth0" Sep 10 00:14:15.993556 containerd[1430]: 2025-09-10 00:14:15.979 [INFO][6192] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 10 00:14:15.993556 containerd[1430]: 2025-09-10 00:14:15.979 [INFO][6192] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:14:15.993556 containerd[1430]: 2025-09-10 00:14:15.988 [WARNING][6192] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d62eebdda38d4548d2e3f52b4f8523eec446425cf32213f102134b444b23ebf7" HandleID="k8s-pod-network.d62eebdda38d4548d2e3f52b4f8523eec446425cf32213f102134b444b23ebf7" Workload="localhost-k8s-coredns--7c65d6cfc9--fprdx-eth0" Sep 10 00:14:15.993556 containerd[1430]: 2025-09-10 00:14:15.988 [INFO][6192] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d62eebdda38d4548d2e3f52b4f8523eec446425cf32213f102134b444b23ebf7" HandleID="k8s-pod-network.d62eebdda38d4548d2e3f52b4f8523eec446425cf32213f102134b444b23ebf7" Workload="localhost-k8s-coredns--7c65d6cfc9--fprdx-eth0" Sep 10 00:14:15.993556 containerd[1430]: 2025-09-10 00:14:15.989 [INFO][6192] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:14:15.993556 containerd[1430]: 2025-09-10 00:14:15.991 [INFO][6183] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="d62eebdda38d4548d2e3f52b4f8523eec446425cf32213f102134b444b23ebf7" Sep 10 00:14:15.993954 containerd[1430]: time="2025-09-10T00:14:15.993583886Z" level=info msg="TearDown network for sandbox \"d62eebdda38d4548d2e3f52b4f8523eec446425cf32213f102134b444b23ebf7\" successfully" Sep 10 00:14:15.993954 containerd[1430]: time="2025-09-10T00:14:15.993605966Z" level=info msg="StopPodSandbox for \"d62eebdda38d4548d2e3f52b4f8523eec446425cf32213f102134b444b23ebf7\" returns successfully" Sep 10 00:14:15.994055 containerd[1430]: time="2025-09-10T00:14:15.994029529Z" level=info msg="RemovePodSandbox for \"d62eebdda38d4548d2e3f52b4f8523eec446425cf32213f102134b444b23ebf7\"" Sep 10 00:14:15.994095 containerd[1430]: time="2025-09-10T00:14:15.994065689Z" level=info msg="Forcibly stopping sandbox \"d62eebdda38d4548d2e3f52b4f8523eec446425cf32213f102134b444b23ebf7\"" Sep 10 00:14:16.058130 containerd[1430]: 2025-09-10 00:14:16.025 [WARNING][6209] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d62eebdda38d4548d2e3f52b4f8523eec446425cf32213f102134b444b23ebf7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--fprdx-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"778ab3ef-74e3-4341-b35b-556c4e8acdd5", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 13, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"aeb3435b992ce3412be85016e50f6acc18819bc90799234f47f77ff6818d8c7a", Pod:"coredns-7c65d6cfc9-fprdx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4276c933052", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:14:16.058130 containerd[1430]: 2025-09-10 00:14:16.025 [INFO][6209] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d62eebdda38d4548d2e3f52b4f8523eec446425cf32213f102134b444b23ebf7" Sep 10 00:14:16.058130 containerd[1430]: 2025-09-10 00:14:16.025 [INFO][6209] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d62eebdda38d4548d2e3f52b4f8523eec446425cf32213f102134b444b23ebf7" iface="eth0" netns="" Sep 10 00:14:16.058130 containerd[1430]: 2025-09-10 00:14:16.025 [INFO][6209] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d62eebdda38d4548d2e3f52b4f8523eec446425cf32213f102134b444b23ebf7" Sep 10 00:14:16.058130 containerd[1430]: 2025-09-10 00:14:16.025 [INFO][6209] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d62eebdda38d4548d2e3f52b4f8523eec446425cf32213f102134b444b23ebf7" Sep 10 00:14:16.058130 containerd[1430]: 2025-09-10 00:14:16.045 [INFO][6218] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d62eebdda38d4548d2e3f52b4f8523eec446425cf32213f102134b444b23ebf7" HandleID="k8s-pod-network.d62eebdda38d4548d2e3f52b4f8523eec446425cf32213f102134b444b23ebf7" Workload="localhost-k8s-coredns--7c65d6cfc9--fprdx-eth0" Sep 10 00:14:16.058130 containerd[1430]: 2025-09-10 00:14:16.045 [INFO][6218] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:14:16.058130 containerd[1430]: 2025-09-10 00:14:16.045 [INFO][6218] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:14:16.058130 containerd[1430]: 2025-09-10 00:14:16.053 [WARNING][6218] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d62eebdda38d4548d2e3f52b4f8523eec446425cf32213f102134b444b23ebf7" HandleID="k8s-pod-network.d62eebdda38d4548d2e3f52b4f8523eec446425cf32213f102134b444b23ebf7" Workload="localhost-k8s-coredns--7c65d6cfc9--fprdx-eth0" Sep 10 00:14:16.058130 containerd[1430]: 2025-09-10 00:14:16.053 [INFO][6218] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d62eebdda38d4548d2e3f52b4f8523eec446425cf32213f102134b444b23ebf7" HandleID="k8s-pod-network.d62eebdda38d4548d2e3f52b4f8523eec446425cf32213f102134b444b23ebf7" Workload="localhost-k8s-coredns--7c65d6cfc9--fprdx-eth0" Sep 10 00:14:16.058130 containerd[1430]: 2025-09-10 00:14:16.054 [INFO][6218] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:14:16.058130 containerd[1430]: 2025-09-10 00:14:16.056 [INFO][6209] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d62eebdda38d4548d2e3f52b4f8523eec446425cf32213f102134b444b23ebf7" Sep 10 00:14:16.058561 containerd[1430]: time="2025-09-10T00:14:16.058176835Z" level=info msg="TearDown network for sandbox \"d62eebdda38d4548d2e3f52b4f8523eec446425cf32213f102134b444b23ebf7\" successfully" Sep 10 00:14:16.060914 containerd[1430]: time="2025-09-10T00:14:16.060881810Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d62eebdda38d4548d2e3f52b4f8523eec446425cf32213f102134b444b23ebf7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 10 00:14:16.060978 containerd[1430]: time="2025-09-10T00:14:16.060943730Z" level=info msg="RemovePodSandbox \"d62eebdda38d4548d2e3f52b4f8523eec446425cf32213f102134b444b23ebf7\" returns successfully" Sep 10 00:14:16.061455 containerd[1430]: time="2025-09-10T00:14:16.061389172Z" level=info msg="StopPodSandbox for \"d69084ff2923cc958b95e61a07b97aed2f0bbea5caddd3a838f8758611a00248\"" Sep 10 00:14:16.133902 containerd[1430]: 2025-09-10 00:14:16.095 [WARNING][6236] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d69084ff2923cc958b95e61a07b97aed2f0bbea5caddd3a838f8758611a00248" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--6btnb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"09c5ed53-4869-4b1c-8a65-b62ac3f88415", ResourceVersion:"1143", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 13, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4434cbe969e6287ab844dffe2ac371ff4460c05bd3d9a16caccb0b53e084b8c7", Pod:"csi-node-driver-6btnb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali808ac0ad1fd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:14:16.133902 containerd[1430]: 2025-09-10 00:14:16.095 [INFO][6236] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d69084ff2923cc958b95e61a07b97aed2f0bbea5caddd3a838f8758611a00248" Sep 10 00:14:16.133902 containerd[1430]: 2025-09-10 00:14:16.095 [INFO][6236] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d69084ff2923cc958b95e61a07b97aed2f0bbea5caddd3a838f8758611a00248" iface="eth0" netns="" Sep 10 00:14:16.133902 containerd[1430]: 2025-09-10 00:14:16.095 [INFO][6236] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d69084ff2923cc958b95e61a07b97aed2f0bbea5caddd3a838f8758611a00248" Sep 10 00:14:16.133902 containerd[1430]: 2025-09-10 00:14:16.095 [INFO][6236] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d69084ff2923cc958b95e61a07b97aed2f0bbea5caddd3a838f8758611a00248" Sep 10 00:14:16.133902 containerd[1430]: 2025-09-10 00:14:16.115 [INFO][6245] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d69084ff2923cc958b95e61a07b97aed2f0bbea5caddd3a838f8758611a00248" HandleID="k8s-pod-network.d69084ff2923cc958b95e61a07b97aed2f0bbea5caddd3a838f8758611a00248" Workload="localhost-k8s-csi--node--driver--6btnb-eth0" Sep 10 00:14:16.133902 containerd[1430]: 2025-09-10 00:14:16.115 [INFO][6245] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:14:16.133902 containerd[1430]: 2025-09-10 00:14:16.115 [INFO][6245] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:14:16.133902 containerd[1430]: 2025-09-10 00:14:16.124 [WARNING][6245] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d69084ff2923cc958b95e61a07b97aed2f0bbea5caddd3a838f8758611a00248" HandleID="k8s-pod-network.d69084ff2923cc958b95e61a07b97aed2f0bbea5caddd3a838f8758611a00248" Workload="localhost-k8s-csi--node--driver--6btnb-eth0" Sep 10 00:14:16.133902 containerd[1430]: 2025-09-10 00:14:16.124 [INFO][6245] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d69084ff2923cc958b95e61a07b97aed2f0bbea5caddd3a838f8758611a00248" HandleID="k8s-pod-network.d69084ff2923cc958b95e61a07b97aed2f0bbea5caddd3a838f8758611a00248" Workload="localhost-k8s-csi--node--driver--6btnb-eth0" Sep 10 00:14:16.133902 containerd[1430]: 2025-09-10 00:14:16.129 [INFO][6245] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:14:16.133902 containerd[1430]: 2025-09-10 00:14:16.130 [INFO][6236] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d69084ff2923cc958b95e61a07b97aed2f0bbea5caddd3a838f8758611a00248" Sep 10 00:14:16.134388 containerd[1430]: time="2025-09-10T00:14:16.134328086Z" level=info msg="TearDown network for sandbox \"d69084ff2923cc958b95e61a07b97aed2f0bbea5caddd3a838f8758611a00248\" successfully" Sep 10 00:14:16.134388 containerd[1430]: time="2025-09-10T00:14:16.134352446Z" level=info msg="StopPodSandbox for \"d69084ff2923cc958b95e61a07b97aed2f0bbea5caddd3a838f8758611a00248\" returns successfully" Sep 10 00:14:16.134895 containerd[1430]: time="2025-09-10T00:14:16.134869249Z" level=info msg="RemovePodSandbox for \"d69084ff2923cc958b95e61a07b97aed2f0bbea5caddd3a838f8758611a00248\"" Sep 10 00:14:16.134945 containerd[1430]: time="2025-09-10T00:14:16.134903329Z" level=info msg="Forcibly stopping sandbox \"d69084ff2923cc958b95e61a07b97aed2f0bbea5caddd3a838f8758611a00248\"" Sep 10 00:14:16.197285 containerd[1430]: 2025-09-10 00:14:16.166 [WARNING][6263] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d69084ff2923cc958b95e61a07b97aed2f0bbea5caddd3a838f8758611a00248" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--6btnb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"09c5ed53-4869-4b1c-8a65-b62ac3f88415", ResourceVersion:"1143", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 13, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4434cbe969e6287ab844dffe2ac371ff4460c05bd3d9a16caccb0b53e084b8c7", Pod:"csi-node-driver-6btnb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali808ac0ad1fd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:14:16.197285 containerd[1430]: 2025-09-10 00:14:16.166 [INFO][6263] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d69084ff2923cc958b95e61a07b97aed2f0bbea5caddd3a838f8758611a00248" Sep 10 00:14:16.197285 containerd[1430]: 2025-09-10 00:14:16.166 [INFO][6263] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="d69084ff2923cc958b95e61a07b97aed2f0bbea5caddd3a838f8758611a00248" iface="eth0" netns="" Sep 10 00:14:16.197285 containerd[1430]: 2025-09-10 00:14:16.166 [INFO][6263] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d69084ff2923cc958b95e61a07b97aed2f0bbea5caddd3a838f8758611a00248" Sep 10 00:14:16.197285 containerd[1430]: 2025-09-10 00:14:16.166 [INFO][6263] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d69084ff2923cc958b95e61a07b97aed2f0bbea5caddd3a838f8758611a00248" Sep 10 00:14:16.197285 containerd[1430]: 2025-09-10 00:14:16.184 [INFO][6272] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d69084ff2923cc958b95e61a07b97aed2f0bbea5caddd3a838f8758611a00248" HandleID="k8s-pod-network.d69084ff2923cc958b95e61a07b97aed2f0bbea5caddd3a838f8758611a00248" Workload="localhost-k8s-csi--node--driver--6btnb-eth0" Sep 10 00:14:16.197285 containerd[1430]: 2025-09-10 00:14:16.184 [INFO][6272] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:14:16.197285 containerd[1430]: 2025-09-10 00:14:16.184 [INFO][6272] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:14:16.197285 containerd[1430]: 2025-09-10 00:14:16.192 [WARNING][6272] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d69084ff2923cc958b95e61a07b97aed2f0bbea5caddd3a838f8758611a00248" HandleID="k8s-pod-network.d69084ff2923cc958b95e61a07b97aed2f0bbea5caddd3a838f8758611a00248" Workload="localhost-k8s-csi--node--driver--6btnb-eth0" Sep 10 00:14:16.197285 containerd[1430]: 2025-09-10 00:14:16.192 [INFO][6272] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d69084ff2923cc958b95e61a07b97aed2f0bbea5caddd3a838f8758611a00248" HandleID="k8s-pod-network.d69084ff2923cc958b95e61a07b97aed2f0bbea5caddd3a838f8758611a00248" Workload="localhost-k8s-csi--node--driver--6btnb-eth0" Sep 10 00:14:16.197285 containerd[1430]: 2025-09-10 00:14:16.193 [INFO][6272] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:14:16.197285 containerd[1430]: 2025-09-10 00:14:16.195 [INFO][6263] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d69084ff2923cc958b95e61a07b97aed2f0bbea5caddd3a838f8758611a00248" Sep 10 00:14:16.197285 containerd[1430]: time="2025-09-10T00:14:16.197047424Z" level=info msg="TearDown network for sandbox \"d69084ff2923cc958b95e61a07b97aed2f0bbea5caddd3a838f8758611a00248\" successfully" Sep 10 00:14:16.201704 containerd[1430]: time="2025-09-10T00:14:16.201660849Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d69084ff2923cc958b95e61a07b97aed2f0bbea5caddd3a838f8758611a00248\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 10 00:14:16.201775 containerd[1430]: time="2025-09-10T00:14:16.201742170Z" level=info msg="RemovePodSandbox \"d69084ff2923cc958b95e61a07b97aed2f0bbea5caddd3a838f8758611a00248\" returns successfully" Sep 10 00:14:20.881523 systemd[1]: Started sshd@17-10.0.0.106:22-10.0.0.1:36130.service - OpenSSH per-connection server daemon (10.0.0.1:36130). 
Sep 10 00:14:20.920620 sshd[6290]: Accepted publickey for core from 10.0.0.1 port 36130 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o Sep 10 00:14:20.921831 sshd[6290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:14:20.925570 systemd-logind[1417]: New session 18 of user core. Sep 10 00:14:20.933665 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 10 00:14:21.044742 sshd[6290]: pam_unix(sshd:session): session closed for user core Sep 10 00:14:21.048752 systemd[1]: sshd@17-10.0.0.106:22-10.0.0.1:36130.service: Deactivated successfully. Sep 10 00:14:21.051120 systemd[1]: session-18.scope: Deactivated successfully. Sep 10 00:14:21.052152 systemd-logind[1417]: Session 18 logged out. Waiting for processes to exit. Sep 10 00:14:21.054050 systemd-logind[1417]: Removed session 18. Sep 10 00:14:22.608380 kubelet[2471]: I0910 00:14:22.607863 2471 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 10 00:14:26.067801 systemd[1]: Started sshd@18-10.0.0.106:22-10.0.0.1:36140.service - OpenSSH per-connection server daemon (10.0.0.1:36140). Sep 10 00:14:26.120002 sshd[6313]: Accepted publickey for core from 10.0.0.1 port 36140 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o Sep 10 00:14:26.121405 sshd[6313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:14:26.125931 systemd-logind[1417]: New session 19 of user core. Sep 10 00:14:26.130661 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 10 00:14:26.299321 sshd[6313]: pam_unix(sshd:session): session closed for user core Sep 10 00:14:26.302890 systemd[1]: sshd@18-10.0.0.106:22-10.0.0.1:36140.service: Deactivated successfully. Sep 10 00:14:26.307192 systemd[1]: session-19.scope: Deactivated successfully. Sep 10 00:14:26.309114 systemd-logind[1417]: Session 19 logged out. Waiting for processes to exit. 
Sep 10 00:14:26.310724 systemd-logind[1417]: Removed session 19. Sep 10 00:14:29.828902 kubelet[2471]: E0910 00:14:29.828865 2471 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:14:31.313322 systemd[1]: Started sshd@19-10.0.0.106:22-10.0.0.1:51334.service - OpenSSH per-connection server daemon (10.0.0.1:51334). Sep 10 00:14:31.358837 sshd[6349]: Accepted publickey for core from 10.0.0.1 port 51334 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o Sep 10 00:14:31.360244 sshd[6349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:14:31.366550 systemd-logind[1417]: New session 20 of user core. Sep 10 00:14:31.376744 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 10 00:14:31.521425 sshd[6349]: pam_unix(sshd:session): session closed for user core Sep 10 00:14:31.525537 systemd[1]: sshd@19-10.0.0.106:22-10.0.0.1:51334.service: Deactivated successfully. Sep 10 00:14:31.527448 systemd[1]: session-20.scope: Deactivated successfully. Sep 10 00:14:31.528320 systemd-logind[1417]: Session 20 logged out. Waiting for processes to exit. Sep 10 00:14:31.530238 systemd-logind[1417]: Removed session 20.