Sep 10 00:16:43.837197 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 10 00:16:43.837218 kernel: Linux version 6.6.104-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Sep 9 22:41:53 -00 2025
Sep 10 00:16:43.837227 kernel: KASLR enabled
Sep 10 00:16:43.837233 kernel: efi: EFI v2.7 by EDK II
Sep 10 00:16:43.837239 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Sep 10 00:16:43.837245 kernel: random: crng init done
Sep 10 00:16:43.837252 kernel: ACPI: Early table checksum verification disabled
Sep 10 00:16:43.837257 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Sep 10 00:16:43.837264 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 10 00:16:43.837271 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:16:43.837277 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:16:43.837283 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:16:43.837289 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:16:43.837295 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:16:43.837303 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:16:43.837310 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:16:43.837317 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:16:43.837323 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:16:43.837330 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 10 00:16:43.837336 kernel: NUMA: Failed to initialise from firmware
Sep 10 00:16:43.837342 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 10 00:16:43.837349 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Sep 10 00:16:43.837355 kernel: Zone ranges:
Sep 10 00:16:43.837361 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Sep 10 00:16:43.837367 kernel: DMA32 empty
Sep 10 00:16:43.837375 kernel: Normal empty
Sep 10 00:16:43.837381 kernel: Movable zone start for each node
Sep 10 00:16:43.837387 kernel: Early memory node ranges
Sep 10 00:16:43.837394 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Sep 10 00:16:43.837400 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Sep 10 00:16:43.837406 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Sep 10 00:16:43.837412 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Sep 10 00:16:43.837419 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Sep 10 00:16:43.837425 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Sep 10 00:16:43.837431 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Sep 10 00:16:43.837437 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 10 00:16:43.837443 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 10 00:16:43.837451 kernel: psci: probing for conduit method from ACPI.
Sep 10 00:16:43.837458 kernel: psci: PSCIv1.1 detected in firmware.
Sep 10 00:16:43.837464 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 10 00:16:43.837473 kernel: psci: Trusted OS migration not required
Sep 10 00:16:43.837479 kernel: psci: SMC Calling Convention v1.1
Sep 10 00:16:43.837486 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 10 00:16:43.837502 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Sep 10 00:16:43.837509 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Sep 10 00:16:43.837516 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 10 00:16:43.837523 kernel: Detected PIPT I-cache on CPU0
Sep 10 00:16:43.837530 kernel: CPU features: detected: GIC system register CPU interface
Sep 10 00:16:43.837536 kernel: CPU features: detected: Hardware dirty bit management
Sep 10 00:16:43.837543 kernel: CPU features: detected: Spectre-v4
Sep 10 00:16:43.837550 kernel: CPU features: detected: Spectre-BHB
Sep 10 00:16:43.837557 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 10 00:16:43.837563 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 10 00:16:43.837572 kernel: CPU features: detected: ARM erratum 1418040
Sep 10 00:16:43.837579 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 10 00:16:43.837585 kernel: alternatives: applying boot alternatives
Sep 10 00:16:43.837593 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=9519a2b52292e68cf8bced92b7c71fffa7243efe8697174d43c360b4308144c8
Sep 10 00:16:43.837600 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 10 00:16:43.837607 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 10 00:16:43.837614 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 10 00:16:43.837620 kernel: Fallback order for Node 0: 0
Sep 10 00:16:43.837627 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Sep 10 00:16:43.837633 kernel: Policy zone: DMA
Sep 10 00:16:43.837640 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 10 00:16:43.837648 kernel: software IO TLB: area num 4.
Sep 10 00:16:43.837655 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Sep 10 00:16:43.837662 kernel: Memory: 2386404K/2572288K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 185884K reserved, 0K cma-reserved)
Sep 10 00:16:43.837669 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 10 00:16:43.837675 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 10 00:16:43.837683 kernel: rcu: RCU event tracing is enabled.
Sep 10 00:16:43.837689 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 10 00:16:43.837696 kernel: Trampoline variant of Tasks RCU enabled.
Sep 10 00:16:43.837703 kernel: Tracing variant of Tasks RCU enabled.
Sep 10 00:16:43.837710 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 10 00:16:43.837717 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 10 00:16:43.837724 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 10 00:16:43.837731 kernel: GICv3: 256 SPIs implemented
Sep 10 00:16:43.837738 kernel: GICv3: 0 Extended SPIs implemented
Sep 10 00:16:43.837745 kernel: Root IRQ handler: gic_handle_irq
Sep 10 00:16:43.837751 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 10 00:16:43.837758 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 10 00:16:43.837765 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 10 00:16:43.837771 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Sep 10 00:16:43.837778 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Sep 10 00:16:43.837785 kernel: GICv3: using LPI property table @0x00000000400f0000
Sep 10 00:16:43.837792 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Sep 10 00:16:43.837798 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 10 00:16:43.837806 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 10 00:16:43.837813 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 10 00:16:43.837820 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 10 00:16:43.837827 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 10 00:16:43.837843 kernel: arm-pv: using stolen time PV
Sep 10 00:16:43.837850 kernel: Console: colour dummy device 80x25
Sep 10 00:16:43.837856 kernel: ACPI: Core revision 20230628
Sep 10 00:16:43.837864 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 10 00:16:43.837871 kernel: pid_max: default: 32768 minimum: 301
Sep 10 00:16:43.837878 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 10 00:16:43.837886 kernel: landlock: Up and running.
Sep 10 00:16:43.837893 kernel: SELinux: Initializing.
Sep 10 00:16:43.837900 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 10 00:16:43.837907 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 10 00:16:43.837914 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 10 00:16:43.837921 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 10 00:16:43.837928 kernel: rcu: Hierarchical SRCU implementation.
Sep 10 00:16:43.837935 kernel: rcu: Max phase no-delay instances is 400.
Sep 10 00:16:43.837942 kernel: Platform MSI: ITS@0x8080000 domain created
Sep 10 00:16:43.837950 kernel: PCI/MSI: ITS@0x8080000 domain created
Sep 10 00:16:43.837957 kernel: Remapping and enabling EFI services.
Sep 10 00:16:43.837964 kernel: smp: Bringing up secondary CPUs ...
Sep 10 00:16:43.837970 kernel: Detected PIPT I-cache on CPU1
Sep 10 00:16:43.837977 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 10 00:16:43.837984 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Sep 10 00:16:43.837991 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 10 00:16:43.837998 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 10 00:16:43.838005 kernel: Detected PIPT I-cache on CPU2
Sep 10 00:16:43.838012 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 10 00:16:43.838020 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Sep 10 00:16:43.838027 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 10 00:16:43.838038 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 10 00:16:43.838046 kernel: Detected PIPT I-cache on CPU3
Sep 10 00:16:43.838054 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 10 00:16:43.838061 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Sep 10 00:16:43.838068 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 10 00:16:43.838075 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 10 00:16:43.838082 kernel: smp: Brought up 1 node, 4 CPUs
Sep 10 00:16:43.838091 kernel: SMP: Total of 4 processors activated.
Sep 10 00:16:43.838098 kernel: CPU features: detected: 32-bit EL0 Support
Sep 10 00:16:43.838106 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 10 00:16:43.838113 kernel: CPU features: detected: Common not Private translations
Sep 10 00:16:43.838120 kernel: CPU features: detected: CRC32 instructions
Sep 10 00:16:43.838127 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 10 00:16:43.838134 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 10 00:16:43.838142 kernel: CPU features: detected: LSE atomic instructions
Sep 10 00:16:43.838150 kernel: CPU features: detected: Privileged Access Never
Sep 10 00:16:43.838157 kernel: CPU features: detected: RAS Extension Support
Sep 10 00:16:43.838165 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 10 00:16:43.838172 kernel: CPU: All CPU(s) started at EL1
Sep 10 00:16:43.838179 kernel: alternatives: applying system-wide alternatives
Sep 10 00:16:43.838186 kernel: devtmpfs: initialized
Sep 10 00:16:43.838194 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 10 00:16:43.838201 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 10 00:16:43.838208 kernel: pinctrl core: initialized pinctrl subsystem
Sep 10 00:16:43.838217 kernel: SMBIOS 3.0.0 present.
Sep 10 00:16:43.838224 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Sep 10 00:16:43.838232 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 10 00:16:43.838239 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 10 00:16:43.838246 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 10 00:16:43.838254 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 10 00:16:43.838261 kernel: audit: initializing netlink subsys (disabled)
Sep 10 00:16:43.838268 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1
Sep 10 00:16:43.838276 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 10 00:16:43.838284 kernel: cpuidle: using governor menu
Sep 10 00:16:43.838291 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 10 00:16:43.838298 kernel: ASID allocator initialised with 32768 entries
Sep 10 00:16:43.838305 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 10 00:16:43.838313 kernel: Serial: AMBA PL011 UART driver
Sep 10 00:16:43.838320 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 10 00:16:43.838327 kernel: Modules: 0 pages in range for non-PLT usage
Sep 10 00:16:43.838335 kernel: Modules: 509008 pages in range for PLT usage
Sep 10 00:16:43.838342 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 10 00:16:43.838350 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 10 00:16:43.838357 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 10 00:16:43.838365 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 10 00:16:43.838372 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 10 00:16:43.838379 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 10 00:16:43.838386 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 10 00:16:43.838393 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 10 00:16:43.838400 kernel: ACPI: Added _OSI(Module Device)
Sep 10 00:16:43.838408 kernel: ACPI: Added _OSI(Processor Device)
Sep 10 00:16:43.838416 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 10 00:16:43.838423 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 10 00:16:43.838430 kernel: ACPI: Interpreter enabled
Sep 10 00:16:43.838437 kernel: ACPI: Using GIC for interrupt routing
Sep 10 00:16:43.838444 kernel: ACPI: MCFG table detected, 1 entries
Sep 10 00:16:43.838452 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 10 00:16:43.838459 kernel: printk: console [ttyAMA0] enabled
Sep 10 00:16:43.838466 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 10 00:16:43.838598 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 10 00:16:43.838674 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 10 00:16:43.838739 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 10 00:16:43.838803 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 10 00:16:43.838956 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 10 00:16:43.838968 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 10 00:16:43.838976 kernel: PCI host bridge to bus 0000:00
Sep 10 00:16:43.839047 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 10 00:16:43.839111 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 10 00:16:43.839168 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 10 00:16:43.839225 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 10 00:16:43.839304 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Sep 10 00:16:43.839383 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Sep 10 00:16:43.839449 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Sep 10 00:16:43.839533 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Sep 10 00:16:43.839600 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 10 00:16:43.839665 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 10 00:16:43.839729 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Sep 10 00:16:43.839793 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Sep 10 00:16:43.839873 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 10 00:16:43.839949 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 10 00:16:43.840011 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 10 00:16:43.840021 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 10 00:16:43.840029 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 10 00:16:43.840037 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 10 00:16:43.840044 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 10 00:16:43.840051 kernel: iommu: Default domain type: Translated
Sep 10 00:16:43.840059 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 10 00:16:43.840066 kernel: efivars: Registered efivars operations
Sep 10 00:16:43.840075 kernel: vgaarb: loaded
Sep 10 00:16:43.840083 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 10 00:16:43.840090 kernel: VFS: Disk quotas dquot_6.6.0
Sep 10 00:16:43.840097 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 10 00:16:43.840105 kernel: pnp: PnP ACPI init
Sep 10 00:16:43.840175 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 10 00:16:43.840186 kernel: pnp: PnP ACPI: found 1 devices
Sep 10 00:16:43.840193 kernel: NET: Registered PF_INET protocol family
Sep 10 00:16:43.840200 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 10 00:16:43.840210 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 10 00:16:43.840217 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 10 00:16:43.840225 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 10 00:16:43.840232 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 10 00:16:43.840240 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 10 00:16:43.840247 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 10 00:16:43.840254 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 10 00:16:43.840262 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 10 00:16:43.840270 kernel: PCI: CLS 0 bytes, default 64
Sep 10 00:16:43.840278 kernel: kvm [1]: HYP mode not available
Sep 10 00:16:43.840285 kernel: Initialise system trusted keyrings
Sep 10 00:16:43.840292 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 10 00:16:43.840300 kernel: Key type asymmetric registered
Sep 10 00:16:43.840307 kernel: Asymmetric key parser 'x509' registered
Sep 10 00:16:43.840314 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 10 00:16:43.840322 kernel: io scheduler mq-deadline registered
Sep 10 00:16:43.840329 kernel: io scheduler kyber registered
Sep 10 00:16:43.840336 kernel: io scheduler bfq registered
Sep 10 00:16:43.840345 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 10 00:16:43.840353 kernel: ACPI: button: Power Button [PWRB]
Sep 10 00:16:43.840361 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 10 00:16:43.840425 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 10 00:16:43.840435 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 10 00:16:43.840442 kernel: thunder_xcv, ver 1.0
Sep 10 00:16:43.840449 kernel: thunder_bgx, ver 1.0
Sep 10 00:16:43.840457 kernel: nicpf, ver 1.0
Sep 10 00:16:43.840464 kernel: nicvf, ver 1.0
Sep 10 00:16:43.840547 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 10 00:16:43.840616 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-10T00:16:43 UTC (1757463403)
Sep 10 00:16:43.840627 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 10 00:16:43.840634 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Sep 10 00:16:43.840642 kernel: watchdog: Delayed init of the lockup detector failed: -19
Sep 10 00:16:43.840649 kernel: watchdog: Hard watchdog permanently disabled
Sep 10 00:16:43.840656 kernel: NET: Registered PF_INET6 protocol family
Sep 10 00:16:43.840663 kernel: Segment Routing with IPv6
Sep 10 00:16:43.840673 kernel: In-situ OAM (IOAM) with IPv6
Sep 10 00:16:43.840680 kernel: NET: Registered PF_PACKET protocol family
Sep 10 00:16:43.840687 kernel: Key type dns_resolver registered
Sep 10 00:16:43.840694 kernel: registered taskstats version 1
Sep 10 00:16:43.840701 kernel: Loading compiled-in X.509 certificates
Sep 10 00:16:43.840709 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.104-flatcar: e85a1044dffeb2f9696d4659bfe36fdfbb79b10c'
Sep 10 00:16:43.840716 kernel: Key type .fscrypt registered
Sep 10 00:16:43.840723 kernel: Key type fscrypt-provisioning registered
Sep 10 00:16:43.840730 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 10 00:16:43.840738 kernel: ima: Allocated hash algorithm: sha1
Sep 10 00:16:43.840746 kernel: ima: No architecture policies found
Sep 10 00:16:43.840753 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 10 00:16:43.840760 kernel: clk: Disabling unused clocks
Sep 10 00:16:43.840767 kernel: Freeing unused kernel memory: 39424K
Sep 10 00:16:43.840774 kernel: Run /init as init process
Sep 10 00:16:43.840782 kernel: with arguments:
Sep 10 00:16:43.840788 kernel: /init
Sep 10 00:16:43.840795 kernel: with environment:
Sep 10 00:16:43.840803 kernel: HOME=/
Sep 10 00:16:43.840811 kernel: TERM=linux
Sep 10 00:16:43.840818 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 10 00:16:43.840827 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 10 00:16:43.840855 systemd[1]: Detected virtualization kvm.
Sep 10 00:16:43.840874 systemd[1]: Detected architecture arm64.
Sep 10 00:16:43.840882 systemd[1]: Running in initrd.
Sep 10 00:16:43.840892 systemd[1]: No hostname configured, using default hostname.
Sep 10 00:16:43.840899 systemd[1]: Hostname set to <localhost>.
Sep 10 00:16:43.840907 systemd[1]: Initializing machine ID from VM UUID.
Sep 10 00:16:43.840915 systemd[1]: Queued start job for default target initrd.target.
Sep 10 00:16:43.840923 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 10 00:16:43.840931 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 10 00:16:43.840939 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 10 00:16:43.840947 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 10 00:16:43.840955 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 10 00:16:43.840963 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 10 00:16:43.840973 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 10 00:16:43.840981 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 10 00:16:43.840989 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 10 00:16:43.840996 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 10 00:16:43.841004 systemd[1]: Reached target paths.target - Path Units.
Sep 10 00:16:43.841013 systemd[1]: Reached target slices.target - Slice Units.
Sep 10 00:16:43.841020 systemd[1]: Reached target swap.target - Swaps.
Sep 10 00:16:43.841028 systemd[1]: Reached target timers.target - Timer Units.
Sep 10 00:16:43.841036 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 10 00:16:43.841044 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 10 00:16:43.841051 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 10 00:16:43.841059 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 10 00:16:43.841067 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 10 00:16:43.841075 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 10 00:16:43.841084 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 10 00:16:43.841092 systemd[1]: Reached target sockets.target - Socket Units.
Sep 10 00:16:43.841100 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 10 00:16:43.841108 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 10 00:16:43.841115 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 10 00:16:43.841123 systemd[1]: Starting systemd-fsck-usr.service...
Sep 10 00:16:43.841131 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 10 00:16:43.841139 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 10 00:16:43.841148 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 10 00:16:43.841155 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 10 00:16:43.841163 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 10 00:16:43.841171 systemd[1]: Finished systemd-fsck-usr.service.
Sep 10 00:16:43.841179 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 10 00:16:43.841204 systemd-journald[239]: Collecting audit messages is disabled.
Sep 10 00:16:43.841223 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 10 00:16:43.841232 systemd-journald[239]: Journal started
Sep 10 00:16:43.841252 systemd-journald[239]: Runtime Journal (/run/log/journal/f4b5cc37dc6c4e49af243a8befad27fb) is 5.9M, max 47.3M, 41.4M free.
Sep 10 00:16:43.832773 systemd-modules-load[240]: Inserted module 'overlay'
Sep 10 00:16:43.844453 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 10 00:16:43.844487 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 10 00:16:43.845206 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 10 00:16:43.846863 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 10 00:16:43.849333 systemd-modules-load[240]: Inserted module 'br_netfilter'
Sep 10 00:16:43.850042 kernel: Bridge firewalling registered
Sep 10 00:16:43.849878 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 10 00:16:43.851362 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 10 00:16:43.852814 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 10 00:16:43.856401 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 10 00:16:43.860530 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 10 00:16:43.865101 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 10 00:16:43.869322 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 10 00:16:43.875979 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 10 00:16:43.876863 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 10 00:16:43.879697 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 10 00:16:43.899139 dracut-cmdline[280]: dracut-dracut-053
Sep 10 00:16:43.900938 dracut-cmdline[280]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=9519a2b52292e68cf8bced92b7c71fffa7243efe8697174d43c360b4308144c8
Sep 10 00:16:43.900798 systemd-resolved[276]: Positive Trust Anchors:
Sep 10 00:16:43.900807 systemd-resolved[276]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 10 00:16:43.900851 systemd-resolved[276]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 10 00:16:43.905488 systemd-resolved[276]: Defaulting to hostname 'linux'.
Sep 10 00:16:43.906565 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 10 00:16:43.912875 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 10 00:16:43.969851 kernel: SCSI subsystem initialized
Sep 10 00:16:43.973844 kernel: Loading iSCSI transport class v2.0-870.
Sep 10 00:16:43.981871 kernel: iscsi: registered transport (tcp)
Sep 10 00:16:43.994003 kernel: iscsi: registered transport (qla4xxx)
Sep 10 00:16:43.994035 kernel: QLogic iSCSI HBA Driver
Sep 10 00:16:44.034726 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 10 00:16:44.045001 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 10 00:16:44.060546 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 10 00:16:44.060608 kernel: device-mapper: uevent: version 1.0.3
Sep 10 00:16:44.060619 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 10 00:16:44.105853 kernel: raid6: neonx8 gen() 15776 MB/s
Sep 10 00:16:44.121847 kernel: raid6: neonx4 gen() 15692 MB/s
Sep 10 00:16:44.138842 kernel: raid6: neonx2 gen() 13207 MB/s
Sep 10 00:16:44.155843 kernel: raid6: neonx1 gen() 10502 MB/s
Sep 10 00:16:44.172843 kernel: raid6: int64x8 gen() 6959 MB/s
Sep 10 00:16:44.189843 kernel: raid6: int64x4 gen() 7340 MB/s
Sep 10 00:16:44.206842 kernel: raid6: int64x2 gen() 6124 MB/s
Sep 10 00:16:44.223842 kernel: raid6: int64x1 gen() 5058 MB/s
Sep 10 00:16:44.223862 kernel: raid6: using algorithm neonx8 gen() 15776 MB/s
Sep 10 00:16:44.240851 kernel: raid6: .... xor() 12028 MB/s, rmw enabled
Sep 10 00:16:44.240864 kernel: raid6: using neon recovery algorithm
Sep 10 00:16:44.245940 kernel: xor: measuring software checksum speed
Sep 10 00:16:44.245963 kernel: 8regs : 19231 MB/sec
Sep 10 00:16:44.246982 kernel: 32regs : 19664 MB/sec
Sep 10 00:16:44.246996 kernel: arm64_neon : 26171 MB/sec
Sep 10 00:16:44.247005 kernel: xor: using function: arm64_neon (26171 MB/sec)
Sep 10 00:16:44.294859 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 10 00:16:44.305815 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 10 00:16:44.322979 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 10 00:16:44.334363 systemd-udevd[461]: Using default interface naming scheme 'v255'.
Sep 10 00:16:44.337548 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 10 00:16:44.345987 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 10 00:16:44.357020 dracut-pre-trigger[469]: rd.md=0: removing MD RAID activation
Sep 10 00:16:44.381301 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 10 00:16:44.391973 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 10 00:16:44.430758 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 10 00:16:44.439018 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 10 00:16:44.450991 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 10 00:16:44.452222 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 10 00:16:44.454757 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 10 00:16:44.456637 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 10 00:16:44.467005 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 10 00:16:44.476009 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Sep 10 00:16:44.476581 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 10 00:16:44.478165 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 10 00:16:44.478196 kernel: GPT:9289727 != 19775487
Sep 10 00:16:44.478206 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 10 00:16:44.478973 kernel: GPT:9289727 != 19775487
Sep 10 00:16:44.479000 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 10 00:16:44.479841 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 10 00:16:44.481323 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 10 00:16:44.484559 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 10 00:16:44.484659 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 10 00:16:44.487693 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 10 00:16:44.488713 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 10 00:16:44.488891 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 10 00:16:44.490841 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 10 00:16:44.499857 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (522)
Sep 10 00:16:44.501854 kernel: BTRFS: device fsid 56932cd9-691c-4ccb-8da6-e6508edf5f69 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (523)
Sep 10 00:16:44.503125 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 10 00:16:44.513518 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 10 00:16:44.515553 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 10 00:16:44.523779 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 10 00:16:44.532621 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 10 00:16:44.533610 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 10 00:16:44.538893 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 10 00:16:44.552945 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 10 00:16:44.556982 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 10 00:16:44.559399 disk-uuid[549]: Primary Header is updated.
Sep 10 00:16:44.559399 disk-uuid[549]: Secondary Entries is updated.
Sep 10 00:16:44.559399 disk-uuid[549]: Secondary Header is updated.
Sep 10 00:16:44.561885 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 10 00:16:44.579985 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 10 00:16:45.570891 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 10 00:16:45.570995 disk-uuid[551]: The operation has completed successfully.
Sep 10 00:16:45.588115 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 10 00:16:45.588235 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 10 00:16:45.615984 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 10 00:16:45.618719 sh[574]: Success
Sep 10 00:16:45.627847 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Sep 10 00:16:45.653153 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 10 00:16:45.661072 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 10 00:16:45.663868 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 10 00:16:45.672510 kernel: BTRFS info (device dm-0): first mount of filesystem 56932cd9-691c-4ccb-8da6-e6508edf5f69
Sep 10 00:16:45.672552 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 10 00:16:45.672572 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 10 00:16:45.672592 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 10 00:16:45.673163 kernel: BTRFS info (device dm-0): using free space tree
Sep 10 00:16:45.677080 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 10 00:16:45.678161 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 10 00:16:45.678875 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 10 00:16:45.681132 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 10 00:16:45.690145 kernel: BTRFS info (device vda6): first mount of filesystem 1f9a2be6-c1a7-433d-9dbe-1e5d2ce6fc09
Sep 10 00:16:45.690180 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 10 00:16:45.690190 kernel: BTRFS info (device vda6): using free space tree
Sep 10 00:16:45.692861 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 10 00:16:45.700815 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 10 00:16:45.701918 kernel: BTRFS info (device vda6): last unmount of filesystem 1f9a2be6-c1a7-433d-9dbe-1e5d2ce6fc09
Sep 10 00:16:45.707382 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 10 00:16:45.717961 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 10 00:16:45.777480 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 10 00:16:45.787171 ignition[669]: Ignition 2.19.0
Sep 10 00:16:45.787180 ignition[669]: Stage: fetch-offline
Sep 10 00:16:45.787996 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 10 00:16:45.787217 ignition[669]: no configs at "/usr/lib/ignition/base.d"
Sep 10 00:16:45.787225 ignition[669]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 00:16:45.787379 ignition[669]: parsed url from cmdline: ""
Sep 10 00:16:45.787382 ignition[669]: no config URL provided
Sep 10 00:16:45.787387 ignition[669]: reading system config file "/usr/lib/ignition/user.ign"
Sep 10 00:16:45.787393 ignition[669]: no config at "/usr/lib/ignition/user.ign"
Sep 10 00:16:45.787416 ignition[669]: op(1): [started] loading QEMU firmware config module
Sep 10 00:16:45.787421 ignition[669]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 10 00:16:45.799948 ignition[669]: op(1): [finished] loading QEMU firmware config module
Sep 10 00:16:45.808604 systemd-networkd[764]: lo: Link UP
Sep 10 00:16:45.808615 systemd-networkd[764]: lo: Gained carrier
Sep 10 00:16:45.809261 systemd-networkd[764]: Enumeration completed
Sep 10 00:16:45.809359 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 10 00:16:45.809676 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 10 00:16:45.809679 systemd-networkd[764]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 10 00:16:45.810378 systemd-networkd[764]: eth0: Link UP
Sep 10 00:16:45.810381 systemd-networkd[764]: eth0: Gained carrier
Sep 10 00:16:45.810387 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 10 00:16:45.810763 systemd[1]: Reached target network.target - Network.
Sep 10 00:16:45.830866 systemd-networkd[764]: eth0: DHCPv4 address 10.0.0.124/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 10 00:16:45.852289 ignition[669]: parsing config with SHA512: 4fba4a6efe628b2eb2234fcd4b228c922ecd6e8638e6921c23ccefcb7eb4fa35eb56baec3d9879bfbd9b7886da1811938afd74d9284ee423744d0776dd7c8e14
Sep 10 00:16:45.856412 unknown[669]: fetched base config from "system"
Sep 10 00:16:45.856420 unknown[669]: fetched user config from "qemu"
Sep 10 00:16:45.858111 ignition[669]: fetch-offline: fetch-offline passed
Sep 10 00:16:45.858211 ignition[669]: Ignition finished successfully
Sep 10 00:16:45.859635 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 10 00:16:45.861422 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 10 00:16:45.868001 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 10 00:16:45.877917 ignition[770]: Ignition 2.19.0
Sep 10 00:16:45.877927 ignition[770]: Stage: kargs
Sep 10 00:16:45.878081 ignition[770]: no configs at "/usr/lib/ignition/base.d"
Sep 10 00:16:45.878089 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 00:16:45.878926 ignition[770]: kargs: kargs passed
Sep 10 00:16:45.881992 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 10 00:16:45.878971 ignition[770]: Ignition finished successfully
Sep 10 00:16:45.889991 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 10 00:16:45.900214 ignition[778]: Ignition 2.19.0
Sep 10 00:16:45.900227 ignition[778]: Stage: disks
Sep 10 00:16:45.900384 ignition[778]: no configs at "/usr/lib/ignition/base.d"
Sep 10 00:16:45.900393 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 00:16:45.901245 ignition[778]: disks: disks passed
Sep 10 00:16:45.903084 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 10 00:16:45.901286 ignition[778]: Ignition finished successfully
Sep 10 00:16:45.903987 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 10 00:16:45.905339 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 10 00:16:45.906715 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 10 00:16:45.908150 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 10 00:16:45.909561 systemd[1]: Reached target basic.target - Basic System.
Sep 10 00:16:45.921971 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 10 00:16:45.930904 systemd-fsck[789]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 10 00:16:45.934722 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 10 00:16:45.937923 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 10 00:16:45.981705 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 10 00:16:45.982389 kernel: EXT4-fs (vda9): mounted filesystem 43028332-c79c-426f-8992-528d495eb356 r/w with ordered data mode. Quota mode: none.
Sep 10 00:16:45.983418 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 10 00:16:45.995919 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 10 00:16:45.997344 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 10 00:16:45.998324 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 10 00:16:45.998405 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 10 00:16:45.998430 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 10 00:16:46.007091 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (797)
Sep 10 00:16:46.007111 kernel: BTRFS info (device vda6): first mount of filesystem 1f9a2be6-c1a7-433d-9dbe-1e5d2ce6fc09
Sep 10 00:16:46.007121 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 10 00:16:46.007131 kernel: BTRFS info (device vda6): using free space tree
Sep 10 00:16:46.004447 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 10 00:16:46.008922 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 10 00:16:46.011375 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 10 00:16:46.012450 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 10 00:16:46.042201 initrd-setup-root[822]: cut: /sysroot/etc/passwd: No such file or directory
Sep 10 00:16:46.045741 initrd-setup-root[829]: cut: /sysroot/etc/group: No such file or directory
Sep 10 00:16:46.049594 initrd-setup-root[836]: cut: /sysroot/etc/shadow: No such file or directory
Sep 10 00:16:46.053268 initrd-setup-root[843]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 10 00:16:46.114751 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 10 00:16:46.121937 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 10 00:16:46.123184 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 10 00:16:46.127868 kernel: BTRFS info (device vda6): last unmount of filesystem 1f9a2be6-c1a7-433d-9dbe-1e5d2ce6fc09
Sep 10 00:16:46.140388 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 10 00:16:46.145522 ignition[911]: INFO : Ignition 2.19.0
Sep 10 00:16:46.145522 ignition[911]: INFO : Stage: mount
Sep 10 00:16:46.145522 ignition[911]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 10 00:16:46.145522 ignition[911]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 00:16:46.145522 ignition[911]: INFO : mount: mount passed
Sep 10 00:16:46.145522 ignition[911]: INFO : Ignition finished successfully
Sep 10 00:16:46.147019 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 10 00:16:46.158943 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 10 00:16:46.671046 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 10 00:16:46.679983 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 10 00:16:46.684844 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (925)
Sep 10 00:16:46.687119 kernel: BTRFS info (device vda6): first mount of filesystem 1f9a2be6-c1a7-433d-9dbe-1e5d2ce6fc09
Sep 10 00:16:46.687151 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 10 00:16:46.687171 kernel: BTRFS info (device vda6): using free space tree
Sep 10 00:16:46.688852 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 10 00:16:46.689930 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 10 00:16:46.704289 ignition[942]: INFO : Ignition 2.19.0
Sep 10 00:16:46.704289 ignition[942]: INFO : Stage: files
Sep 10 00:16:46.705562 ignition[942]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 10 00:16:46.705562 ignition[942]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 00:16:46.705562 ignition[942]: DEBUG : files: compiled without relabeling support, skipping
Sep 10 00:16:46.708151 ignition[942]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 10 00:16:46.708151 ignition[942]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 10 00:16:46.711104 ignition[942]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 10 00:16:46.712172 ignition[942]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 10 00:16:46.712172 ignition[942]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 10 00:16:46.711546 unknown[942]: wrote ssh authorized keys file for user: core
Sep 10 00:16:46.715032 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 10 00:16:46.715032 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Sep 10 00:16:46.753949 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 10 00:16:46.885993 systemd-networkd[764]: eth0: Gained IPv6LL
Sep 10 00:16:46.985690 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 10 00:16:46.985690 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 10 00:16:46.988818 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 10 00:16:46.988818 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 10 00:16:46.988818 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 10 00:16:46.988818 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 10 00:16:46.988818 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 10 00:16:46.988818 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 10 00:16:46.988818 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 10 00:16:46.988818 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 10 00:16:46.988818 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 10 00:16:46.988818 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 10 00:16:46.988818 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 10 00:16:46.988818 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 10 00:16:46.988818 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Sep 10 00:16:47.429613 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 10 00:16:47.882456 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 10 00:16:47.882456 ignition[942]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 10 00:16:47.885776 ignition[942]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 10 00:16:47.885776 ignition[942]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 10 00:16:47.885776 ignition[942]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 10 00:16:47.885776 ignition[942]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Sep 10 00:16:47.885776 ignition[942]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 10 00:16:47.885776 ignition[942]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 10 00:16:47.885776 ignition[942]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Sep 10 00:16:47.885776 ignition[942]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Sep 10 00:16:47.902155 ignition[942]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 10 00:16:47.906226 ignition[942]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 10 00:16:47.907406 ignition[942]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 10 00:16:47.907406 ignition[942]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Sep 10 00:16:47.907406 ignition[942]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Sep 10 00:16:47.907406 ignition[942]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 10 00:16:47.907406 ignition[942]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 10 00:16:47.907406 ignition[942]: INFO : files: files passed
Sep 10 00:16:47.907406 ignition[942]: INFO : Ignition finished successfully
Sep 10 00:16:47.908870 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 10 00:16:47.919970 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 10 00:16:47.922787 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 10 00:16:47.925597 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 10 00:16:47.925695 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 10 00:16:47.929132 initrd-setup-root-after-ignition[971]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 10 00:16:47.931122 initrd-setup-root-after-ignition[973]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 10 00:16:47.931122 initrd-setup-root-after-ignition[973]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 10 00:16:47.933713 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 10 00:16:47.933412 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 10 00:16:47.934903 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 10 00:16:47.942959 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 10 00:16:47.962149 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 10 00:16:47.963877 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 10 00:16:47.964973 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 10 00:16:47.966540 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 10 00:16:47.967886 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 10 00:16:47.968640 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 10 00:16:47.982775 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 10 00:16:47.984947 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 10 00:16:47.995281 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 10 00:16:47.996341 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 10 00:16:47.998008 systemd[1]: Stopped target timers.target - Timer Units.
Sep 10 00:16:47.999521 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 10 00:16:47.999622 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 10 00:16:48.001852 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 10 00:16:48.003606 systemd[1]: Stopped target basic.target - Basic System.
Sep 10 00:16:48.004983 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 10 00:16:48.006430 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 10 00:16:48.008041 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 10 00:16:48.009737 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 10 00:16:48.011342 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 10 00:16:48.012947 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 10 00:16:48.014733 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 10 00:16:48.016265 systemd[1]: Stopped target swap.target - Swaps.
Sep 10 00:16:48.017520 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 10 00:16:48.017620 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 10 00:16:48.019610 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 10 00:16:48.021352 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 10 00:16:48.022958 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 10 00:16:48.023040 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 10 00:16:48.024908 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 10 00:16:48.025005 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 10 00:16:48.027558 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 10 00:16:48.027665 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 10 00:16:48.029290 systemd[1]: Stopped target paths.target - Path Units. Sep 10 00:16:48.030598 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 10 00:16:48.033888 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 10 00:16:48.035044 systemd[1]: Stopped target slices.target - Slice Units. Sep 10 00:16:48.036731 systemd[1]: Stopped target sockets.target - Socket Units. Sep 10 00:16:48.038097 systemd[1]: iscsid.socket: Deactivated successfully. Sep 10 00:16:48.038174 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 10 00:16:48.039513 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 10 00:16:48.039591 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 10 00:16:48.040948 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 10 00:16:48.041045 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 10 00:16:48.042687 systemd[1]: ignition-files.service: Deactivated successfully. Sep 10 00:16:48.042780 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 10 00:16:48.051975 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 10 00:16:48.053310 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 10 00:16:48.054243 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 10 00:16:48.054367 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 10 00:16:48.055926 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 10 00:16:48.056031 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 10 00:16:48.061161 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 10 00:16:48.061248 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 10 00:16:48.065668 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 10 00:16:48.066908 ignition[997]: INFO : Ignition 2.19.0 Sep 10 00:16:48.066908 ignition[997]: INFO : Stage: umount Sep 10 00:16:48.066908 ignition[997]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 10 00:16:48.066908 ignition[997]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 10 00:16:48.072719 ignition[997]: INFO : umount: umount passed Sep 10 00:16:48.072719 ignition[997]: INFO : Ignition finished successfully Sep 10 00:16:48.069661 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 10 00:16:48.069747 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 10 00:16:48.070886 systemd[1]: Stopped target network.target - Network. Sep 10 00:16:48.073526 systemd[1]: ignition-disks.service: Deactivated successfully. 
Sep 10 00:16:48.073585 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 10 00:16:48.074867 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 10 00:16:48.074903 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 10 00:16:48.076341 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 10 00:16:48.076381 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 10 00:16:48.077744 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 10 00:16:48.077779 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 10 00:16:48.079491 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 10 00:16:48.080769 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 10 00:16:48.089876 systemd-networkd[764]: eth0: DHCPv6 lease lost Sep 10 00:16:48.091250 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 10 00:16:48.091361 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 10 00:16:48.093089 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 10 00:16:48.093118 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 10 00:16:48.103922 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 10 00:16:48.104620 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 10 00:16:48.104669 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 10 00:16:48.106514 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 10 00:16:48.108789 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 10 00:16:48.108890 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 10 00:16:48.112751 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 10 00:16:48.112844 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 10 00:16:48.114512 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 10 00:16:48.114557 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 10 00:16:48.115955 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 10 00:16:48.115994 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 10 00:16:48.117973 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 10 00:16:48.118697 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 10 00:16:48.122074 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 10 00:16:48.122152 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 10 00:16:48.124400 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 10 00:16:48.124461 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 10 00:16:48.125658 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 10 00:16:48.125688 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 10 00:16:48.127227 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 10 00:16:48.127267 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 10 00:16:48.129646 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 10 00:16:48.129686 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. 
Sep 10 00:16:48.131847 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 10 00:16:48.131888 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 10 00:16:48.144976 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 10 00:16:48.145857 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 10 00:16:48.145906 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 10 00:16:48.147663 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 10 00:16:48.147701 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 10 00:16:48.149224 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 10 00:16:48.149260 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 10 00:16:48.150845 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 10 00:16:48.150884 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 10 00:16:48.152753 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 10 00:16:48.152858 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 10 00:16:48.154323 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 10 00:16:48.154386 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 10 00:16:48.156241 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 10 00:16:48.157333 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 10 00:16:48.157394 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 10 00:16:48.159414 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 10 00:16:48.168756 systemd[1]: Switching root. Sep 10 00:16:48.200702 systemd-journald[239]: Journal stopped Sep 10 00:16:48.866054 systemd-journald[239]: Received SIGTERM from PID 1 (systemd). Sep 10 00:16:48.866111 kernel: SELinux: policy capability network_peer_controls=1 Sep 10 00:16:48.866126 kernel: SELinux: policy capability open_perms=1 Sep 10 00:16:48.866136 kernel: SELinux: policy capability extended_socket_class=1 Sep 10 00:16:48.866146 kernel: SELinux: policy capability always_check_network=0 Sep 10 00:16:48.866155 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 10 00:16:48.866165 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 10 00:16:48.866175 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 10 00:16:48.866187 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 10 00:16:48.866201 kernel: audit: type=1403 audit(1757463408.339:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 10 00:16:48.866212 systemd[1]: Successfully loaded SELinux policy in 28.985ms. Sep 10 00:16:48.866230 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.239ms. Sep 10 00:16:48.866241 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 10 00:16:48.866252 systemd[1]: Detected virtualization kvm. Sep 10 00:16:48.866263 systemd[1]: Detected architecture arm64. 
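The SELinux lines above (policy loaded in ~29 ms, plus the policy capability flags) can be confirmed from the booted system without libselinux, via the selinuxfs interface. A minimal sketch, assuming selinuxfs is mounted at its usual /sys/fs/selinux location:

    from pathlib import Path

    sefs = Path("/sys/fs/selinux")
    if not sefs.is_dir():
        print("SELinux not enabled")
    else:
        enforce = (sefs / "enforce").read_text().strip()
        print("mode:", "enforcing" if enforce == "1" else "permissive")
        # The capability flags logged by the kernel (network_peer_controls,
        # open_perms, ...) are exposed here as 0/1 files:
        for cap in sorted((sefs / "policy_capabilities").iterdir()):
            print(cap.name, "=", cap.read_text().strip())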
Sep 10 00:16:48.866273 systemd[1]: Detected first boot. Sep 10 00:16:48.866283 systemd[1]: Initializing machine ID from VM UUID. Sep 10 00:16:48.866293 zram_generator::config[1045]: No configuration found. Sep 10 00:16:48.866304 systemd[1]: Populated /etc with preset unit settings. Sep 10 00:16:48.866316 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 10 00:16:48.866328 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 10 00:16:48.866339 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 10 00:16:48.866349 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 10 00:16:48.866360 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 10 00:16:48.866371 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 10 00:16:48.866381 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 10 00:16:48.866392 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 10 00:16:48.866404 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 10 00:16:48.866414 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 10 00:16:48.866425 systemd[1]: Created slice user.slice - User and Session Slice. Sep 10 00:16:48.866435 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 10 00:16:48.866446 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 10 00:16:48.866456 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 10 00:16:48.866466 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 10 00:16:48.866485 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 10 00:16:48.866497 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 10 00:16:48.866513 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Sep 10 00:16:48.866525 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 10 00:16:48.866535 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 10 00:16:48.866545 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 10 00:16:48.866556 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 10 00:16:48.866568 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 10 00:16:48.866578 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 10 00:16:48.866589 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 10 00:16:48.866601 systemd[1]: Reached target slices.target - Slice Units. Sep 10 00:16:48.866611 systemd[1]: Reached target swap.target - Swaps. Sep 10 00:16:48.866622 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 10 00:16:48.866634 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 10 00:16:48.866655 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 10 00:16:48.866670 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
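"Initializing machine ID from VM UUID" means systemd can seed /etc/machine-id on first boot from the hypervisor-provided DMI product UUID (lowercased, dashes stripped). A sketch of that relationship, assuming the common sysfs path and root privileges to read it:

    from pathlib import Path

    dmi_uuid = Path("/sys/class/dmi/id/product_uuid").read_text().strip()
    machine_id = Path("/etc/machine-id").read_text().strip()

    print("VM UUID   :", dmi_uuid)
    print("machine-id:", machine_id)
    print("derived from VM UUID:", machine_id == dmi_uuid.lower().replace("-", ""))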
Sep 10 00:16:48.866681 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 10 00:16:48.866691 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 10 00:16:48.866702 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 10 00:16:48.866713 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 10 00:16:48.866724 systemd[1]: Mounting media.mount - External Media Directory... Sep 10 00:16:48.866734 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 10 00:16:48.866745 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 10 00:16:48.866756 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 10 00:16:48.866776 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 10 00:16:48.866787 systemd[1]: Reached target machines.target - Containers. Sep 10 00:16:48.866797 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 10 00:16:48.866811 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 10 00:16:48.866825 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 10 00:16:48.866842 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 10 00:16:48.866854 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 10 00:16:48.866865 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 10 00:16:48.866875 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 10 00:16:48.866885 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 10 00:16:48.866896 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 10 00:16:48.866907 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 10 00:16:48.866919 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 10 00:16:48.866930 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 10 00:16:48.866940 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 10 00:16:48.866950 systemd[1]: Stopped systemd-fsck-usr.service. Sep 10 00:16:48.866961 kernel: fuse: init (API version 7.39) Sep 10 00:16:48.866970 kernel: loop: module loaded Sep 10 00:16:48.866981 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 10 00:16:48.866992 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 10 00:16:48.867002 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 10 00:16:48.867014 kernel: ACPI: bus type drm_connector registered Sep 10 00:16:48.867024 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 10 00:16:48.867035 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 10 00:16:48.867061 systemd-journald[1112]: Collecting audit messages is disabled. Sep 10 00:16:48.867085 systemd[1]: verity-setup.service: Deactivated successfully. Sep 10 00:16:48.867096 systemd[1]: Stopped verity-setup.service. 
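Several of the kernel lines interleaved above (fuse init, loop module loaded, drm_connector bus type) are the direct output of those modprobe@ template units. A quick check that the targeted modules ended up available, whether loaded as modules or built in (built-ins only appear under /sys/module when they register parameters or attributes):

    from pathlib import Path

    loaded = {line.split()[0] for line in Path("/proc/modules").read_text().splitlines()}
    for name in ("configfs", "dm_mod", "drm", "efi_pstore", "fuse", "loop"):
        if name in loaded:
            state = "loaded"
        elif Path(f"/sys/module/{name}").is_dir():
            state = "built-in (or registered)"
        else:
            state = "absent"
        print(f"{name:10s} {state}")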
Sep 10 00:16:48.867106 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 10 00:16:48.867116 systemd-journald[1112]: Journal started Sep 10 00:16:48.867138 systemd-journald[1112]: Runtime Journal (/run/log/journal/f4b5cc37dc6c4e49af243a8befad27fb) is 5.9M, max 47.3M, 41.4M free. Sep 10 00:16:48.675514 systemd[1]: Queued start job for default target multi-user.target. Sep 10 00:16:48.869879 systemd[1]: Started systemd-journald.service - Journal Service. Sep 10 00:16:48.702739 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 10 00:16:48.703095 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 10 00:16:48.871011 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 10 00:16:48.872226 systemd[1]: Mounted media.mount - External Media Directory. Sep 10 00:16:48.873101 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 10 00:16:48.874113 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 10 00:16:48.875089 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 10 00:16:48.876056 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 10 00:16:48.877308 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 10 00:16:48.877490 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 10 00:16:48.878797 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 10 00:16:48.878987 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 10 00:16:48.880078 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 10 00:16:48.880216 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 10 00:16:48.881284 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 10 00:16:48.881411 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 10 00:16:48.882869 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 10 00:16:48.883987 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 10 00:16:48.884116 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 10 00:16:48.885223 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 10 00:16:48.885360 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 10 00:16:48.886608 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 10 00:16:48.887737 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 10 00:16:48.889051 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 10 00:16:48.900626 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 10 00:16:48.909938 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 10 00:16:48.911759 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 10 00:16:48.912736 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 10 00:16:48.912765 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 10 00:16:48.914512 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 10 00:16:48.919020 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
Sep 10 00:16:48.922184 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 10 00:16:48.923158 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 10 00:16:48.924371 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 10 00:16:48.926170 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 10 00:16:48.927119 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 10 00:16:48.930967 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 10 00:16:48.934088 systemd-journald[1112]: Time spent on flushing to /var/log/journal/f4b5cc37dc6c4e49af243a8befad27fb is 12.254ms for 851 entries. Sep 10 00:16:48.934088 systemd-journald[1112]: System Journal (/var/log/journal/f4b5cc37dc6c4e49af243a8befad27fb) is 8.0M, max 195.6M, 187.6M free. Sep 10 00:16:48.956161 systemd-journald[1112]: Received client request to flush runtime journal. Sep 10 00:16:48.935412 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 10 00:16:48.936587 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 10 00:16:48.939677 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 10 00:16:48.946003 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 10 00:16:48.949230 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 10 00:16:48.952108 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 10 00:16:48.953099 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 10 00:16:48.955358 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 10 00:16:48.958187 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 10 00:16:48.960554 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 10 00:16:48.965948 kernel: loop0: detected capacity change from 0 to 114432 Sep 10 00:16:48.966751 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 10 00:16:48.968636 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 10 00:16:48.977965 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 10 00:16:48.979187 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 10 00:16:48.981493 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 10 00:16:48.988363 systemd-tmpfiles[1157]: ACLs are not supported, ignoring. Sep 10 00:16:48.988384 systemd-tmpfiles[1157]: ACLs are not supported, ignoring. Sep 10 00:16:48.994894 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 10 00:16:49.000909 kernel: loop1: detected capacity change from 0 to 114328 Sep 10 00:16:49.003968 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 10 00:16:49.005683 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 10 00:16:49.006238 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
Sep 10 00:16:49.010323 udevadm[1173]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 10 00:16:49.030019 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 10 00:16:49.040187 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 10 00:16:49.042080 kernel: loop2: detected capacity change from 0 to 211168 Sep 10 00:16:49.054809 systemd-tmpfiles[1180]: ACLs are not supported, ignoring. Sep 10 00:16:49.055005 systemd-tmpfiles[1180]: ACLs are not supported, ignoring. Sep 10 00:16:49.058773 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 10 00:16:49.082874 kernel: loop3: detected capacity change from 0 to 114432 Sep 10 00:16:49.086921 kernel: loop4: detected capacity change from 0 to 114328 Sep 10 00:16:49.090848 kernel: loop5: detected capacity change from 0 to 211168 Sep 10 00:16:49.095099 (sd-merge)[1184]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 10 00:16:49.095512 (sd-merge)[1184]: Merged extensions into '/usr'. Sep 10 00:16:49.098975 systemd[1]: Reloading requested from client PID 1156 ('systemd-sysext') (unit systemd-sysext.service)... Sep 10 00:16:49.098996 systemd[1]: Reloading... Sep 10 00:16:49.145895 zram_generator::config[1206]: No configuration found. Sep 10 00:16:49.197327 ldconfig[1151]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 10 00:16:49.249481 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 10 00:16:49.284794 systemd[1]: Reloading finished in 185 ms. Sep 10 00:16:49.316563 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 10 00:16:49.318158 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 10 00:16:49.331997 systemd[1]: Starting ensure-sysext.service... Sep 10 00:16:49.333678 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 10 00:16:49.338313 systemd[1]: Reloading requested from client PID 1244 ('systemctl') (unit ensure-sysext.service)... Sep 10 00:16:49.338333 systemd[1]: Reloading... Sep 10 00:16:49.349773 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 10 00:16:49.350047 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 10 00:16:49.350679 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 10 00:16:49.350919 systemd-tmpfiles[1245]: ACLs are not supported, ignoring. Sep 10 00:16:49.350970 systemd-tmpfiles[1245]: ACLs are not supported, ignoring. Sep 10 00:16:49.353291 systemd-tmpfiles[1245]: Detected autofs mount point /boot during canonicalization of boot. Sep 10 00:16:49.353304 systemd-tmpfiles[1245]: Skipping /boot Sep 10 00:16:49.360452 systemd-tmpfiles[1245]: Detected autofs mount point /boot during canonicalization of boot. Sep 10 00:16:49.360480 systemd-tmpfiles[1245]: Skipping /boot Sep 10 00:16:49.383850 zram_generator::config[1270]: No configuration found. 
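The sd-merge lines record systemd-sysext discovering the three extension images and overlaying them onto /usr, which is what triggers the daemon reload that follows. A rough equivalent of the discovery step, scanning the documented sysext search directories (/etc/extensions is where Ignition placed the kubernetes.raw symlink earlier):

    from pathlib import Path

    SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    for d in map(Path, SEARCH_DIRS):
        if not d.is_dir():
            continue
        for image in sorted(d.glob("*.raw")):
            target = image.resolve() if image.is_symlink() else image
            print(f"{image} -> {target}")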
Sep 10 00:16:49.469180 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 10 00:16:49.504486 systemd[1]: Reloading finished in 165 ms. Sep 10 00:16:49.518664 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 10 00:16:49.526208 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 10 00:16:49.533523 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 10 00:16:49.535990 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 10 00:16:49.537988 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 10 00:16:49.541201 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 10 00:16:49.544089 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 10 00:16:49.548543 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 10 00:16:49.553496 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 10 00:16:49.554913 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 10 00:16:49.556911 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 10 00:16:49.560158 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 10 00:16:49.561232 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 10 00:16:49.562904 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 10 00:16:49.565668 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 10 00:16:49.565859 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 10 00:16:49.567389 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 10 00:16:49.569063 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 10 00:16:49.569228 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 10 00:16:49.576757 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 10 00:16:49.583058 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 10 00:16:49.585060 augenrules[1337]: No rules Sep 10 00:16:49.587148 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 10 00:16:49.591003 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 10 00:16:49.591159 systemd-udevd[1314]: Using default interface naming scheme 'v255'. Sep 10 00:16:49.592607 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 10 00:16:49.594126 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 10 00:16:49.596603 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 10 00:16:49.598112 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 10 00:16:49.600210 systemd[1]: modprobe@loop.service: Deactivated successfully. 
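The "Duplicate line for path ..." notices from systemd-tmpfiles come from the same path being claimed by more than one tmpfiles.d fragment. A toy reimplementation of that detection, collecting the path column from every fragment under /usr/lib/tmpfiles.d (the real tool also honors /etc and /run overrides and quoted paths; this sketch does not):

    from collections import defaultdict
    from pathlib import Path

    claims = defaultdict(list)
    for conf in sorted(Path("/usr/lib/tmpfiles.d").glob("*.conf")):
        for n, line in enumerate(conf.read_text().splitlines(), 1):
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            fields = line.split()
            if len(fields) >= 2:
                # tmpfiles.d format: Type Path Mode User Group Age Argument
                claims[fields[1]].append(f"{conf.name}:{n}")

    for path, sources in claims.items():
        if len(sources) > 1:
            print(path, "->", ", ".join(sources))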
Sep 10 00:16:49.600336 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 10 00:16:49.602492 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 10 00:16:49.602639 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 10 00:16:49.604612 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 10 00:16:49.607423 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 10 00:16:49.607556 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 10 00:16:49.610862 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 10 00:16:49.615300 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 10 00:16:49.622099 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 10 00:16:49.634035 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 10 00:16:49.636106 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 10 00:16:49.638124 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 10 00:16:49.640328 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 10 00:16:49.641240 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 10 00:16:49.642753 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 10 00:16:49.645216 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 10 00:16:49.645676 systemd[1]: Finished ensure-sysext.service. Sep 10 00:16:49.646688 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 10 00:16:49.646815 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 10 00:16:49.654516 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 10 00:16:49.655813 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Sep 10 00:16:49.662457 systemd-resolved[1312]: Positive Trust Anchors: Sep 10 00:16:49.662486 systemd-resolved[1312]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 10 00:16:49.662520 systemd-resolved[1312]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 10 00:16:49.675362 systemd-resolved[1312]: Defaulting to hostname 'linux'. Sep 10 00:16:49.678324 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 10 00:16:49.678482 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 10 00:16:49.681735 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
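The positive trust anchor logged by systemd-resolved, ". IN DS 20326 8 2 e06d...", is the root zone's DNSSEC anchor: key tag 20326, algorithm 8 (RSASHA256), digest type 2 (SHA-256). Per RFC 4034 section 5.1.4 the digest is SHA-256 over the owner name in wire form followed by the DNSKEY RDATA. The root KSK's key material is not in the log, so this sketch shows the computation only, not the real inputs:

    import hashlib

    def ds_sha256_digest(owner_name_wire: bytes, dnskey_rdata: bytes) -> str:
        """DNSKEY RDATA = flags(2 bytes) | protocol(1) | algorithm(1) | public key."""
        return hashlib.sha256(owner_name_wire + dnskey_rdata).hexdigest()

    # For the root zone the owner name in wire form is the single empty label:
    ROOT = b"\x00"
    # ds_sha256_digest(ROOT, root_ksk_rdata) would reproduce
    # e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
    # for the root KSK with key tag 20326.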
Sep 10 00:16:49.682812 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 10 00:16:49.684159 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 10 00:16:49.685126 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 10 00:16:49.686303 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 10 00:16:49.687851 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 10 00:16:49.689640 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 10 00:16:49.691165 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 10 00:16:49.707982 systemd-networkd[1379]: lo: Link UP Sep 10 00:16:49.707989 systemd-networkd[1379]: lo: Gained carrier Sep 10 00:16:49.708677 systemd-networkd[1379]: Enumeration completed Sep 10 00:16:49.709203 systemd-networkd[1379]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 10 00:16:49.709213 systemd-networkd[1379]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 10 00:16:49.709682 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 10 00:16:49.709763 systemd-networkd[1379]: eth0: Link UP Sep 10 00:16:49.709766 systemd-networkd[1379]: eth0: Gained carrier Sep 10 00:16:49.709779 systemd-networkd[1379]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 10 00:16:49.710746 systemd[1]: Reached target network.target - Network. Sep 10 00:16:49.715789 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 10 00:16:49.718877 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1375) Sep 10 00:16:49.723912 systemd-networkd[1379]: eth0: DHCPv4 address 10.0.0.124/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 10 00:16:49.736205 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 10 00:16:49.323128 systemd-resolved[1312]: Clock change detected. Flushing caches. Sep 10 00:16:49.328679 systemd-journald[1112]: Time jumped backwards, rotating. Sep 10 00:16:49.323180 systemd-timesyncd[1382]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 10 00:16:49.323229 systemd-timesyncd[1382]: Initial clock synchronization to Wed 2025-09-10 00:16:49.323086 UTC. Sep 10 00:16:49.324162 systemd[1]: Reached target time-set.target - System Time Set. Sep 10 00:16:49.327320 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 10 00:16:49.335963 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 10 00:16:49.348797 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 10 00:16:49.375001 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 10 00:16:49.385934 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 10 00:16:49.388837 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 10 00:16:49.400359 lvm[1405]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 10 00:16:49.410835 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
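The timestamps above run backwards (…49.736205, then …49.323128) because systemd-timesyncd stepped the system clock during boot, which is exactly what resolved's "Clock change detected" and journald's "Time jumped backwards, rotating" report. A small parser can spot such non-monotonic jumps in a saved console log; the stamp format here omits the year, which is fine for deltas:

    from datetime import datetime

    def parse(stamp: str) -> datetime:
        # e.g. "Sep 10 00:16:49.736205"
        return datetime.strptime(stamp, "%b %d %H:%M:%S.%f")

    stamps = ["Sep 10 00:16:49.736205", "Sep 10 00:16:49.323128"]
    for prev, cur in zip(stamps, stamps[1:]):
        delta = (parse(cur) - parse(prev)).total_seconds()
        if delta < 0:
            print(f"clock stepped back {-delta:.6f}s between entries")  # 0.413077s here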
Sep 10 00:16:49.433842 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 10 00:16:49.435037 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 10 00:16:49.435913 systemd[1]: Reached target sysinit.target - System Initialization. Sep 10 00:16:49.436788 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 10 00:16:49.437687 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 10 00:16:49.438918 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 10 00:16:49.439822 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 10 00:16:49.440720 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 10 00:16:49.441607 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 10 00:16:49.441639 systemd[1]: Reached target paths.target - Path Units. Sep 10 00:16:49.442427 systemd[1]: Reached target timers.target - Timer Units. Sep 10 00:16:49.443870 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 10 00:16:49.445914 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 10 00:16:49.453643 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 10 00:16:49.455766 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 10 00:16:49.457077 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 10 00:16:49.458072 systemd[1]: Reached target sockets.target - Socket Units. Sep 10 00:16:49.458845 systemd[1]: Reached target basic.target - Basic System. Sep 10 00:16:49.459551 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 10 00:16:49.459583 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 10 00:16:49.460484 systemd[1]: Starting containerd.service - containerd container runtime... Sep 10 00:16:49.462332 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 10 00:16:49.464876 lvm[1412]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 10 00:16:49.466878 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 10 00:16:49.469132 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 10 00:16:49.470356 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 10 00:16:49.474194 jq[1415]: false Sep 10 00:16:49.474513 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 10 00:16:49.476797 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 10 00:16:49.479845 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 10 00:16:49.482529 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 10 00:16:49.483679 dbus-daemon[1414]: [system] SELinux support is enabled Sep 10 00:16:49.486012 systemd[1]: Starting systemd-logind.service - User Login Management... 
Sep 10 00:16:49.487478 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 10 00:16:49.487861 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 10 00:16:49.488358 extend-filesystems[1416]: Found loop3 Sep 10 00:16:49.489689 extend-filesystems[1416]: Found loop4 Sep 10 00:16:49.489689 extend-filesystems[1416]: Found loop5 Sep 10 00:16:49.489689 extend-filesystems[1416]: Found vda Sep 10 00:16:49.489689 extend-filesystems[1416]: Found vda1 Sep 10 00:16:49.489689 extend-filesystems[1416]: Found vda2 Sep 10 00:16:49.489689 extend-filesystems[1416]: Found vda3 Sep 10 00:16:49.489689 extend-filesystems[1416]: Found usr Sep 10 00:16:49.489689 extend-filesystems[1416]: Found vda4 Sep 10 00:16:49.489689 extend-filesystems[1416]: Found vda6 Sep 10 00:16:49.489689 extend-filesystems[1416]: Found vda7 Sep 10 00:16:49.489689 extend-filesystems[1416]: Found vda9 Sep 10 00:16:49.489689 extend-filesystems[1416]: Checking size of /dev/vda9 Sep 10 00:16:49.489511 systemd[1]: Starting update-engine.service - Update Engine... Sep 10 00:16:49.492210 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 10 00:16:49.495207 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 10 00:16:49.502779 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 10 00:16:49.505164 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 10 00:16:49.505571 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 10 00:16:49.507503 jq[1427]: true Sep 10 00:16:49.509058 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 10 00:16:49.509216 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 10 00:16:49.511619 systemd[1]: motdgen.service: Deactivated successfully. Sep 10 00:16:49.511786 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 10 00:16:49.523591 extend-filesystems[1416]: Resized partition /dev/vda9 Sep 10 00:16:49.527202 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1375) Sep 10 00:16:49.527245 jq[1439]: true Sep 10 00:16:49.530238 extend-filesystems[1443]: resize2fs 1.47.1 (20-May-2024) Sep 10 00:16:49.537352 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 10 00:16:49.537398 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 10 00:16:49.537905 (ntainerd)[1448]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 10 00:16:49.539774 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 10 00:16:49.540297 update_engine[1425]: I20250910 00:16:49.540053 1425 main.cc:92] Flatcar Update Engine starting Sep 10 00:16:49.542221 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 10 00:16:49.542243 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
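Size arithmetic for the online resize the kernel just logged: ext4 on vda9 grew from 553472 to 1864699 blocks at 4 KiB per block, i.e. the root filesystem was expanded to fill the disk on first boot:

    BLOCK = 4096
    old, new = 553472, 1864699
    print(f"before: {old * BLOCK / 2**30:.2f} GiB")          # ~2.11 GiB
    print(f"after : {new * BLOCK / 2**30:.2f} GiB")          # ~7.11 GiB
    print(f"grew  : {(new - old) * BLOCK / 2**30:.2f} GiB")  # ~5.00 GiB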
Sep 10 00:16:49.544081 update_engine[1425]: I20250910 00:16:49.544015 1425 update_check_scheduler.cc:74] Next update check in 9m9s Sep 10 00:16:49.544912 systemd[1]: Started update-engine.service - Update Engine. Sep 10 00:16:49.549191 tar[1436]: linux-arm64/LICENSE Sep 10 00:16:49.549191 tar[1436]: linux-arm64/helm Sep 10 00:16:49.560765 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 10 00:16:49.560887 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 10 00:16:49.574101 extend-filesystems[1443]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 10 00:16:49.574101 extend-filesystems[1443]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 10 00:16:49.574101 extend-filesystems[1443]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 10 00:16:49.573447 systemd-logind[1423]: Watching system buttons on /dev/input/event0 (Power Button) Sep 10 00:16:49.584574 bash[1468]: Updated "/home/core/.ssh/authorized_keys" Sep 10 00:16:49.584698 extend-filesystems[1416]: Resized filesystem in /dev/vda9 Sep 10 00:16:49.575931 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 10 00:16:49.576088 systemd-logind[1423]: New seat seat0. Sep 10 00:16:49.576130 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 10 00:16:49.584585 systemd[1]: Started systemd-logind.service - User Login Management. Sep 10 00:16:49.585815 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 10 00:16:49.588502 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 10 00:16:49.628885 locksmithd[1458]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 10 00:16:49.681713 containerd[1448]: time="2025-09-10T00:16:49.681631117Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 10 00:16:49.708681 containerd[1448]: time="2025-09-10T00:16:49.708638117Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 10 00:16:49.711105 containerd[1448]: time="2025-09-10T00:16:49.710011597Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.104-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 10 00:16:49.711105 containerd[1448]: time="2025-09-10T00:16:49.710042157Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 10 00:16:49.711105 containerd[1448]: time="2025-09-10T00:16:49.710057357Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 10 00:16:49.711105 containerd[1448]: time="2025-09-10T00:16:49.710200197Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 10 00:16:49.711105 containerd[1448]: time="2025-09-10T00:16:49.710218477Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 10 00:16:49.711105 containerd[1448]: time="2025-09-10T00:16:49.710266797Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 10 00:16:49.711105 containerd[1448]: time="2025-09-10T00:16:49.710278477Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 10 00:16:49.711105 containerd[1448]: time="2025-09-10T00:16:49.710439277Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 10 00:16:49.711105 containerd[1448]: time="2025-09-10T00:16:49.710454517Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 10 00:16:49.711105 containerd[1448]: time="2025-09-10T00:16:49.710466557Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 10 00:16:49.711105 containerd[1448]: time="2025-09-10T00:16:49.710476717Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 10 00:16:49.711333 containerd[1448]: time="2025-09-10T00:16:49.710549397Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 10 00:16:49.711333 containerd[1448]: time="2025-09-10T00:16:49.710727637Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 10 00:16:49.711333 containerd[1448]: time="2025-09-10T00:16:49.710848837Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 10 00:16:49.711333 containerd[1448]: time="2025-09-10T00:16:49.710862957Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 10 00:16:49.711333 containerd[1448]: time="2025-09-10T00:16:49.710938477Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 10 00:16:49.711333 containerd[1448]: time="2025-09-10T00:16:49.710975477Z" level=info msg="metadata content store policy set" policy=shared Sep 10 00:16:49.715549 containerd[1448]: time="2025-09-10T00:16:49.715520477Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 10 00:16:49.715918 containerd[1448]: time="2025-09-10T00:16:49.715855717Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 10 00:16:49.716041 containerd[1448]: time="2025-09-10T00:16:49.716024477Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 10 00:16:49.716152 containerd[1448]: time="2025-09-10T00:16:49.716087437Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 10 00:16:49.716288 containerd[1448]: time="2025-09-10T00:16:49.716210877Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 10 00:16:49.716545 containerd[1448]: time="2025-09-10T00:16:49.716524357Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Sep 10 00:16:49.716971 containerd[1448]: time="2025-09-10T00:16:49.716949437Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 10 00:16:49.717199 containerd[1448]: time="2025-09-10T00:16:49.717169957Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 10 00:16:49.717373 containerd[1448]: time="2025-09-10T00:16:49.717309637Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 10 00:16:49.717497 containerd[1448]: time="2025-09-10T00:16:49.717435117Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 10 00:16:49.717557 containerd[1448]: time="2025-09-10T00:16:49.717544117Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 10 00:16:49.717657 containerd[1448]: time="2025-09-10T00:16:49.717642197Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 10 00:16:49.717784 containerd[1448]: time="2025-09-10T00:16:49.717767717Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 10 00:16:49.717843 containerd[1448]: time="2025-09-10T00:16:49.717830357Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 10 00:16:49.717950 containerd[1448]: time="2025-09-10T00:16:49.717934317Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 10 00:16:49.718019 containerd[1448]: time="2025-09-10T00:16:49.717994277Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 10 00:16:49.718220 containerd[1448]: time="2025-09-10T00:16:49.718091717Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 10 00:16:49.718220 containerd[1448]: time="2025-09-10T00:16:49.718110597Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 10 00:16:49.718220 containerd[1448]: time="2025-09-10T00:16:49.718133437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 10 00:16:49.718220 containerd[1448]: time="2025-09-10T00:16:49.718147757Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 10 00:16:49.718379 containerd[1448]: time="2025-09-10T00:16:49.718360837Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 10 00:16:49.718587 containerd[1448]: time="2025-09-10T00:16:49.718568277Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 10 00:16:49.719781 containerd[1448]: time="2025-09-10T00:16:49.718698317Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 10 00:16:49.719781 containerd[1448]: time="2025-09-10T00:16:49.718782157Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 10 00:16:49.719781 containerd[1448]: time="2025-09-10T00:16:49.718798197Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Sep 10 00:16:49.719781 containerd[1448]: time="2025-09-10T00:16:49.718811597Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 10 00:16:49.719781 containerd[1448]: time="2025-09-10T00:16:49.718830077Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 10 00:16:49.719781 containerd[1448]: time="2025-09-10T00:16:49.718844997Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 10 00:16:49.719781 containerd[1448]: time="2025-09-10T00:16:49.718858517Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 10 00:16:49.719781 containerd[1448]: time="2025-09-10T00:16:49.718869877Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 10 00:16:49.719781 containerd[1448]: time="2025-09-10T00:16:49.718881997Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 10 00:16:49.719781 containerd[1448]: time="2025-09-10T00:16:49.718899717Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 10 00:16:49.719781 containerd[1448]: time="2025-09-10T00:16:49.718919957Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 10 00:16:49.719781 containerd[1448]: time="2025-09-10T00:16:49.718931797Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 10 00:16:49.719781 containerd[1448]: time="2025-09-10T00:16:49.718942517Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 10 00:16:49.719781 containerd[1448]: time="2025-09-10T00:16:49.719050357Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 10 00:16:49.720034 containerd[1448]: time="2025-09-10T00:16:49.719076757Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 10 00:16:49.720034 containerd[1448]: time="2025-09-10T00:16:49.719087037Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 10 00:16:49.720034 containerd[1448]: time="2025-09-10T00:16:49.719099037Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 10 00:16:49.720034 containerd[1448]: time="2025-09-10T00:16:49.719108717Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 10 00:16:49.720034 containerd[1448]: time="2025-09-10T00:16:49.719120077Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 10 00:16:49.720034 containerd[1448]: time="2025-09-10T00:16:49.719130517Z" level=info msg="NRI interface is disabled by configuration." Sep 10 00:16:49.720034 containerd[1448]: time="2025-09-10T00:16:49.719145677Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 10 00:16:49.720147 containerd[1448]: time="2025-09-10T00:16:49.719476557Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 10 00:16:49.720147 containerd[1448]: time="2025-09-10T00:16:49.719537197Z" level=info msg="Connect containerd service" Sep 10 00:16:49.720147 containerd[1448]: time="2025-09-10T00:16:49.719560637Z" level=info msg="using legacy CRI server" Sep 10 00:16:49.720147 containerd[1448]: time="2025-09-10T00:16:49.719566877Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 10 00:16:49.720147 containerd[1448]: time="2025-09-10T00:16:49.719667557Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 10 00:16:49.721337 containerd[1448]: time="2025-09-10T00:16:49.721308197Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 10 00:16:49.721999 
containerd[1448]: time="2025-09-10T00:16:49.721924917Z" level=info msg="Start subscribing containerd event" Sep 10 00:16:49.721999 containerd[1448]: time="2025-09-10T00:16:49.721977317Z" level=info msg="Start recovering state" Sep 10 00:16:49.722060 containerd[1448]: time="2025-09-10T00:16:49.722038557Z" level=info msg="Start event monitor" Sep 10 00:16:49.722060 containerd[1448]: time="2025-09-10T00:16:49.722057597Z" level=info msg="Start snapshots syncer" Sep 10 00:16:49.722117 containerd[1448]: time="2025-09-10T00:16:49.722067077Z" level=info msg="Start cni network conf syncer for default" Sep 10 00:16:49.722117 containerd[1448]: time="2025-09-10T00:16:49.722077397Z" level=info msg="Start streaming server" Sep 10 00:16:49.722336 containerd[1448]: time="2025-09-10T00:16:49.722315717Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 10 00:16:49.722494 containerd[1448]: time="2025-09-10T00:16:49.722478517Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 10 00:16:49.723726 containerd[1448]: time="2025-09-10T00:16:49.722650917Z" level=info msg="containerd successfully booted in 0.042324s" Sep 10 00:16:49.722774 systemd[1]: Started containerd.service - containerd container runtime. Sep 10 00:16:49.927516 tar[1436]: linux-arm64/README.md Sep 10 00:16:49.940130 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 10 00:16:50.375949 systemd-networkd[1379]: eth0: Gained IPv6LL Sep 10 00:16:50.382851 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 10 00:16:50.384348 systemd[1]: Reached target network-online.target - Network is Online. Sep 10 00:16:50.395983 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 10 00:16:50.398527 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 00:16:50.400573 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 10 00:16:50.418391 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 10 00:16:50.419704 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 10 00:16:50.419894 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 10 00:16:50.422603 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 10 00:16:50.957791 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 00:16:50.961355 (kubelet)[1511]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 10 00:16:51.174008 sshd_keygen[1435]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 10 00:16:51.193432 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 10 00:16:51.204988 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 10 00:16:51.211075 systemd[1]: issuegen.service: Deactivated successfully. Sep 10 00:16:51.211251 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 10 00:16:51.214656 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 10 00:16:51.226508 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 10 00:16:51.233027 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 10 00:16:51.235101 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 10 00:16:51.236294 systemd[1]: Reached target getty.target - Login Prompts. 
Sep 10 00:16:51.237341 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 10 00:16:51.238515 systemd[1]: Startup finished in 503ms (kernel) + 4.664s (initrd) + 3.341s (userspace) = 8.509s. Sep 10 00:16:51.306245 kubelet[1511]: E0910 00:16:51.306184 1511 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 10 00:16:51.308987 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 10 00:16:51.309248 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 10 00:16:56.306390 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 10 00:16:56.307762 systemd[1]: Started sshd@0-10.0.0.124:22-10.0.0.1:50308.service - OpenSSH per-connection server daemon (10.0.0.1:50308). Sep 10 00:16:56.352354 sshd[1540]: Accepted publickey for core from 10.0.0.1 port 50308 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o Sep 10 00:16:56.354258 sshd[1540]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:16:56.361944 systemd-logind[1423]: New session 1 of user core. Sep 10 00:16:56.362875 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 10 00:16:56.379962 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 10 00:16:56.389734 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 10 00:16:56.391236 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 10 00:16:56.396735 (systemd)[1544]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:16:56.465909 systemd[1544]: Queued start job for default target default.target. Sep 10 00:16:56.477564 systemd[1544]: Created slice app.slice - User Application Slice. Sep 10 00:16:56.477593 systemd[1544]: Reached target paths.target - Paths. Sep 10 00:16:56.477605 systemd[1544]: Reached target timers.target - Timers. Sep 10 00:16:56.478730 systemd[1544]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 10 00:16:56.487536 systemd[1544]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 10 00:16:56.487595 systemd[1544]: Reached target sockets.target - Sockets. Sep 10 00:16:56.487607 systemd[1544]: Reached target basic.target - Basic System. Sep 10 00:16:56.487638 systemd[1544]: Reached target default.target - Main User Target. Sep 10 00:16:56.487662 systemd[1544]: Startup finished in 86ms. Sep 10 00:16:56.487866 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 10 00:16:56.488994 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 10 00:16:56.550855 systemd[1]: Started sshd@1-10.0.0.124:22-10.0.0.1:50314.service - OpenSSH per-connection server daemon (10.0.0.1:50314). Sep 10 00:16:56.590285 sshd[1555]: Accepted publickey for core from 10.0.0.1 port 50314 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o Sep 10 00:16:56.591536 sshd[1555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:16:56.595441 systemd-logind[1423]: New session 2 of user core. Sep 10 00:16:56.605881 systemd[1]: Started session-2.scope - Session 2 of User core. 
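The kubelet failure above is the normal pre-init state rather than a broken install: /var/lib/kubelet/config.yaml is only written when kubeadm init (or kubeadm join) runs, and the unit was started before that has happened, so it will keep crash-looping until then. For reference, a minimal sketch of the file kubeadm later generates (fields abridged, values illustrative):

    # /var/lib/kubelet/config.yaml -- written by kubeadm, not by hand
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    staticPodPath: /etc/kubernetes/manifests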
Sep 10 00:16:56.658576 sshd[1555]: pam_unix(sshd:session): session closed for user core Sep 10 00:16:56.672484 systemd[1]: sshd@1-10.0.0.124:22-10.0.0.1:50314.service: Deactivated successfully. Sep 10 00:16:56.675429 systemd[1]: session-2.scope: Deactivated successfully. Sep 10 00:16:56.676469 systemd-logind[1423]: Session 2 logged out. Waiting for processes to exit. Sep 10 00:16:56.677627 systemd[1]: Started sshd@2-10.0.0.124:22-10.0.0.1:50318.service - OpenSSH per-connection server daemon (10.0.0.1:50318). Sep 10 00:16:56.679124 systemd-logind[1423]: Removed session 2. Sep 10 00:16:56.710232 sshd[1562]: Accepted publickey for core from 10.0.0.1 port 50318 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o Sep 10 00:16:56.711311 sshd[1562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:16:56.716169 systemd-logind[1423]: New session 3 of user core. Sep 10 00:16:56.722875 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 10 00:16:56.769800 sshd[1562]: pam_unix(sshd:session): session closed for user core Sep 10 00:16:56.782866 systemd[1]: sshd@2-10.0.0.124:22-10.0.0.1:50318.service: Deactivated successfully. Sep 10 00:16:56.784128 systemd[1]: session-3.scope: Deactivated successfully. Sep 10 00:16:56.786798 systemd-logind[1423]: Session 3 logged out. Waiting for processes to exit. Sep 10 00:16:56.787857 systemd[1]: Started sshd@3-10.0.0.124:22-10.0.0.1:50326.service - OpenSSH per-connection server daemon (10.0.0.1:50326). Sep 10 00:16:56.788545 systemd-logind[1423]: Removed session 3. Sep 10 00:16:56.819313 sshd[1569]: Accepted publickey for core from 10.0.0.1 port 50326 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o Sep 10 00:16:56.820507 sshd[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:16:56.824161 systemd-logind[1423]: New session 4 of user core. Sep 10 00:16:56.836875 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 10 00:16:56.886852 sshd[1569]: pam_unix(sshd:session): session closed for user core Sep 10 00:16:56.894775 systemd[1]: sshd@3-10.0.0.124:22-10.0.0.1:50326.service: Deactivated successfully. Sep 10 00:16:56.896025 systemd[1]: session-4.scope: Deactivated successfully. Sep 10 00:16:56.897326 systemd-logind[1423]: Session 4 logged out. Waiting for processes to exit. Sep 10 00:16:56.898419 systemd[1]: Started sshd@4-10.0.0.124:22-10.0.0.1:50328.service - OpenSSH per-connection server daemon (10.0.0.1:50328). Sep 10 00:16:56.899320 systemd-logind[1423]: Removed session 4. Sep 10 00:16:56.929662 sshd[1576]: Accepted publickey for core from 10.0.0.1 port 50328 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o Sep 10 00:16:56.930719 sshd[1576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:16:56.934094 systemd-logind[1423]: New session 5 of user core. Sep 10 00:16:56.944869 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 10 00:16:56.999566 sudo[1579]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 10 00:16:56.999854 sudo[1579]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 10 00:16:57.010450 sudo[1579]: pam_unix(sudo:session): session closed for user root Sep 10 00:16:57.012008 sshd[1576]: pam_unix(sshd:session): session closed for user core Sep 10 00:16:57.021958 systemd[1]: sshd@4-10.0.0.124:22-10.0.0.1:50328.service: Deactivated successfully. 
Sep 10 00:16:57.023278 systemd[1]: session-5.scope: Deactivated successfully. Sep 10 00:16:57.024511 systemd-logind[1423]: Session 5 logged out. Waiting for processes to exit. Sep 10 00:16:57.025612 systemd[1]: Started sshd@5-10.0.0.124:22-10.0.0.1:50330.service - OpenSSH per-connection server daemon (10.0.0.1:50330). Sep 10 00:16:57.026741 systemd-logind[1423]: Removed session 5. Sep 10 00:16:57.057263 sshd[1584]: Accepted publickey for core from 10.0.0.1 port 50330 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o Sep 10 00:16:57.058318 sshd[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:16:57.062786 systemd-logind[1423]: New session 6 of user core. Sep 10 00:16:57.072890 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 10 00:16:57.122254 sudo[1588]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 10 00:16:57.122521 sudo[1588]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 10 00:16:57.125128 sudo[1588]: pam_unix(sudo:session): session closed for user root Sep 10 00:16:57.129286 sudo[1587]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 10 00:16:57.129788 sudo[1587]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 10 00:16:57.142025 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 10 00:16:57.143024 auditctl[1591]: No rules Sep 10 00:16:57.143818 systemd[1]: audit-rules.service: Deactivated successfully. Sep 10 00:16:57.143999 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 10 00:16:57.145413 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 10 00:16:57.166655 augenrules[1609]: No rules Sep 10 00:16:57.167714 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 10 00:16:57.169030 sudo[1587]: pam_unix(sudo:session): session closed for user root Sep 10 00:16:57.170362 sshd[1584]: pam_unix(sshd:session): session closed for user core Sep 10 00:16:57.186867 systemd[1]: sshd@5-10.0.0.124:22-10.0.0.1:50330.service: Deactivated successfully. Sep 10 00:16:57.188169 systemd[1]: session-6.scope: Deactivated successfully. Sep 10 00:16:57.189735 systemd-logind[1423]: Session 6 logged out. Waiting for processes to exit. Sep 10 00:16:57.190384 systemd[1]: Started sshd@6-10.0.0.124:22-10.0.0.1:50344.service - OpenSSH per-connection server daemon (10.0.0.1:50344). Sep 10 00:16:57.191052 systemd-logind[1423]: Removed session 6. Sep 10 00:16:57.221654 sshd[1617]: Accepted publickey for core from 10.0.0.1 port 50344 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o Sep 10 00:16:57.222667 sshd[1617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:16:57.226243 systemd-logind[1423]: New session 7 of user core. Sep 10 00:16:57.233873 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 10 00:16:57.283119 sudo[1620]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 10 00:16:57.283371 sudo[1620]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 10 00:16:57.542998 systemd[1]: Starting docker.service - Docker Application Container Engine... 
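The audit-rules sequence above (remove the shipped rule files, then restart audit-rules.service) deliberately leaves the kernel audit subsystem with an empty ruleset; both auditctl and augenrules confirm it with "No rules". Verifying that state by hand would look roughly like:

    auditctl -l            # prints "No rules" for an empty ruleset
    ls /etc/audit/rules.d/ # 80-selinux.rules and 99-default.rules were removed above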
Sep 10 00:16:57.543107 (dockerd)[1638]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 10 00:16:57.753508 dockerd[1638]: time="2025-09-10T00:16:57.753446637Z" level=info msg="Starting up" Sep 10 00:16:57.893911 dockerd[1638]: time="2025-09-10T00:16:57.893821917Z" level=info msg="Loading containers: start." Sep 10 00:16:57.973801 kernel: Initializing XFRM netlink socket Sep 10 00:16:58.030593 systemd-networkd[1379]: docker0: Link UP Sep 10 00:16:58.048888 dockerd[1638]: time="2025-09-10T00:16:58.048848077Z" level=info msg="Loading containers: done." Sep 10 00:16:58.059418 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3423858352-merged.mount: Deactivated successfully. Sep 10 00:16:58.060830 dockerd[1638]: time="2025-09-10T00:16:58.060793397Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 10 00:16:58.060902 dockerd[1638]: time="2025-09-10T00:16:58.060879997Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 10 00:16:58.060988 dockerd[1638]: time="2025-09-10T00:16:58.060970717Z" level=info msg="Daemon has completed initialization" Sep 10 00:16:58.085764 dockerd[1638]: time="2025-09-10T00:16:58.085647437Z" level=info msg="API listen on /run/docker.sock" Sep 10 00:16:58.085902 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 10 00:16:58.601255 containerd[1448]: time="2025-09-10T00:16:58.601211157Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\"" Sep 10 00:16:59.156718 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1959716120.mount: Deactivated successfully. 
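With "API listen on /run/docker.sock" logged, the daemon is up and reachable over its Unix socket; the overlay2 warning above it only concerns diff performance when building images, not running containers. A quick liveness check against the Engine API (assuming curl and the docker CLI are present):

    curl --unix-socket /run/docker.sock http://localhost/_ping ; echo   # prints OK
    docker info --format '{{.Driver}}'                                  # expect overlay2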
Sep 10 00:17:00.244883 containerd[1448]: time="2025-09-10T00:17:00.244834557Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:17:00.245780 containerd[1448]: time="2025-09-10T00:17:00.245530877Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.4: active requests=0, bytes read=27352615" Sep 10 00:17:00.246491 containerd[1448]: time="2025-09-10T00:17:00.246449557Z" level=info msg="ImageCreate event name:\"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:17:00.249487 containerd[1448]: time="2025-09-10T00:17:00.249457157Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:17:00.251264 containerd[1448]: time="2025-09-10T00:17:00.250592757Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.4\" with image id \"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\", size \"27349413\" in 1.64933676s" Sep 10 00:17:00.251264 containerd[1448]: time="2025-09-10T00:17:00.250629997Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\" returns image reference \"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\"" Sep 10 00:17:00.251861 containerd[1448]: time="2025-09-10T00:17:00.251838677Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\"" Sep 10 00:17:01.332790 containerd[1448]: time="2025-09-10T00:17:01.332731677Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:17:01.334501 containerd[1448]: time="2025-09-10T00:17:01.334211517Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.4: active requests=0, bytes read=23536979" Sep 10 00:17:01.335409 containerd[1448]: time="2025-09-10T00:17:01.335372157Z" level=info msg="ImageCreate event name:\"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:17:01.338892 containerd[1448]: time="2025-09-10T00:17:01.338852317Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:17:01.339718 containerd[1448]: time="2025-09-10T00:17:01.339685757Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.4\" with image id \"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\", size \"25093155\" in 1.08781536s" Sep 10 00:17:01.339790 containerd[1448]: time="2025-09-10T00:17:01.339721637Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\" returns image reference \"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\"" Sep 10 00:17:01.340301 
containerd[1448]: time="2025-09-10T00:17:01.340270397Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\"" Sep 10 00:17:01.559382 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 10 00:17:01.568906 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 00:17:01.669813 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 00:17:01.673104 (kubelet)[1854]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 10 00:17:01.725883 kubelet[1854]: E0910 00:17:01.725840 1854 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 10 00:17:01.729194 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 10 00:17:01.729355 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 10 00:17:02.607503 containerd[1448]: time="2025-09-10T00:17:02.606796877Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:17:02.607503 containerd[1448]: time="2025-09-10T00:17:02.607221997Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.4: active requests=0, bytes read=18292016" Sep 10 00:17:02.609011 containerd[1448]: time="2025-09-10T00:17:02.608971677Z" level=info msg="ImageCreate event name:\"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:17:02.614764 containerd[1448]: time="2025-09-10T00:17:02.614430837Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:17:02.615700 containerd[1448]: time="2025-09-10T00:17:02.615662997Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.4\" with image id \"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\", size \"19848210\" in 1.27536276s" Sep 10 00:17:02.615700 containerd[1448]: time="2025-09-10T00:17:02.615699797Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\" returns image reference \"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\"" Sep 10 00:17:02.616157 containerd[1448]: time="2025-09-10T00:17:02.616132597Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\"" Sep 10 00:17:03.597778 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3229569802.mount: Deactivated successfully. 
Sep 10 00:17:03.823578 containerd[1448]: time="2025-09-10T00:17:03.823527277Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:17:03.824224 containerd[1448]: time="2025-09-10T00:17:03.824090157Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.4: active requests=0, bytes read=28199961" Sep 10 00:17:03.824930 containerd[1448]: time="2025-09-10T00:17:03.824902757Z" level=info msg="ImageCreate event name:\"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:17:03.826921 containerd[1448]: time="2025-09-10T00:17:03.826872077Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:17:03.827685 containerd[1448]: time="2025-09-10T00:17:03.827516877Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.4\" with image id \"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\", repo tag \"registry.k8s.io/kube-proxy:v1.33.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\", size \"28198978\" in 1.21134932s" Sep 10 00:17:03.827685 containerd[1448]: time="2025-09-10T00:17:03.827556917Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\" returns image reference \"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\"" Sep 10 00:17:03.828244 containerd[1448]: time="2025-09-10T00:17:03.827935597Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 10 00:17:04.413084 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount270939672.mount: Deactivated successfully. 
Sep 10 00:17:05.486805 containerd[1448]: time="2025-09-10T00:17:05.486036997Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:17:05.487741 containerd[1448]: time="2025-09-10T00:17:05.487713077Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152119" Sep 10 00:17:05.488833 containerd[1448]: time="2025-09-10T00:17:05.488801197Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:17:05.492213 containerd[1448]: time="2025-09-10T00:17:05.492165637Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:17:05.493853 containerd[1448]: time="2025-09-10T00:17:05.493485797Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.66551932s" Sep 10 00:17:05.493853 containerd[1448]: time="2025-09-10T00:17:05.493534597Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Sep 10 00:17:05.493977 containerd[1448]: time="2025-09-10T00:17:05.493923917Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 10 00:17:05.918693 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1253415275.mount: Deactivated successfully. 
Sep 10 00:17:05.922542 containerd[1448]: time="2025-09-10T00:17:05.922499637Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:17:05.922973 containerd[1448]: time="2025-09-10T00:17:05.922944277Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Sep 10 00:17:05.923995 containerd[1448]: time="2025-09-10T00:17:05.923971797Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:17:05.927607 containerd[1448]: time="2025-09-10T00:17:05.926083997Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:17:05.927607 containerd[1448]: time="2025-09-10T00:17:05.927497677Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 433.54304ms" Sep 10 00:17:05.927607 containerd[1448]: time="2025-09-10T00:17:05.927523557Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 10 00:17:05.927920 containerd[1448]: time="2025-09-10T00:17:05.927897077Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 10 00:17:06.348410 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3875253566.mount: Deactivated successfully. Sep 10 00:17:08.330679 containerd[1448]: time="2025-09-10T00:17:08.330619637Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:17:08.331224 containerd[1448]: time="2025-09-10T00:17:08.331181117Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69465297" Sep 10 00:17:08.332328 containerd[1448]: time="2025-09-10T00:17:08.332256317Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:17:08.335839 containerd[1448]: time="2025-09-10T00:17:08.335799357Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:17:08.337919 containerd[1448]: time="2025-09-10T00:17:08.337794037Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.40987108s" Sep 10 00:17:08.337919 containerd[1448]: time="2025-09-10T00:17:08.337831397Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Sep 10 00:17:11.933311 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
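"Scheduled restart job, restart counter is at 2" is systemd re-queuing the still-unconfigured kubelet. The ~10 s gaps between attempts (00:16:51 -> 00:17:01 -> 00:17:12) are consistent with the usual kubeadm-style unit, which sets Restart=always with RestartSec=10; that is an assumption about this particular unit, and the effective policy can be read back directly:

    systemctl show kubelet -p Restart -p RestartUSec
    # Restart=always       (assumed, per the stock kubeadm drop-in)
    # RestartUSec=10s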
Sep 10 00:17:11.942207 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 00:17:12.064839 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 00:17:12.068280 (kubelet)[2019]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 10 00:17:12.098743 kubelet[2019]: E0910 00:17:12.098697 2019 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 10 00:17:12.101393 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 10 00:17:12.101528 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 10 00:17:12.960298 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 00:17:12.971167 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 00:17:12.991736 systemd[1]: Reloading requested from client PID 2035 ('systemctl') (unit session-7.scope)... Sep 10 00:17:12.991772 systemd[1]: Reloading... Sep 10 00:17:13.062777 zram_generator::config[2074]: No configuration found. Sep 10 00:17:13.235072 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 10 00:17:13.287873 systemd[1]: Reloading finished in 295 ms. Sep 10 00:17:13.331670 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 00:17:13.333860 systemd[1]: kubelet.service: Deactivated successfully. Sep 10 00:17:13.334037 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 00:17:13.335450 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 00:17:13.445215 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 00:17:13.449365 (kubelet)[2121]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 10 00:17:13.485563 kubelet[2121]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 10 00:17:13.485563 kubelet[2121]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 10 00:17:13.485563 kubelet[2121]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
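The three deprecation warnings above point at the same migration: --container-runtime-endpoint and --volume-plugin-dir move into the KubeletConfiguration file, while --pod-infra-container-image simply goes away once the kubelet takes the sandbox image from the CRI (per the 1.35 note in the warning itself). The config-file equivalents, as a sketch against kubelet.config.k8s.io/v1beta1:

    # additions to /var/lib/kubelet/config.yaml
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/

The volume-plugin path here matches the Flexvolume directory the kubelet recreates a few lines below.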
Sep 10 00:17:13.485926 kubelet[2121]: I0910 00:17:13.485544 2121 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 10 00:17:14.204475 kubelet[2121]: I0910 00:17:14.204424 2121 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 10 00:17:14.204475 kubelet[2121]: I0910 00:17:14.204459 2121 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 10 00:17:14.205216 kubelet[2121]: I0910 00:17:14.204929 2121 server.go:956] "Client rotation is on, will bootstrap in background" Sep 10 00:17:14.224470 kubelet[2121]: E0910 00:17:14.224409 2121 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.124:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.124:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 10 00:17:14.225090 kubelet[2121]: I0910 00:17:14.225070 2121 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 10 00:17:14.232606 kubelet[2121]: E0910 00:17:14.232543 2121 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 10 00:17:14.232606 kubelet[2121]: I0910 00:17:14.232607 2121 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 10 00:17:14.236244 kubelet[2121]: I0910 00:17:14.236227 2121 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 10 00:17:14.236976 kubelet[2121]: I0910 00:17:14.236943 2121 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 10 00:17:14.237116 kubelet[2121]: I0910 00:17:14.236979 2121 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 10 00:17:14.237199 kubelet[2121]: I0910 00:17:14.237184 2121 topology_manager.go:138] "Creating topology manager with none policy" Sep 10 00:17:14.237199 kubelet[2121]: I0910 00:17:14.237195 2121 container_manager_linux.go:303] "Creating device plugin manager" Sep 10 00:17:14.237417 kubelet[2121]: I0910 00:17:14.237402 2121 state_mem.go:36] "Initialized new in-memory state store" Sep 10 00:17:14.241034 kubelet[2121]: I0910 00:17:14.240913 2121 kubelet.go:480] "Attempting to sync node with API server" Sep 10 00:17:14.241034 kubelet[2121]: I0910 00:17:14.240947 2121 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 10 00:17:14.241280 kubelet[2121]: I0910 00:17:14.240974 2121 kubelet.go:386] "Adding apiserver pod source" Sep 10 00:17:14.242909 kubelet[2121]: I0910 00:17:14.242223 2121 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 10 00:17:14.244168 kubelet[2121]: I0910 00:17:14.243280 2121 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 10 00:17:14.244168 kubelet[2121]: I0910 00:17:14.243976 2121 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 10 00:17:14.244168 kubelet[2121]: W0910 00:17:14.244085 2121 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Sep 10 00:17:14.248159 kubelet[2121]: E0910 00:17:14.247389 2121 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.124:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.124:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 10 00:17:14.248159 kubelet[2121]: E0910 00:17:14.247465 2121 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.124:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.124:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 10 00:17:14.250792 kubelet[2121]: I0910 00:17:14.250037 2121 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 10 00:17:14.250792 kubelet[2121]: I0910 00:17:14.250201 2121 server.go:1289] "Started kubelet" Sep 10 00:17:14.252030 kubelet[2121]: I0910 00:17:14.251624 2121 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 10 00:17:14.254891 kubelet[2121]: I0910 00:17:14.252117 2121 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 10 00:17:14.254891 kubelet[2121]: I0910 00:17:14.254469 2121 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 10 00:17:14.254891 kubelet[2121]: E0910 00:17:14.254562 2121 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:17:14.254891 kubelet[2121]: I0910 00:17:14.252167 2121 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 10 00:17:14.254891 kubelet[2121]: I0910 00:17:14.254889 2121 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 10 00:17:14.255131 kubelet[2121]: I0910 00:17:14.254977 2121 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 10 00:17:14.255131 kubelet[2121]: I0910 00:17:14.255016 2121 reconciler.go:26] "Reconciler: start to sync state" Sep 10 00:17:14.257789 kubelet[2121]: I0910 00:17:14.256973 2121 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 10 00:17:14.257789 kubelet[2121]: I0910 00:17:14.257050 2121 factory.go:223] Registration of the systemd container factory successfully Sep 10 00:17:14.257789 kubelet[2121]: I0910 00:17:14.257136 2121 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 10 00:17:14.258693 kubelet[2121]: E0910 00:17:14.258196 2121 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.124:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.124:6443: connect: connection refused" interval="200ms" Sep 10 00:17:14.258693 kubelet[2121]: E0910 00:17:14.258300 2121 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.124:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.124:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 10 00:17:14.258920 kubelet[2121]: I0910 00:17:14.258900 
2121 factory.go:223] Registration of the containerd container factory successfully Sep 10 00:17:14.260703 kubelet[2121]: I0910 00:17:14.259539 2121 server.go:317] "Adding debug handlers to kubelet server" Sep 10 00:17:14.264551 kubelet[2121]: E0910 00:17:14.258362 2121 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.124:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.124:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1863c3ae9837c35d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-10 00:17:14.250142557 +0000 UTC m=+0.796894361,LastTimestamp:2025-09-10 00:17:14.250142557 +0000 UTC m=+0.796894361,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 10 00:17:14.267427 kubelet[2121]: I0910 00:17:14.267396 2121 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 10 00:17:14.274668 kubelet[2121]: I0910 00:17:14.274652 2121 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 10 00:17:14.274832 kubelet[2121]: I0910 00:17:14.274819 2121 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 10 00:17:14.274896 kubelet[2121]: I0910 00:17:14.274888 2121 state_mem.go:36] "Initialized new in-memory state store" Sep 10 00:17:14.355103 kubelet[2121]: E0910 00:17:14.355057 2121 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:17:14.455379 kubelet[2121]: E0910 00:17:14.455271 2121 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:17:14.458923 kubelet[2121]: E0910 00:17:14.458881 2121 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.124:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.124:6443: connect: connection refused" interval="400ms" Sep 10 00:17:14.460886 kubelet[2121]: I0910 00:17:14.460837 2121 policy_none.go:49] "None policy: Start" Sep 10 00:17:14.463293 kubelet[2121]: I0910 00:17:14.462792 2121 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 10 00:17:14.463293 kubelet[2121]: I0910 00:17:14.462826 2121 state_mem.go:35] "Initializing new in-memory state store" Sep 10 00:17:14.465928 kubelet[2121]: I0910 00:17:14.465864 2121 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 10 00:17:14.465928 kubelet[2121]: I0910 00:17:14.465898 2121 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 10 00:17:14.465928 kubelet[2121]: I0910 00:17:14.465921 2121 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 10 00:17:14.465928 kubelet[2121]: I0910 00:17:14.465928 2121 kubelet.go:2436] "Starting kubelet main sync loop" Sep 10 00:17:14.466045 kubelet[2121]: E0910 00:17:14.465968 2121 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 10 00:17:14.468011 kubelet[2121]: E0910 00:17:14.467984 2121 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.124:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.124:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 10 00:17:14.471822 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 10 00:17:14.486872 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 10 00:17:14.492957 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 10 00:17:14.506712 kubelet[2121]: E0910 00:17:14.506490 2121 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 10 00:17:14.506712 kubelet[2121]: I0910 00:17:14.506681 2121 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 10 00:17:14.506712 kubelet[2121]: I0910 00:17:14.506693 2121 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 10 00:17:14.507324 kubelet[2121]: I0910 00:17:14.506959 2121 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 10 00:17:14.508408 kubelet[2121]: E0910 00:17:14.508076 2121 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 10 00:17:14.508408 kubelet[2121]: E0910 00:17:14.508114 2121 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 10 00:17:14.522964 kubelet[2121]: E0910 00:17:14.522874 2121 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.124:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.124:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1863c3ae9837c35d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-10 00:17:14.250142557 +0000 UTC m=+0.796894361,LastTimestamp:2025-09-10 00:17:14.250142557 +0000 UTC m=+0.796894361,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 10 00:17:14.578650 systemd[1]: Created slice kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice - libcontainer container kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice. 
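The slices created above are the kubelet's QoS hierarchy rendered into systemd cgroups: kubepods.slice at the top, kubepods-burstable.slice and kubepods-besteffort.slice beneath it (Guaranteed pods sit directly under kubepods.slice), and one slice per pod whose hex suffix is the pod UID; d75e6f6978d9f275ea19380916c9cccd belongs to the kube-scheduler static pod, as the volume lines below show. The layout is visible directly from systemd:

    systemd-cgls --no-pager /kubepods.slice
    systemctl list-units --type=slice 'kubepods*'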
Sep 10 00:17:14.606724 kubelet[2121]: E0910 00:17:14.606546 2121 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 10 00:17:14.609271 kubelet[2121]: I0910 00:17:14.609236 2121 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 10 00:17:14.609774 kubelet[2121]: E0910 00:17:14.609718 2121 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.124:6443/api/v1/nodes\": dial tcp 10.0.0.124:6443: connect: connection refused" node="localhost"
Sep 10 00:17:14.611012 systemd[1]: Created slice kubepods-burstable-pode3234bef5d210ad45c8d23eea220b87a.slice - libcontainer container kubepods-burstable-pode3234bef5d210ad45c8d23eea220b87a.slice.
Sep 10 00:17:14.613223 kubelet[2121]: E0910 00:17:14.612738 2121 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 10 00:17:14.614011 systemd[1]: Created slice kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice - libcontainer container kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice.
Sep 10 00:17:14.616125 kubelet[2121]: E0910 00:17:14.616089 2121 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 10 00:17:14.655484 kubelet[2121]: I0910 00:17:14.655403 2121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 10 00:17:14.655484 kubelet[2121]: I0910 00:17:14.655444 2121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 10 00:17:14.655484 kubelet[2121]: I0910 00:17:14.655460 2121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 10 00:17:14.655807 kubelet[2121]: I0910 00:17:14.655654 2121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost"
Sep 10 00:17:14.655807 kubelet[2121]: I0910 00:17:14.655678 2121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 10 00:17:14.655807 kubelet[2121]: I0910 00:17:14.655696 2121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 10 00:17:14.655807 kubelet[2121]: I0910 00:17:14.655708 2121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e3234bef5d210ad45c8d23eea220b87a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e3234bef5d210ad45c8d23eea220b87a\") " pod="kube-system/kube-apiserver-localhost"
Sep 10 00:17:14.655807 kubelet[2121]: I0910 00:17:14.655720 2121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e3234bef5d210ad45c8d23eea220b87a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e3234bef5d210ad45c8d23eea220b87a\") " pod="kube-system/kube-apiserver-localhost"
Sep 10 00:17:14.655927 kubelet[2121]: I0910 00:17:14.655733 2121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e3234bef5d210ad45c8d23eea220b87a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e3234bef5d210ad45c8d23eea220b87a\") " pod="kube-system/kube-apiserver-localhost"
Sep 10 00:17:14.811895 kubelet[2121]: I0910 00:17:14.811803 2121 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 10 00:17:14.812924 kubelet[2121]: E0910 00:17:14.812885 2121 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.124:6443/api/v1/nodes\": dial tcp 10.0.0.124:6443: connect: connection refused" node="localhost"
Sep 10 00:17:14.860214 kubelet[2121]: E0910 00:17:14.860161 2121 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.124:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.124:6443: connect: connection refused" interval="800ms"
Sep 10 00:17:14.911006 kubelet[2121]: E0910 00:17:14.910939 2121 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:17:14.911653 containerd[1448]: time="2025-09-10T00:17:14.911615037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,}"
Sep 10 00:17:14.913895 kubelet[2121]: E0910 00:17:14.913877 2121 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:17:14.914227 containerd[1448]: time="2025-09-10T00:17:14.914201277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e3234bef5d210ad45c8d23eea220b87a,Namespace:kube-system,Attempt:0,}"
Sep 10 00:17:14.917147 kubelet[2121]: E0910 00:17:14.916964 2121 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:17:14.917277 containerd[1448]: time="2025-09-10T00:17:14.917249957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,}"
Sep 10 00:17:15.214728 kubelet[2121]: I0910 00:17:15.214703 2121 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 10 00:17:15.215209 kubelet[2121]: E0910 00:17:15.215178 2121 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.124:6443/api/v1/nodes\": dial tcp 10.0.0.124:6443: connect: connection refused" node="localhost"
Sep 10 00:17:15.221799 kubelet[2121]: E0910 00:17:15.221728 2121 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.124:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.124:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Sep 10 00:17:15.253980 kubelet[2121]: E0910 00:17:15.253939 2121 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.124:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.124:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Sep 10 00:17:15.377458 kubelet[2121]: E0910 00:17:15.377412 2121 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.124:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.124:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Sep 10 00:17:15.408023 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1679737024.mount: Deactivated successfully.
Sep 10 00:17:15.414085 containerd[1448]: time="2025-09-10T00:17:15.414048437Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 10 00:17:15.414915 containerd[1448]: time="2025-09-10T00:17:15.414892677Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 10 00:17:15.415354 containerd[1448]: time="2025-09-10T00:17:15.415324917Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 10 00:17:15.415971 containerd[1448]: time="2025-09-10T00:17:15.415949477Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175"
Sep 10 00:17:15.416662 containerd[1448]: time="2025-09-10T00:17:15.416637637Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 10 00:17:15.417406 containerd[1448]: time="2025-09-10T00:17:15.417384117Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 10 00:17:15.417948 containerd[1448]: time="2025-09-10T00:17:15.417925357Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 10 00:17:15.421851 containerd[1448]: time="2025-09-10T00:17:15.421417557Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 507.15404ms"
Sep 10 00:17:15.421933 containerd[1448]: time="2025-09-10T00:17:15.421850557Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 10 00:17:15.423396 containerd[1448]: time="2025-09-10T00:17:15.423362477Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 506.0518ms"
Sep 10 00:17:15.425174 containerd[1448]: time="2025-09-10T00:17:15.425136517Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 513.44692ms"
Sep 10 00:17:15.521231 containerd[1448]: time="2025-09-10T00:17:15.520616437Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 10 00:17:15.521231 containerd[1448]: time="2025-09-10T00:17:15.520686917Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 10 00:17:15.521231 containerd[1448]: time="2025-09-10T00:17:15.520701677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:17:15.521902 containerd[1448]: time="2025-09-10T00:17:15.521808037Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:17:15.522448 containerd[1448]: time="2025-09-10T00:17:15.522053557Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 10 00:17:15.522448 containerd[1448]: time="2025-09-10T00:17:15.522095757Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 10 00:17:15.522448 containerd[1448]: time="2025-09-10T00:17:15.522110317Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:17:15.522448 containerd[1448]: time="2025-09-10T00:17:15.521778317Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 10 00:17:15.522448 containerd[1448]: time="2025-09-10T00:17:15.522045437Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 10 00:17:15.522448 containerd[1448]: time="2025-09-10T00:17:15.522058317Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:17:15.522448 containerd[1448]: time="2025-09-10T00:17:15.522184557Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:17:15.522600 containerd[1448]: time="2025-09-10T00:17:15.522533597Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:17:15.549940 systemd[1]: Started cri-containerd-24965866e03b29dc8b55b680f95b74987767be8ca8a6492f340c73a3bfa45c41.scope - libcontainer container 24965866e03b29dc8b55b680f95b74987767be8ca8a6492f340c73a3bfa45c41.
Sep 10 00:17:15.551416 systemd[1]: Started cri-containerd-ada566fc8b2cdbbd6ea16f7a56b8c2e2281473ae92f15602d6850fbc1e69f1da.scope - libcontainer container ada566fc8b2cdbbd6ea16f7a56b8c2e2281473ae92f15602d6850fbc1e69f1da.
Sep 10 00:17:15.552798 systemd[1]: Started cri-containerd-f4f84d17f99a34a71307d1700d83e6706d3454d4e244ded4607be2df37f6a0b8.scope - libcontainer container f4f84d17f99a34a71307d1700d83e6706d3454d4e244ded4607be2df37f6a0b8.
Sep 10 00:17:15.583160 containerd[1448]: time="2025-09-10T00:17:15.583117277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e3234bef5d210ad45c8d23eea220b87a,Namespace:kube-system,Attempt:0,} returns sandbox id \"f4f84d17f99a34a71307d1700d83e6706d3454d4e244ded4607be2df37f6a0b8\""
Sep 10 00:17:15.585145 kubelet[2121]: E0910 00:17:15.584899 2121 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:17:15.588358 containerd[1448]: time="2025-09-10T00:17:15.588321157Z" level=info msg="CreateContainer within sandbox \"f4f84d17f99a34a71307d1700d83e6706d3454d4e244ded4607be2df37f6a0b8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 10 00:17:15.590603 containerd[1448]: time="2025-09-10T00:17:15.590568397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,} returns sandbox id \"ada566fc8b2cdbbd6ea16f7a56b8c2e2281473ae92f15602d6850fbc1e69f1da\""
Sep 10 00:17:15.592020 kubelet[2121]: E0910 00:17:15.591118 2121 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:17:15.595166 containerd[1448]: time="2025-09-10T00:17:15.595106197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,} returns sandbox id \"24965866e03b29dc8b55b680f95b74987767be8ca8a6492f340c73a3bfa45c41\""
Sep 10 00:17:15.595769 kubelet[2121]: E0910 00:17:15.595739 2121 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:17:15.596273 containerd[1448]: time="2025-09-10T00:17:15.596223717Z" level=info msg="CreateContainer within sandbox \"ada566fc8b2cdbbd6ea16f7a56b8c2e2281473ae92f15602d6850fbc1e69f1da\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 10 00:17:15.599002 containerd[1448]: time="2025-09-10T00:17:15.598968037Z" level=info msg="CreateContainer within sandbox \"24965866e03b29dc8b55b680f95b74987767be8ca8a6492f340c73a3bfa45c41\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 10 00:17:15.604038 containerd[1448]: time="2025-09-10T00:17:15.603985957Z" level=info msg="CreateContainer within sandbox \"f4f84d17f99a34a71307d1700d83e6706d3454d4e244ded4607be2df37f6a0b8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e24b1a58904ea2f01c727442106649c99fe0790142b5642c999a79312bb4da83\""
Sep 10 00:17:15.604612 containerd[1448]: time="2025-09-10T00:17:15.604564757Z" level=info msg="StartContainer for \"e24b1a58904ea2f01c727442106649c99fe0790142b5642c999a79312bb4da83\""
Sep 10 00:17:15.612738 containerd[1448]: time="2025-09-10T00:17:15.612706037Z" level=info msg="CreateContainer within sandbox \"24965866e03b29dc8b55b680f95b74987767be8ca8a6492f340c73a3bfa45c41\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e4448d9153b803b9a065cfd39fb7989871084b1a369f8c9fbabd361cba2eb67a\""
Sep 10 00:17:15.613321 containerd[1448]: time="2025-09-10T00:17:15.613294757Z" level=info msg="StartContainer for \"e4448d9153b803b9a065cfd39fb7989871084b1a369f8c9fbabd361cba2eb67a\""
Sep 10 00:17:15.617871 containerd[1448]: time="2025-09-10T00:17:15.617711357Z" level=info msg="CreateContainer within sandbox \"ada566fc8b2cdbbd6ea16f7a56b8c2e2281473ae92f15602d6850fbc1e69f1da\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a161634ecffff52b9071f10a15ec7e85fc4ee50ec6f94258f0237394c9415b64\""
Sep 10 00:17:15.618571 containerd[1448]: time="2025-09-10T00:17:15.618510997Z" level=info msg="StartContainer for \"a161634ecffff52b9071f10a15ec7e85fc4ee50ec6f94258f0237394c9415b64\""
Sep 10 00:17:15.628895 systemd[1]: Started cri-containerd-e24b1a58904ea2f01c727442106649c99fe0790142b5642c999a79312bb4da83.scope - libcontainer container e24b1a58904ea2f01c727442106649c99fe0790142b5642c999a79312bb4da83.
Sep 10 00:17:15.643913 systemd[1]: Started cri-containerd-e4448d9153b803b9a065cfd39fb7989871084b1a369f8c9fbabd361cba2eb67a.scope - libcontainer container e4448d9153b803b9a065cfd39fb7989871084b1a369f8c9fbabd361cba2eb67a.
Sep 10 00:17:15.647490 systemd[1]: Started cri-containerd-a161634ecffff52b9071f10a15ec7e85fc4ee50ec6f94258f0237394c9415b64.scope - libcontainer container a161634ecffff52b9071f10a15ec7e85fc4ee50ec6f94258f0237394c9415b64.
Sep 10 00:17:15.660872 kubelet[2121]: E0910 00:17:15.660772 2121 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.124:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.124:6443: connect: connection refused" interval="1.6s"
Sep 10 00:17:15.682149 containerd[1448]: time="2025-09-10T00:17:15.682102557Z" level=info msg="StartContainer for \"e24b1a58904ea2f01c727442106649c99fe0790142b5642c999a79312bb4da83\" returns successfully"
Sep 10 00:17:15.682326 containerd[1448]: time="2025-09-10T00:17:15.682213517Z" level=info msg="StartContainer for \"e4448d9153b803b9a065cfd39fb7989871084b1a369f8c9fbabd361cba2eb67a\" returns successfully"
Sep 10 00:17:15.685485 containerd[1448]: time="2025-09-10T00:17:15.685384957Z" level=info msg="StartContainer for \"a161634ecffff52b9071f10a15ec7e85fc4ee50ec6f94258f0237394c9415b64\" returns successfully"
Sep 10 00:17:16.018167 kubelet[2121]: I0910 00:17:16.018133 2121 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 10 00:17:16.471918 kubelet[2121]: E0910 00:17:16.471891 2121 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 10 00:17:16.472028 kubelet[2121]: E0910 00:17:16.472001 2121 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:17:16.473603 kubelet[2121]: E0910 00:17:16.473392 2121 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 10 00:17:16.473603 kubelet[2121]: E0910 00:17:16.473491 2121 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:17:16.474848 kubelet[2121]: E0910 00:17:16.474666 2121 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 10 00:17:16.474848 kubelet[2121]: E0910 00:17:16.474772 2121 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:17:17.273405 kubelet[2121]: E0910 00:17:17.273364 2121 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Sep 10 00:17:17.418341 kubelet[2121]: I0910 00:17:17.418284 2121 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Sep 10 00:17:17.418545 kubelet[2121]: E0910 00:17:17.418331 2121 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Sep 10 00:17:17.428883 kubelet[2121]: E0910 00:17:17.428851 2121 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 10 00:17:17.481230 kubelet[2121]: E0910 00:17:17.481026 2121 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 10 00:17:17.481230 kubelet[2121]: E0910 00:17:17.481062 2121 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 10 00:17:17.481230 kubelet[2121]: E0910 00:17:17.481151 2121 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:17:17.481230 kubelet[2121]: E0910 00:17:17.481155 2121 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:17:17.529902 kubelet[2121]: E0910 00:17:17.529777 2121 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 10 00:17:17.629987 kubelet[2121]: E0910 00:17:17.629908 2121 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 10 00:17:17.755974 kubelet[2121]: I0910 00:17:17.755487 2121 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 10 00:17:17.760110 kubelet[2121]: E0910 00:17:17.760079 2121 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Sep 10 00:17:17.760208 kubelet[2121]: I0910 00:17:17.760197 2121 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 10 00:17:17.761637 kubelet[2121]: E0910 00:17:17.761615 2121 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Sep 10 00:17:17.761852 kubelet[2121]: I0910 00:17:17.761730 2121 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 10 00:17:17.763244 kubelet[2121]: E0910 00:17:17.763217 2121 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Sep 10 00:17:18.244482 kubelet[2121]: I0910 00:17:18.244456 2121 apiserver.go:52] "Watching apiserver"
Sep 10 00:17:18.255596 kubelet[2121]: I0910 00:17:18.255569 2121 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 10 00:17:19.180780 kubelet[2121]: I0910 00:17:19.180552 2121 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 10 00:17:19.185358 kubelet[2121]: E0910 00:17:19.185329 2121 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:17:19.398123 systemd[1]: Reloading requested from client PID 2410 ('systemctl') (unit session-7.scope)...
Sep 10 00:17:19.398138 systemd[1]: Reloading...
Sep 10 00:17:19.462803 zram_generator::config[2452]: No configuration found.
Sep 10 00:17:19.483860 kubelet[2121]: E0910 00:17:19.483832 2121 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:17:19.548622 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 10 00:17:19.617592 systemd[1]: Reloading finished in 219 ms.
Sep 10 00:17:19.654310 kubelet[2121]: I0910 00:17:19.654235 2121 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 10 00:17:19.654340 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 10 00:17:19.672723 systemd[1]: kubelet.service: Deactivated successfully.
Sep 10 00:17:19.673010 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 10 00:17:19.673067 systemd[1]: kubelet.service: Consumed 1.135s CPU time, 130.5M memory peak, 0B memory swap peak.
Sep 10 00:17:19.683121 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 10 00:17:19.785712 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 10 00:17:19.789308 (kubelet)[2491]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 10 00:17:19.821237 kubelet[2491]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 10 00:17:19.821237 kubelet[2491]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 10 00:17:19.821237 kubelet[2491]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 10 00:17:19.821237 kubelet[2491]: I0910 00:17:19.821118 2491 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 10 00:17:19.827620 kubelet[2491]: I0910 00:17:19.827572 2491 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Sep 10 00:17:19.827620 kubelet[2491]: I0910 00:17:19.827604 2491 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 10 00:17:19.827815 kubelet[2491]: I0910 00:17:19.827800 2491 server.go:956] "Client rotation is on, will bootstrap in background"
Sep 10 00:17:19.828979 kubelet[2491]: I0910 00:17:19.828957 2491 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Sep 10 00:17:19.831051 kubelet[2491]: I0910 00:17:19.831025 2491 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 10 00:17:19.834050 kubelet[2491]: E0910 00:17:19.834020 2491 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 10 00:17:19.834050 kubelet[2491]: I0910 00:17:19.834046 2491 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 10 00:17:19.837592 kubelet[2491]: I0910 00:17:19.837555 2491 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 10 00:17:19.837792 kubelet[2491]: I0910 00:17:19.837767 2491 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 10 00:17:19.837917 kubelet[2491]: I0910 00:17:19.837792 2491 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 10 00:17:19.837990 kubelet[2491]: I0910 00:17:19.837927 2491 topology_manager.go:138] "Creating topology manager with none policy"
Sep 10 00:17:19.837990 kubelet[2491]: I0910 00:17:19.837936 2491 container_manager_linux.go:303] "Creating device plugin manager"
Sep 10 00:17:19.837990 kubelet[2491]: I0910 00:17:19.837973 2491 state_mem.go:36] "Initialized new in-memory state store"
Sep 10 00:17:19.838115 kubelet[2491]: I0910 00:17:19.838104 2491 kubelet.go:480] "Attempting to sync node with API server"
Sep 10 00:17:19.838137 kubelet[2491]: I0910 00:17:19.838118 2491 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 10 00:17:19.838158 kubelet[2491]: I0910 00:17:19.838140 2491 kubelet.go:386] "Adding apiserver pod source"
Sep 10 00:17:19.838158 kubelet[2491]: I0910 00:17:19.838152 2491 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 10 00:17:19.839225 kubelet[2491]: I0910 00:17:19.838802 2491 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Sep 10 00:17:19.839791 kubelet[2491]: I0910 00:17:19.839368 2491 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Sep 10 00:17:19.844805 kubelet[2491]: I0910 00:17:19.844202 2491 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 10 00:17:19.844805 kubelet[2491]: I0910 00:17:19.844243 2491 server.go:1289] "Started kubelet"
Sep 10 00:17:19.845568 kubelet[2491]: I0910 00:17:19.845508 2491 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 10 00:17:19.846990 kubelet[2491]: I0910 00:17:19.846942 2491 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Sep 10 00:17:19.847708 kubelet[2491]: I0910 00:17:19.847687 2491 server.go:317] "Adding debug handlers to kubelet server"
Sep 10 00:17:19.851325 kubelet[2491]: I0910 00:17:19.851259 2491 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 10 00:17:19.851786 kubelet[2491]: I0910 00:17:19.851449 2491 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 10 00:17:19.851786 kubelet[2491]: I0910 00:17:19.851611 2491 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 10 00:17:19.857219 kubelet[2491]: E0910 00:17:19.857189 2491 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 10 00:17:19.858322 kubelet[2491]: I0910 00:17:19.857718 2491 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 10 00:17:19.858322 kubelet[2491]: I0910 00:17:19.857863 2491 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 10 00:17:19.858322 kubelet[2491]: I0910 00:17:19.857990 2491 reconciler.go:26] "Reconciler: start to sync state"
Sep 10 00:17:19.859205 kubelet[2491]: I0910 00:17:19.859179 2491 factory.go:223] Registration of the systemd container factory successfully
Sep 10 00:17:19.859399 kubelet[2491]: I0910 00:17:19.859371 2491 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 10 00:17:19.862254 kubelet[2491]: I0910 00:17:19.862229 2491 factory.go:223] Registration of the containerd container factory successfully
Sep 10 00:17:19.870635 kubelet[2491]: I0910 00:17:19.870577 2491 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Sep 10 00:17:19.872148 kubelet[2491]: I0910 00:17:19.872119 2491 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Sep 10 00:17:19.872148 kubelet[2491]: I0910 00:17:19.872144 2491 status_manager.go:230] "Starting to sync pod status with apiserver"
Sep 10 00:17:19.872237 kubelet[2491]: I0910 00:17:19.872161 2491 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 10 00:17:19.872237 kubelet[2491]: I0910 00:17:19.872169 2491 kubelet.go:2436] "Starting kubelet main sync loop"
Sep 10 00:17:19.872237 kubelet[2491]: E0910 00:17:19.872214 2491 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 10 00:17:19.890512 kubelet[2491]: I0910 00:17:19.890477 2491 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 10 00:17:19.890512 kubelet[2491]: I0910 00:17:19.890499 2491 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 10 00:17:19.890512 kubelet[2491]: I0910 00:17:19.890521 2491 state_mem.go:36] "Initialized new in-memory state store"
Sep 10 00:17:19.890698 kubelet[2491]: I0910 00:17:19.890672 2491 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 10 00:17:19.890698 kubelet[2491]: I0910 00:17:19.890689 2491 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 10 00:17:19.890779 kubelet[2491]: I0910 00:17:19.890707 2491 policy_none.go:49] "None policy: Start"
Sep 10 00:17:19.890779 kubelet[2491]: I0910 00:17:19.890715 2491 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 10 00:17:19.890779 kubelet[2491]: I0910 00:17:19.890723 2491 state_mem.go:35] "Initializing new in-memory state store"
Sep 10 00:17:19.890842 kubelet[2491]: I0910 00:17:19.890820 2491 state_mem.go:75] "Updated machine memory state"
Sep 10 00:17:19.894639 kubelet[2491]: E0910 00:17:19.894606 2491 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Sep 10 00:17:19.894829 kubelet[2491]: I0910 00:17:19.894811 2491 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 10 00:17:19.894865 kubelet[2491]: I0910 00:17:19.894826 2491 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 10 00:17:19.895056 kubelet[2491]: I0910 00:17:19.895029 2491 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 10 00:17:19.897544 kubelet[2491]: E0910 00:17:19.896468 2491 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 10 00:17:19.973278 kubelet[2491]: I0910 00:17:19.973078 2491 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 10 00:17:19.973278 kubelet[2491]: I0910 00:17:19.973165 2491 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 10 00:17:19.973278 kubelet[2491]: I0910 00:17:19.973193 2491 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 10 00:17:19.978508 kubelet[2491]: E0910 00:17:19.978479 2491 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Sep 10 00:17:20.001278 kubelet[2491]: I0910 00:17:20.001238 2491 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 10 00:17:20.008168 kubelet[2491]: I0910 00:17:20.007813 2491 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Sep 10 00:17:20.008168 kubelet[2491]: I0910 00:17:20.007888 2491 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Sep 10 00:17:20.159537 kubelet[2491]: I0910 00:17:20.159420 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 10 00:17:20.159537 kubelet[2491]: I0910 00:17:20.159463 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e3234bef5d210ad45c8d23eea220b87a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e3234bef5d210ad45c8d23eea220b87a\") " pod="kube-system/kube-apiserver-localhost"
Sep 10 00:17:20.159537 kubelet[2491]: I0910 00:17:20.159482 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 10 00:17:20.159537 kubelet[2491]: I0910 00:17:20.159498 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 10 00:17:20.159537 kubelet[2491]: I0910 00:17:20.159515 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost"
Sep 10 00:17:20.159728 kubelet[2491]: I0910 00:17:20.159529 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e3234bef5d210ad45c8d23eea220b87a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e3234bef5d210ad45c8d23eea220b87a\") " pod="kube-system/kube-apiserver-localhost"
Sep 10 00:17:20.159728 kubelet[2491]: I0910 00:17:20.159542 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e3234bef5d210ad45c8d23eea220b87a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e3234bef5d210ad45c8d23eea220b87a\") " pod="kube-system/kube-apiserver-localhost"
Sep 10 00:17:20.159728 kubelet[2491]: I0910 00:17:20.159558 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 10 00:17:20.159728 kubelet[2491]: I0910 00:17:20.159572 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 10 00:17:20.278790 kubelet[2491]: E0910 00:17:20.278507 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:17:20.278790 kubelet[2491]: E0910 00:17:20.278631 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:17:20.279480 kubelet[2491]: E0910 00:17:20.278775 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:17:20.839169 kubelet[2491]: I0910 00:17:20.838945 2491 apiserver.go:52] "Watching apiserver"
Sep 10 00:17:20.858335 kubelet[2491]: I0910 00:17:20.858302 2491 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 10 00:17:20.884039 kubelet[2491]: E0910 00:17:20.884003 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:17:20.884039 kubelet[2491]: E0910 00:17:20.884027 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:17:20.884388 kubelet[2491]: I0910 00:17:20.884370 2491 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 10 00:17:20.890856 kubelet[2491]: E0910 00:17:20.890822 2491 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Sep 10 00:17:20.890988 kubelet[2491]: E0910 00:17:20.890969 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:17:20.904294 kubelet[2491]: I0910 00:17:20.904222 2491 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.9042104370000001 podStartE2EDuration="1.904210437s" podCreationTimestamp="2025-09-10 00:17:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:17:20.904141437 +0000 UTC m=+1.111896521" watchObservedRunningTime="2025-09-10 00:17:20.904210437 +0000 UTC m=+1.111965521"
Sep 10 00:17:20.911667 kubelet[2491]: I0910 00:17:20.911608 2491 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.911597597 podStartE2EDuration="1.911597597s" podCreationTimestamp="2025-09-10 00:17:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:17:20.911125037 +0000 UTC m=+1.118880121" watchObservedRunningTime="2025-09-10 00:17:20.911597597 +0000 UTC m=+1.119352681"
Sep 10 00:17:20.918878 kubelet[2491]: I0910 00:17:20.918804 2491 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.918791677 podStartE2EDuration="1.918791677s" podCreationTimestamp="2025-09-10 00:17:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:17:20.918665277 +0000 UTC m=+1.126420361" watchObservedRunningTime="2025-09-10 00:17:20.918791677 +0000 UTC m=+1.126546761"
Sep 10 00:17:21.885452 kubelet[2491]: E0910 00:17:21.885380 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:17:21.885452 kubelet[2491]: E0910 00:17:21.885454 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:17:25.416811 kubelet[2491]: I0910 00:17:25.416775 2491 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 10 00:17:25.417194 containerd[1448]: time="2025-09-10T00:17:25.417122858Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 10 00:17:25.417387 kubelet[2491]: I0910 00:17:25.417289 2491 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 10 00:17:26.101550 systemd[1]: Created slice kubepods-besteffort-podca3c08ef_d2e7_480a_a13f_782ad16f9231.slice - libcontainer container kubepods-besteffort-podca3c08ef_d2e7_480a_a13f_782ad16f9231.slice.
Sep 10 00:17:26.199093 kubelet[2491]: I0910 00:17:26.199039 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ca3c08ef-d2e7-480a-a13f-782ad16f9231-kube-proxy\") pod \"kube-proxy-k5wql\" (UID: \"ca3c08ef-d2e7-480a-a13f-782ad16f9231\") " pod="kube-system/kube-proxy-k5wql"
Sep 10 00:17:26.199093 kubelet[2491]: I0910 00:17:26.199077 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ca3c08ef-d2e7-480a-a13f-782ad16f9231-xtables-lock\") pod \"kube-proxy-k5wql\" (UID: \"ca3c08ef-d2e7-480a-a13f-782ad16f9231\") " pod="kube-system/kube-proxy-k5wql"
Sep 10 00:17:26.199093 kubelet[2491]: I0910 00:17:26.199095 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ca3c08ef-d2e7-480a-a13f-782ad16f9231-lib-modules\") pod \"kube-proxy-k5wql\" (UID: \"ca3c08ef-d2e7-480a-a13f-782ad16f9231\") " pod="kube-system/kube-proxy-k5wql"
Sep 10 00:17:26.199271 kubelet[2491]: I0910 00:17:26.199126 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5nsq\" (UniqueName: \"kubernetes.io/projected/ca3c08ef-d2e7-480a-a13f-782ad16f9231-kube-api-access-b5nsq\") pod \"kube-proxy-k5wql\" (UID: \"ca3c08ef-d2e7-480a-a13f-782ad16f9231\") " pod="kube-system/kube-proxy-k5wql"
Sep 10 00:17:26.412934 kubelet[2491]: E0910 00:17:26.412831 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:17:26.413485 containerd[1448]: time="2025-09-10T00:17:26.413436118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k5wql,Uid:ca3c08ef-d2e7-480a-a13f-782ad16f9231,Namespace:kube-system,Attempt:0,}"
Sep 10 00:17:26.431578 containerd[1448]: time="2025-09-10T00:17:26.431497127Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 10 00:17:26.432364 containerd[1448]: time="2025-09-10T00:17:26.432093969Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 10 00:17:26.432364 containerd[1448]: time="2025-09-10T00:17:26.432122769Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:17:26.432973 containerd[1448]: time="2025-09-10T00:17:26.432897051Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:17:26.463944 systemd[1]: Started cri-containerd-3bf04feb36065922016225227fd9b21ed70636241f7599292d66c7c50023b859.scope - libcontainer container 3bf04feb36065922016225227fd9b21ed70636241f7599292d66c7c50023b859.
Sep 10 00:17:26.479816 containerd[1448]: time="2025-09-10T00:17:26.479785141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k5wql,Uid:ca3c08ef-d2e7-480a-a13f-782ad16f9231,Namespace:kube-system,Attempt:0,} returns sandbox id \"3bf04feb36065922016225227fd9b21ed70636241f7599292d66c7c50023b859\""
Sep 10 00:17:26.481107 kubelet[2491]: E0910 00:17:26.481068 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:17:26.489050 containerd[1448]: time="2025-09-10T00:17:26.489022486Z" level=info msg="CreateContainer within sandbox \"3bf04feb36065922016225227fd9b21ed70636241f7599292d66c7c50023b859\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 10 00:17:26.512320 containerd[1448]: time="2025-09-10T00:17:26.512280911Z" level=info msg="CreateContainer within sandbox \"3bf04feb36065922016225227fd9b21ed70636241f7599292d66c7c50023b859\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"89dc551206659ffeefaa3b7abf5de26e845586efaa8d97fd82b44a3fa19e4eff\""
Sep 10 00:17:26.513080 containerd[1448]: time="2025-09-10T00:17:26.513054113Z" level=info msg="StartContainer for \"89dc551206659ffeefaa3b7abf5de26e845586efaa8d97fd82b44a3fa19e4eff\""
Sep 10 00:17:26.541017 systemd[1]: Started cri-containerd-89dc551206659ffeefaa3b7abf5de26e845586efaa8d97fd82b44a3fa19e4eff.scope - libcontainer container 89dc551206659ffeefaa3b7abf5de26e845586efaa8d97fd82b44a3fa19e4eff.
Sep 10 00:17:26.568640 containerd[1448]: time="2025-09-10T00:17:26.568538626Z" level=info msg="StartContainer for \"89dc551206659ffeefaa3b7abf5de26e845586efaa8d97fd82b44a3fa19e4eff\" returns successfully"
Sep 10 00:17:26.670122 systemd[1]: Created slice kubepods-besteffort-pod95ac6241_fa50_4b4b_93cc_a4ccdd42c2d9.slice - libcontainer container kubepods-besteffort-pod95ac6241_fa50_4b4b_93cc_a4ccdd42c2d9.slice.
Sep 10 00:17:26.703455 kubelet[2491]: I0910 00:17:26.703423 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/95ac6241-fa50-4b4b-93cc-a4ccdd42c2d9-var-lib-calico\") pod \"tigera-operator-755d956888-llvkd\" (UID: \"95ac6241-fa50-4b4b-93cc-a4ccdd42c2d9\") " pod="tigera-operator/tigera-operator-755d956888-llvkd"
Sep 10 00:17:26.703557 kubelet[2491]: I0910 00:17:26.703462 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6r6g\" (UniqueName: \"kubernetes.io/projected/95ac6241-fa50-4b4b-93cc-a4ccdd42c2d9-kube-api-access-m6r6g\") pod \"tigera-operator-755d956888-llvkd\" (UID: \"95ac6241-fa50-4b4b-93cc-a4ccdd42c2d9\") " pod="tigera-operator/tigera-operator-755d956888-llvkd"
Sep 10 00:17:26.894122 kubelet[2491]: E0910 00:17:26.894098 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:17:26.902292 kubelet[2491]: E0910 00:17:26.902228 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:17:26.921529 kubelet[2491]: I0910 00:17:26.921429 2491 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-k5wql" podStartSLOduration=0.921415081 podStartE2EDuration="921.415081ms" podCreationTimestamp="2025-09-10 00:17:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:17:26.902479629 +0000 UTC m=+7.110234713" watchObservedRunningTime="2025-09-10 00:17:26.921415081 +0000 UTC m=+7.129170165"
Sep 10 00:17:26.973271 containerd[1448]: time="2025-09-10T00:17:26.973228144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-llvkd,Uid:95ac6241-fa50-4b4b-93cc-a4ccdd42c2d9,Namespace:tigera-operator,Attempt:0,}"
Sep 10 00:17:26.992802 containerd[1448]: time="2025-09-10T00:17:26.990949873Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 10 00:17:26.992802 containerd[1448]: time="2025-09-10T00:17:26.990996473Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 10 00:17:26.992802 containerd[1448]: time="2025-09-10T00:17:26.991006393Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:17:26.992802 containerd[1448]: time="2025-09-10T00:17:26.991084833Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:17:27.008899 systemd[1]: Started cri-containerd-8324ab29919b704441e88579664642a7d248046b139e1cda1dd002558f28df6e.scope - libcontainer container 8324ab29919b704441e88579664642a7d248046b139e1cda1dd002558f28df6e.
Sep 10 00:17:27.036464 containerd[1448]: time="2025-09-10T00:17:27.036421352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-llvkd,Uid:95ac6241-fa50-4b4b-93cc-a4ccdd42c2d9,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"8324ab29919b704441e88579664642a7d248046b139e1cda1dd002558f28df6e\""
Sep 10 00:17:27.037624 containerd[1448]: time="2025-09-10T00:17:27.037601876Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\""
Sep 10 00:17:27.898134 kubelet[2491]: E0910 00:17:27.898097 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:17:28.228005 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1109775694.mount: Deactivated successfully.
Sep 10 00:17:28.921440 containerd[1448]: time="2025-09-10T00:17:28.921388645Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:17:28.922343 containerd[1448]: time="2025-09-10T00:17:28.922131367Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365"
Sep 10 00:17:28.923766 containerd[1448]: time="2025-09-10T00:17:28.923058169Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:17:28.937433 containerd[1448]: time="2025-09-10T00:17:28.925061254Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:17:28.937516 containerd[1448]: time="2025-09-10T00:17:28.925923096Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 1.8882917s"
Sep 10 00:17:28.937543 containerd[1448]: time="2025-09-10T00:17:28.937522924Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\""
Sep 10 00:17:28.944193 containerd[1448]: time="2025-09-10T00:17:28.944151901Z" level=info msg="CreateContainer within sandbox \"8324ab29919b704441e88579664642a7d248046b139e1cda1dd002558f28df6e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 10 00:17:28.954026 containerd[1448]: time="2025-09-10T00:17:28.953926844Z" level=info msg="CreateContainer within sandbox \"8324ab29919b704441e88579664642a7d248046b139e1cda1dd002558f28df6e\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ed722e59b6393e4949063682b4cad9c77487e95f3c39e1c9b0111766918934e9\""
Sep 10 00:17:28.957833 containerd[1448]: time="2025-09-10T00:17:28.957700173Z" level=info msg="StartContainer for \"ed722e59b6393e4949063682b4cad9c77487e95f3c39e1c9b0111766918934e9\""
Sep 10 00:17:28.985911 systemd[1]: Started cri-containerd-ed722e59b6393e4949063682b4cad9c77487e95f3c39e1c9b0111766918934e9.scope - libcontainer container ed722e59b6393e4949063682b4cad9c77487e95f3c39e1c9b0111766918934e9.
Sep 10 00:17:29.037789 containerd[1448]: time="2025-09-10T00:17:29.037653642Z" level=info msg="StartContainer for \"ed722e59b6393e4949063682b4cad9c77487e95f3c39e1c9b0111766918934e9\" returns successfully"
Sep 10 00:17:29.796852 kubelet[2491]: E0910 00:17:29.796807 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:17:29.903544 kubelet[2491]: E0910 00:17:29.903149 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:17:29.925015 kubelet[2491]: I0910 00:17:29.924697 2491 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-llvkd" podStartSLOduration=2.021417564 podStartE2EDuration="3.924681061s" podCreationTimestamp="2025-09-10 00:17:26 +0000 UTC" firstStartedPulling="2025-09-10 00:17:27.037329075 +0000 UTC m=+7.245084159" lastFinishedPulling="2025-09-10 00:17:28.940592572 +0000 UTC m=+9.148347656" observedRunningTime="2025-09-10 00:17:29.913379315 +0000 UTC m=+10.121134399" watchObservedRunningTime="2025-09-10 00:17:29.924681061 +0000 UTC m=+10.132436145"
Sep 10 00:17:30.150301 kubelet[2491]: E0910 00:17:30.150155 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:17:30.919987 kubelet[2491]: E0910 00:17:30.919954 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:17:30.921798 kubelet[2491]: E0910 00:17:30.920637 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:17:31.017655 systemd[1]: cri-containerd-ed722e59b6393e4949063682b4cad9c77487e95f3c39e1c9b0111766918934e9.scope: Deactivated successfully.
Sep 10 00:17:31.043798 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ed722e59b6393e4949063682b4cad9c77487e95f3c39e1c9b0111766918934e9-rootfs.mount: Deactivated successfully.
Sep 10 00:17:31.063946 containerd[1448]: time="2025-09-10T00:17:31.061559210Z" level=info msg="shim disconnected" id=ed722e59b6393e4949063682b4cad9c77487e95f3c39e1c9b0111766918934e9 namespace=k8s.io
Sep 10 00:17:31.063946 containerd[1448]: time="2025-09-10T00:17:31.063944855Z" level=warning msg="cleaning up after shim disconnected" id=ed722e59b6393e4949063682b4cad9c77487e95f3c39e1c9b0111766918934e9 namespace=k8s.io
Sep 10 00:17:31.064381 containerd[1448]: time="2025-09-10T00:17:31.063959215Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 10 00:17:31.925286 kubelet[2491]: I0910 00:17:31.925226 2491 scope.go:117] "RemoveContainer" containerID="ed722e59b6393e4949063682b4cad9c77487e95f3c39e1c9b0111766918934e9"
Sep 10 00:17:31.930489 containerd[1448]: time="2025-09-10T00:17:31.930450468Z" level=info msg="CreateContainer within sandbox \"8324ab29919b704441e88579664642a7d248046b139e1cda1dd002558f28df6e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Sep 10 00:17:31.947439 containerd[1448]: time="2025-09-10T00:17:31.947400622Z" level=info msg="CreateContainer within sandbox \"8324ab29919b704441e88579664642a7d248046b139e1cda1dd002558f28df6e\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"12e9acd70844fbdb3b064bcfa5923a70be584c218329382810aa36222bff6857\""
Sep 10 00:17:31.948538 containerd[1448]: time="2025-09-10T00:17:31.947827943Z" level=info msg="StartContainer for \"12e9acd70844fbdb3b064bcfa5923a70be584c218329382810aa36222bff6857\""
Sep 10 00:17:31.974987 systemd[1]: Started cri-containerd-12e9acd70844fbdb3b064bcfa5923a70be584c218329382810aa36222bff6857.scope - libcontainer container 12e9acd70844fbdb3b064bcfa5923a70be584c218329382810aa36222bff6857.
Sep 10 00:17:32.008515 containerd[1448]: time="2025-09-10T00:17:32.008472423Z" level=info msg="StartContainer for \"12e9acd70844fbdb3b064bcfa5923a70be584c218329382810aa36222bff6857\" returns successfully"
Sep 10 00:17:34.317268 sudo[1620]: pam_unix(sudo:session): session closed for user root
Sep 10 00:17:34.319179 sshd[1617]: pam_unix(sshd:session): session closed for user core
Sep 10 00:17:34.322551 systemd[1]: sshd@6-10.0.0.124:22-10.0.0.1:50344.service: Deactivated successfully.
Sep 10 00:17:34.324319 systemd[1]: session-7.scope: Deactivated successfully.
Sep 10 00:17:34.324540 systemd[1]: session-7.scope: Consumed 6.499s CPU time, 155.0M memory peak, 0B memory swap peak.
Sep 10 00:17:34.325003 systemd-logind[1423]: Session 7 logged out. Waiting for processes to exit.
Sep 10 00:17:34.326065 systemd-logind[1423]: Removed session 7.
Sep 10 00:17:34.729851 update_engine[1425]: I20250910 00:17:34.729760 1425 update_attempter.cc:509] Updating boot flags...
Sep 10 00:17:34.749785 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2970)
Sep 10 00:17:34.810762 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2974)
Sep 10 00:17:42.919047 systemd[1]: Created slice kubepods-besteffort-pod2f69f1c0_a1fb_41fe_8011_2d5dedd274af.slice - libcontainer container kubepods-besteffort-pod2f69f1c0_a1fb_41fe_8011_2d5dedd274af.slice.
Sep 10 00:17:43.029055 kubelet[2491]: I0910 00:17:43.029000 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/2f69f1c0-a1fb-41fe-8011-2d5dedd274af-typha-certs\") pod \"calico-typha-7d9f8f5dd4-vzncs\" (UID: \"2f69f1c0-a1fb-41fe-8011-2d5dedd274af\") " pod="calico-system/calico-typha-7d9f8f5dd4-vzncs"
Sep 10 00:17:43.029055 kubelet[2491]: I0910 00:17:43.029055 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt9h4\" (UniqueName: \"kubernetes.io/projected/2f69f1c0-a1fb-41fe-8011-2d5dedd274af-kube-api-access-wt9h4\") pod \"calico-typha-7d9f8f5dd4-vzncs\" (UID: \"2f69f1c0-a1fb-41fe-8011-2d5dedd274af\") " pod="calico-system/calico-typha-7d9f8f5dd4-vzncs"
Sep 10 00:17:43.029477 kubelet[2491]: I0910 00:17:43.029077 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f69f1c0-a1fb-41fe-8011-2d5dedd274af-tigera-ca-bundle\") pod \"calico-typha-7d9f8f5dd4-vzncs\" (UID: \"2f69f1c0-a1fb-41fe-8011-2d5dedd274af\") " pod="calico-system/calico-typha-7d9f8f5dd4-vzncs"
Sep 10 00:17:43.055317 systemd[1]: Created slice kubepods-besteffort-pod8f580f50_424d_411a_9be0_b98a88688c47.slice - libcontainer container kubepods-besteffort-pod8f580f50_424d_411a_9be0_b98a88688c47.slice.
Sep 10 00:17:43.130980 kubelet[2491]: I0910 00:17:43.130945 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8f580f50-424d-411a-9be0-b98a88688c47-lib-modules\") pod \"calico-node-k999g\" (UID: \"8f580f50-424d-411a-9be0-b98a88688c47\") " pod="calico-system/calico-node-k999g"
Sep 10 00:17:43.131160 kubelet[2491]: I0910 00:17:43.131144 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8f580f50-424d-411a-9be0-b98a88688c47-xtables-lock\") pod \"calico-node-k999g\" (UID: \"8f580f50-424d-411a-9be0-b98a88688c47\") " pod="calico-system/calico-node-k999g"
Sep 10 00:17:43.131261 kubelet[2491]: I0910 00:17:43.131248 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/8f580f50-424d-411a-9be0-b98a88688c47-cni-net-dir\") pod \"calico-node-k999g\" (UID: \"8f580f50-424d-411a-9be0-b98a88688c47\") " pod="calico-system/calico-node-k999g"
Sep 10 00:17:43.131379 kubelet[2491]: I0910 00:17:43.131366 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8f580f50-424d-411a-9be0-b98a88688c47-node-certs\") pod \"calico-node-k999g\" (UID: \"8f580f50-424d-411a-9be0-b98a88688c47\") " pod="calico-system/calico-node-k999g"
Sep 10 00:17:43.131466 kubelet[2491]: I0910 00:17:43.131436 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8f580f50-424d-411a-9be0-b98a88688c47-var-lib-calico\") pod \"calico-node-k999g\" (UID: \"8f580f50-424d-411a-9be0-b98a88688c47\") " pod="calico-system/calico-node-k999g"
Sep 10 00:17:43.131556 kubelet[2491]: I0910 00:17:43.131522 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8f580f50-424d-411a-9be0-b98a88688c47-flexvol-driver-host\") pod \"calico-node-k999g\" (UID: \"8f580f50-424d-411a-9be0-b98a88688c47\") " pod="calico-system/calico-node-k999g"
Sep 10 00:17:43.131593 kubelet[2491]: I0910 00:17:43.131561 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4jcw\" (UniqueName: \"kubernetes.io/projected/8f580f50-424d-411a-9be0-b98a88688c47-kube-api-access-x4jcw\") pod \"calico-node-k999g\" (UID: \"8f580f50-424d-411a-9be0-b98a88688c47\") " pod="calico-system/calico-node-k999g"
Sep 10 00:17:43.131593 kubelet[2491]: I0910 00:17:43.131579 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8f580f50-424d-411a-9be0-b98a88688c47-policysync\") pod \"calico-node-k999g\" (UID: \"8f580f50-424d-411a-9be0-b98a88688c47\") " pod="calico-system/calico-node-k999g"
Sep 10 00:17:43.131632 kubelet[2491]: I0910 00:17:43.131611 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f580f50-424d-411a-9be0-b98a88688c47-tigera-ca-bundle\") pod \"calico-node-k999g\" (UID: \"8f580f50-424d-411a-9be0-b98a88688c47\") " pod="calico-system/calico-node-k999g"
Sep 10 00:17:43.131632 kubelet[2491]: I0910 00:17:43.131626 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8f580f50-424d-411a-9be0-b98a88688c47-var-run-calico\") pod \"calico-node-k999g\" (UID: \"8f580f50-424d-411a-9be0-b98a88688c47\") " pod="calico-system/calico-node-k999g"
Sep 10 00:17:43.131674 kubelet[2491]: I0910 00:17:43.131640 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8f580f50-424d-411a-9be0-b98a88688c47-cni-bin-dir\") pod \"calico-node-k999g\" (UID: \"8f580f50-424d-411a-9be0-b98a88688c47\") " pod="calico-system/calico-node-k999g"
Sep 10 00:17:43.131674 kubelet[2491]: I0910 00:17:43.131657 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/8f580f50-424d-411a-9be0-b98a88688c47-cni-log-dir\") pod \"calico-node-k999g\" (UID: \"8f580f50-424d-411a-9be0-b98a88688c47\") " pod="calico-system/calico-node-k999g"
Sep 10 00:17:43.223283 kubelet[2491]: E0910 00:17:43.223137 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:17:43.225960 containerd[1448]: time="2025-09-10T00:17:43.225536751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7d9f8f5dd4-vzncs,Uid:2f69f1c0-a1fb-41fe-8011-2d5dedd274af,Namespace:calico-system,Attempt:0,}"
Sep 10 00:17:43.238004 kubelet[2491]: E0910 00:17:43.237973 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 00:17:43.238196 kubelet[2491]: W0910 00:17:43.238173 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 00:17:43.241829 kubelet[2491]: E0910 00:17:43.240728 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:17:43.247355 kubelet[2491]: E0910 00:17:43.247331 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:43.247355 kubelet[2491]: W0910 00:17:43.247349 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:43.247478 kubelet[2491]: E0910 00:17:43.247368 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:17:43.250472 containerd[1448]: time="2025-09-10T00:17:43.249932653Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:17:43.250472 containerd[1448]: time="2025-09-10T00:17:43.249992773Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:17:43.250472 containerd[1448]: time="2025-09-10T00:17:43.250003893Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:17:43.250472 containerd[1448]: time="2025-09-10T00:17:43.250096773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:17:43.270216 systemd[1]: Started cri-containerd-d6cbc4581ca550db50dc33daf648b54ebf831154f39d134b3401b51c555ae0b1.scope - libcontainer container d6cbc4581ca550db50dc33daf648b54ebf831154f39d134b3401b51c555ae0b1. Sep 10 00:17:43.295430 kubelet[2491]: E0910 00:17:43.295051 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-65xbg" podUID="cc428a56-c099-4362-852b-dab9e5d9f7b7" Sep 10 00:17:43.308503 containerd[1448]: time="2025-09-10T00:17:43.308466347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7d9f8f5dd4-vzncs,Uid:2f69f1c0-a1fb-41fe-8011-2d5dedd274af,Namespace:calico-system,Attempt:0,} returns sandbox id \"d6cbc4581ca550db50dc33daf648b54ebf831154f39d134b3401b51c555ae0b1\"" Sep 10 00:17:43.309097 kubelet[2491]: E0910 00:17:43.309073 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:43.309097 kubelet[2491]: W0910 00:17:43.309095 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:43.309198 kubelet[2491]: E0910 00:17:43.309116 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:17:43.309490 kubelet[2491]: E0910 00:17:43.309476 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:43.309969 kubelet[2491]: E0910 00:17:43.309951 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:17:43.310798 containerd[1448]: time="2025-09-10T00:17:43.310650349Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 10 00:17:43.326602 kubelet[2491]: W0910 00:17:43.309490 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:43.326602 kubelet[2491]: E0910 00:17:43.326607 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:17:43.326959 kubelet[2491]: E0910 00:17:43.326925 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:43.326959 kubelet[2491]: W0910 00:17:43.326945 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:43.326959 kubelet[2491]: E0910 00:17:43.326958 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:17:43.327169 kubelet[2491]: E0910 00:17:43.327140 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:43.327169 kubelet[2491]: W0910 00:17:43.327156 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:43.327169 kubelet[2491]: E0910 00:17:43.327165 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:17:43.327490 kubelet[2491]: E0910 00:17:43.327394 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:43.327490 kubelet[2491]: W0910 00:17:43.327410 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:43.327846 kubelet[2491]: E0910 00:17:43.327689 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:17:43.327971 kubelet[2491]: E0910 00:17:43.327948 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:43.327971 kubelet[2491]: W0910 00:17:43.327964 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:43.328037 kubelet[2491]: E0910 00:17:43.327974 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:17:43.328164 kubelet[2491]: E0910 00:17:43.328139 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:43.328164 kubelet[2491]: W0910 00:17:43.328152 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:43.328164 kubelet[2491]: E0910 00:17:43.328161 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:17:43.328684 kubelet[2491]: E0910 00:17:43.328342 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:43.328684 kubelet[2491]: W0910 00:17:43.328358 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:43.328684 kubelet[2491]: E0910 00:17:43.328367 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:17:43.328684 kubelet[2491]: E0910 00:17:43.328551 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:43.328684 kubelet[2491]: W0910 00:17:43.328561 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:43.328684 kubelet[2491]: E0910 00:17:43.328570 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:17:43.329160 kubelet[2491]: E0910 00:17:43.328878 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:43.329160 kubelet[2491]: W0910 00:17:43.328894 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:43.329160 kubelet[2491]: E0910 00:17:43.328905 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:17:43.329915 kubelet[2491]: E0910 00:17:43.329886 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:43.329915 kubelet[2491]: W0910 00:17:43.329907 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:43.329915 kubelet[2491]: E0910 00:17:43.329919 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:17:43.330240 kubelet[2491]: E0910 00:17:43.330150 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:43.330240 kubelet[2491]: W0910 00:17:43.330163 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:43.330240 kubelet[2491]: E0910 00:17:43.330174 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:17:43.330487 kubelet[2491]: E0910 00:17:43.330452 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:43.330487 kubelet[2491]: W0910 00:17:43.330466 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:43.330487 kubelet[2491]: E0910 00:17:43.330475 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:17:43.330741 kubelet[2491]: E0910 00:17:43.330725 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:43.330741 kubelet[2491]: W0910 00:17:43.330735 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:43.330741 kubelet[2491]: E0910 00:17:43.330743 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:17:43.330946 kubelet[2491]: E0910 00:17:43.330915 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:43.330946 kubelet[2491]: W0910 00:17:43.330923 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:43.330946 kubelet[2491]: E0910 00:17:43.330930 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:17:43.331145 kubelet[2491]: E0910 00:17:43.331128 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:43.331145 kubelet[2491]: W0910 00:17:43.331140 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:43.331231 kubelet[2491]: E0910 00:17:43.331148 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:17:43.331383 kubelet[2491]: E0910 00:17:43.331368 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:43.331383 kubelet[2491]: W0910 00:17:43.331380 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:43.331434 kubelet[2491]: E0910 00:17:43.331389 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:17:43.331606 kubelet[2491]: E0910 00:17:43.331593 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:43.331606 kubelet[2491]: W0910 00:17:43.331603 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:43.331667 kubelet[2491]: E0910 00:17:43.331610 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:17:43.331829 kubelet[2491]: E0910 00:17:43.331816 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:43.331829 kubelet[2491]: W0910 00:17:43.331828 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:43.331884 kubelet[2491]: E0910 00:17:43.331835 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:17:43.331992 kubelet[2491]: E0910 00:17:43.331981 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:43.332019 kubelet[2491]: W0910 00:17:43.331991 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:43.332019 kubelet[2491]: E0910 00:17:43.331999 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:17:43.334006 kubelet[2491]: E0910 00:17:43.333983 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:43.334006 kubelet[2491]: W0910 00:17:43.334000 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:43.334006 kubelet[2491]: E0910 00:17:43.334011 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:17:43.334118 kubelet[2491]: I0910 00:17:43.334037 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cc428a56-c099-4362-852b-dab9e5d9f7b7-kubelet-dir\") pod \"csi-node-driver-65xbg\" (UID: \"cc428a56-c099-4362-852b-dab9e5d9f7b7\") " pod="calico-system/csi-node-driver-65xbg" Sep 10 00:17:43.334305 kubelet[2491]: E0910 00:17:43.334286 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:43.334305 kubelet[2491]: W0910 00:17:43.334300 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:43.334305 kubelet[2491]: E0910 00:17:43.334308 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:17:43.334384 kubelet[2491]: I0910 00:17:43.334325 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/cc428a56-c099-4362-852b-dab9e5d9f7b7-socket-dir\") pod \"csi-node-driver-65xbg\" (UID: \"cc428a56-c099-4362-852b-dab9e5d9f7b7\") " pod="calico-system/csi-node-driver-65xbg" Sep 10 00:17:43.334524 kubelet[2491]: E0910 00:17:43.334510 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:43.334524 kubelet[2491]: W0910 00:17:43.334522 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:43.334573 kubelet[2491]: E0910 00:17:43.334530 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:17:43.334573 kubelet[2491]: I0910 00:17:43.334547 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/cc428a56-c099-4362-852b-dab9e5d9f7b7-varrun\") pod \"csi-node-driver-65xbg\" (UID: \"cc428a56-c099-4362-852b-dab9e5d9f7b7\") " pod="calico-system/csi-node-driver-65xbg" Sep 10 00:17:43.334814 kubelet[2491]: E0910 00:17:43.334794 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:43.334844 kubelet[2491]: W0910 00:17:43.334814 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:43.334844 kubelet[2491]: E0910 00:17:43.334829 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:17:43.334985 kubelet[2491]: E0910 00:17:43.334971 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:43.334985 kubelet[2491]: W0910 00:17:43.334981 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:43.335055 kubelet[2491]: E0910 00:17:43.334989 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:17:43.335457 kubelet[2491]: E0910 00:17:43.335149 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:43.335457 kubelet[2491]: W0910 00:17:43.335160 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:43.335457 kubelet[2491]: E0910 00:17:43.335174 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:17:43.335457 kubelet[2491]: E0910 00:17:43.335323 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:43.335457 kubelet[2491]: W0910 00:17:43.335331 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:43.335457 kubelet[2491]: E0910 00:17:43.335339 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:17:43.335728 kubelet[2491]: E0910 00:17:43.335710 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:43.335728 kubelet[2491]: W0910 00:17:43.335725 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:43.335806 kubelet[2491]: E0910 00:17:43.335737 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:17:43.335848 kubelet[2491]: I0910 00:17:43.335831 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/cc428a56-c099-4362-852b-dab9e5d9f7b7-registration-dir\") pod \"csi-node-driver-65xbg\" (UID: \"cc428a56-c099-4362-852b-dab9e5d9f7b7\") " pod="calico-system/csi-node-driver-65xbg" Sep 10 00:17:43.336061 kubelet[2491]: E0910 00:17:43.336043 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:43.336061 kubelet[2491]: W0910 00:17:43.336058 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:43.336117 kubelet[2491]: E0910 00:17:43.336069 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:17:43.336276 kubelet[2491]: E0910 00:17:43.336261 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:43.336276 kubelet[2491]: W0910 00:17:43.336273 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:43.336322 kubelet[2491]: E0910 00:17:43.336281 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:17:43.336482 kubelet[2491]: E0910 00:17:43.336468 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:43.336482 kubelet[2491]: W0910 00:17:43.336481 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:43.336523 kubelet[2491]: E0910 00:17:43.336488 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:17:43.336523 kubelet[2491]: I0910 00:17:43.336507 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wz77c\" (UniqueName: \"kubernetes.io/projected/cc428a56-c099-4362-852b-dab9e5d9f7b7-kube-api-access-wz77c\") pod \"csi-node-driver-65xbg\" (UID: \"cc428a56-c099-4362-852b-dab9e5d9f7b7\") " pod="calico-system/csi-node-driver-65xbg" Sep 10 00:17:43.336704 kubelet[2491]: E0910 00:17:43.336692 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:43.336727 kubelet[2491]: W0910 00:17:43.336704 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:43.336727 kubelet[2491]: E0910 00:17:43.336712 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:17:43.336914 kubelet[2491]: E0910 00:17:43.336903 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:43.336939 kubelet[2491]: W0910 00:17:43.336914 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:43.336939 kubelet[2491]: E0910 00:17:43.336921 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:17:43.337125 kubelet[2491]: E0910 00:17:43.337113 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:43.337125 kubelet[2491]: W0910 00:17:43.337123 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:43.337178 kubelet[2491]: E0910 00:17:43.337131 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:17:43.337275 kubelet[2491]: E0910 00:17:43.337263 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:43.337275 kubelet[2491]: W0910 00:17:43.337272 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:43.337337 kubelet[2491]: E0910 00:17:43.337279 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:17:43.357879 containerd[1448]: time="2025-09-10T00:17:43.357836513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-k999g,Uid:8f580f50-424d-411a-9be0-b98a88688c47,Namespace:calico-system,Attempt:0,}" Sep 10 00:17:43.376989 containerd[1448]: time="2025-09-10T00:17:43.376892530Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:17:43.377807 containerd[1448]: time="2025-09-10T00:17:43.377705451Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:17:43.377807 containerd[1448]: time="2025-09-10T00:17:43.377732971Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:17:43.378137 containerd[1448]: time="2025-09-10T00:17:43.378034331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:17:43.393902 systemd[1]: Started cri-containerd-5640243e391c5312d8ce52874b90b7ab9cc6943e9a6694fb5bf8fd14c2ea8aa6.scope - libcontainer container 5640243e391c5312d8ce52874b90b7ab9cc6943e9a6694fb5bf8fd14c2ea8aa6. Sep 10 00:17:43.411609 containerd[1448]: time="2025-09-10T00:17:43.411570362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-k999g,Uid:8f580f50-424d-411a-9be0-b98a88688c47,Namespace:calico-system,Attempt:0,} returns sandbox id \"5640243e391c5312d8ce52874b90b7ab9cc6943e9a6694fb5bf8fd14c2ea8aa6\"" Sep 10 00:17:43.438606 kubelet[2491]: E0910 00:17:43.438572 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:43.438606 kubelet[2491]: W0910 00:17:43.438597 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:43.438993 kubelet[2491]: E0910 00:17:43.438620 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:17:43.439827 kubelet[2491]: E0910 00:17:43.439710 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:43.439827 kubelet[2491]: W0910 00:17:43.439738 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:43.439827 kubelet[2491]: E0910 00:17:43.439763 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:17:43.440195 kubelet[2491]: E0910 00:17:43.440047 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:43.440195 kubelet[2491]: W0910 00:17:43.440070 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:43.440195 kubelet[2491]: E0910 00:17:43.440085 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:17:43.440509 kubelet[2491]: E0910 00:17:43.440489 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:43.440509 kubelet[2491]: W0910 00:17:43.440507 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:43.440560 kubelet[2491]: E0910 00:17:43.440520 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:17:43.441864 kubelet[2491]: E0910 00:17:43.441838 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:43.441864 kubelet[2491]: W0910 00:17:43.441860 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:43.441945 kubelet[2491]: E0910 00:17:43.441875 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:17:43.442443 kubelet[2491]: E0910 00:17:43.442425 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:43.442443 kubelet[2491]: W0910 00:17:43.442440 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:43.442514 kubelet[2491]: E0910 00:17:43.442453 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:17:43.442791 kubelet[2491]: E0910 00:17:43.442772 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:43.442791 kubelet[2491]: W0910 00:17:43.442790 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:43.442858 kubelet[2491]: E0910 00:17:43.442803 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:17:43.443104 kubelet[2491]: E0910 00:17:43.443089 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:43.443104 kubelet[2491]: W0910 00:17:43.443102 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:43.443171 kubelet[2491]: E0910 00:17:43.443143 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:17:43.445398 kubelet[2491]: E0910 00:17:43.445367 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:43.445398 kubelet[2491]: W0910 00:17:43.445387 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:43.445398 kubelet[2491]: E0910 00:17:43.445402 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:17:43.445625 kubelet[2491]: E0910 00:17:43.445599 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:43.445625 kubelet[2491]: W0910 00:17:43.445613 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:43.445625 kubelet[2491]: E0910 00:17:43.445621 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:17:43.446200 kubelet[2491]: E0910 00:17:43.446003 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:43.446200 kubelet[2491]: W0910 00:17:43.446018 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:43.446200 kubelet[2491]: E0910 00:17:43.446030 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:17:43.446672 kubelet[2491]: E0910 00:17:43.446314 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:43.446672 kubelet[2491]: W0910 00:17:43.446358 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:43.446672 kubelet[2491]: E0910 00:17:43.446371 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:17:43.446857 kubelet[2491]: E0910 00:17:43.446834 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:43.446857 kubelet[2491]: W0910 00:17:43.446850 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:43.447172 kubelet[2491]: E0910 00:17:43.446864 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:17:43.447172 kubelet[2491]: E0910 00:17:43.447042 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:43.447172 kubelet[2491]: W0910 00:17:43.447049 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:43.447172 kubelet[2491]: E0910 00:17:43.447057 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:17:43.447288 kubelet[2491]: E0910 00:17:43.447266 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:43.447288 kubelet[2491]: W0910 00:17:43.447274 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:43.447288 kubelet[2491]: E0910 00:17:43.447283 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:17:43.448090 kubelet[2491]: E0910 00:17:43.448038 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:43.448090 kubelet[2491]: W0910 00:17:43.448057 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:43.448090 kubelet[2491]: E0910 00:17:43.448072 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:17:43.449106 kubelet[2491]: E0910 00:17:43.449082 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:43.449106 kubelet[2491]: W0910 00:17:43.449097 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:43.449106 kubelet[2491]: E0910 00:17:43.449108 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:17:43.449781 kubelet[2491]: E0910 00:17:43.449761 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:43.449781 kubelet[2491]: W0910 00:17:43.449777 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:43.449854 kubelet[2491]: E0910 00:17:43.449790 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:17:43.450362 kubelet[2491]: E0910 00:17:43.450344 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:43.450390 kubelet[2491]: W0910 00:17:43.450362 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:43.450390 kubelet[2491]: E0910 00:17:43.450376 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:17:43.451340 kubelet[2491]: E0910 00:17:43.451324 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:43.451370 kubelet[2491]: W0910 00:17:43.451340 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:43.451370 kubelet[2491]: E0910 00:17:43.451353 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:17:43.452326 kubelet[2491]: E0910 00:17:43.452302 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:43.452326 kubelet[2491]: W0910 00:17:43.452319 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:43.452465 kubelet[2491]: E0910 00:17:43.452332 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:17:43.453055 kubelet[2491]: E0910 00:17:43.453037 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:43.453055 kubelet[2491]: W0910 00:17:43.453052 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:43.453148 kubelet[2491]: E0910 00:17:43.453065 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:17:43.453362 kubelet[2491]: E0910 00:17:43.453349 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:43.453362 kubelet[2491]: W0910 00:17:43.453362 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:43.453410 kubelet[2491]: E0910 00:17:43.453372 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:17:43.453594 kubelet[2491]: E0910 00:17:43.453582 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:43.453594 kubelet[2491]: W0910 00:17:43.453593 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:43.453650 kubelet[2491]: E0910 00:17:43.453601 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:17:43.453919 kubelet[2491]: E0910 00:17:43.453892 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:43.453919 kubelet[2491]: W0910 00:17:43.453904 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:43.453974 kubelet[2491]: E0910 00:17:43.453931 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:17:43.466152 kubelet[2491]: E0910 00:17:43.465985 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:43.466152 kubelet[2491]: W0910 00:17:43.466017 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:43.466152 kubelet[2491]: E0910 00:17:43.466036 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:17:44.314605 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2007878187.mount: Deactivated successfully. 
Sep 10 00:17:44.873037 kubelet[2491]: E0910 00:17:44.872986 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-65xbg" podUID="cc428a56-c099-4362-852b-dab9e5d9f7b7"
Sep 10 00:17:44.965677 containerd[1448]: time="2025-09-10T00:17:44.965629740Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:17:44.966534 containerd[1448]: time="2025-09-10T00:17:44.966385700Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33105775"
Sep 10 00:17:44.967278 containerd[1448]: time="2025-09-10T00:17:44.967243541Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:17:44.976327 containerd[1448]: time="2025-09-10T00:17:44.976276949Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:17:44.976765 containerd[1448]: time="2025-09-10T00:17:44.976724669Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 1.66604204s"
Sep 10 00:17:44.976796 containerd[1448]: time="2025-09-10T00:17:44.976770429Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\""
Sep 10 00:17:44.977877 containerd[1448]: time="2025-09-10T00:17:44.977847350Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\""
Sep 10 00:17:44.990961 containerd[1448]: time="2025-09-10T00:17:44.990931081Z" level=info msg="CreateContainer within sandbox \"d6cbc4581ca550db50dc33daf648b54ebf831154f39d134b3401b51c555ae0b1\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Sep 10 00:17:45.000251 containerd[1448]: time="2025-09-10T00:17:45.000140049Z" level=info msg="CreateContainer within sandbox \"d6cbc4581ca550db50dc33daf648b54ebf831154f39d134b3401b51c555ae0b1\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a014b3cbb9550ce3f71e6a4247407b9682e918aff175d26ae9cc53c2baa764ee\""
Sep 10 00:17:45.000833 containerd[1448]: time="2025-09-10T00:17:45.000657330Z" level=info msg="StartContainer for \"a014b3cbb9550ce3f71e6a4247407b9682e918aff175d26ae9cc53c2baa764ee\""
Sep 10 00:17:45.027942 systemd[1]: Started cri-containerd-a014b3cbb9550ce3f71e6a4247407b9682e918aff175d26ae9cc53c2baa764ee.scope - libcontainer container a014b3cbb9550ce3f71e6a4247407b9682e918aff175d26ae9cc53c2baa764ee.
Sep 10 00:17:45.062372 containerd[1448]: time="2025-09-10T00:17:45.062261340Z" level=info msg="StartContainer for \"a014b3cbb9550ce3f71e6a4247407b9682e918aff175d26ae9cc53c2baa764ee\" returns successfully" Sep 10 00:17:45.950767 kubelet[2491]: E0910 00:17:45.950496 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:17:45.960711 kubelet[2491]: I0910 00:17:45.960574 2491 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7d9f8f5dd4-vzncs" podStartSLOduration=2.293203587 podStartE2EDuration="3.960523948s" podCreationTimestamp="2025-09-10 00:17:42 +0000 UTC" firstStartedPulling="2025-09-10 00:17:43.310403669 +0000 UTC m=+23.518158753" lastFinishedPulling="2025-09-10 00:17:44.97772403 +0000 UTC m=+25.185479114" observedRunningTime="2025-09-10 00:17:45.959551347 +0000 UTC m=+26.167306471" watchObservedRunningTime="2025-09-10 00:17:45.960523948 +0000 UTC m=+26.168279032" Sep 10 00:17:46.050892 kubelet[2491]: E0910 00:17:46.050729 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:46.050892 kubelet[2491]: W0910 00:17:46.050777 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:46.050892 kubelet[2491]: E0910 00:17:46.050798 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:17:46.053574 kubelet[2491]: E0910 00:17:46.053474 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:46.053574 kubelet[2491]: W0910 00:17:46.053494 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:46.053574 kubelet[2491]: E0910 00:17:46.053509 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:17:46.054617 kubelet[2491]: E0910 00:17:46.054074 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:46.054617 kubelet[2491]: W0910 00:17:46.054089 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:46.054617 kubelet[2491]: E0910 00:17:46.054101 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:17:46.055224 kubelet[2491]: E0910 00:17:46.054871 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:46.055224 kubelet[2491]: W0910 00:17:46.054887 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:46.055224 kubelet[2491]: E0910 00:17:46.054899 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:17:46.055422 kubelet[2491]: E0910 00:17:46.055406 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:46.055637 kubelet[2491]: W0910 00:17:46.055463 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:46.055837 kubelet[2491]: E0910 00:17:46.055718 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:17:46.055958 kubelet[2491]: E0910 00:17:46.055945 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:46.056009 kubelet[2491]: W0910 00:17:46.055998 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:46.056061 kubelet[2491]: E0910 00:17:46.056051 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:17:46.056344 kubelet[2491]: E0910 00:17:46.056244 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:46.056344 kubelet[2491]: W0910 00:17:46.056257 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:46.056344 kubelet[2491]: E0910 00:17:46.056266 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:17:46.056491 kubelet[2491]: E0910 00:17:46.056478 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:46.056542 kubelet[2491]: W0910 00:17:46.056532 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:46.056592 kubelet[2491]: E0910 00:17:46.056582 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:17:46.056813 kubelet[2491]: E0910 00:17:46.056799 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:46.056964 kubelet[2491]: W0910 00:17:46.056874 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:46.056964 kubelet[2491]: E0910 00:17:46.056891 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:17:46.057085 kubelet[2491]: E0910 00:17:46.057074 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:46.057140 kubelet[2491]: W0910 00:17:46.057128 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:46.057187 kubelet[2491]: E0910 00:17:46.057178 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:17:46.057477 kubelet[2491]: E0910 00:17:46.057400 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:46.057477 kubelet[2491]: W0910 00:17:46.057411 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:46.057477 kubelet[2491]: E0910 00:17:46.057421 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:17:46.057829 kubelet[2491]: E0910 00:17:46.057699 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:46.057829 kubelet[2491]: W0910 00:17:46.057712 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:46.057829 kubelet[2491]: E0910 00:17:46.057722 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:17:46.057965 kubelet[2491]: E0910 00:17:46.057953 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:46.058016 kubelet[2491]: W0910 00:17:46.058005 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:46.058076 kubelet[2491]: E0910 00:17:46.058066 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:17:46.058287 kubelet[2491]: E0910 00:17:46.058274 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:46.058355 kubelet[2491]: W0910 00:17:46.058342 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:46.058476 kubelet[2491]: E0910 00:17:46.058393 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:17:46.058564 kubelet[2491]: E0910 00:17:46.058552 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:46.058619 kubelet[2491]: W0910 00:17:46.058608 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:46.058665 kubelet[2491]: E0910 00:17:46.058656 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:17:46.068981 kubelet[2491]: E0910 00:17:46.068960 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:46.068981 kubelet[2491]: W0910 00:17:46.068977 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:46.069092 kubelet[2491]: E0910 00:17:46.068991 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:17:46.069488 kubelet[2491]: E0910 00:17:46.069416 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:46.069488 kubelet[2491]: W0910 00:17:46.069432 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:46.069488 kubelet[2491]: E0910 00:17:46.069441 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:17:46.069704 kubelet[2491]: E0910 00:17:46.069690 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:46.069704 kubelet[2491]: W0910 00:17:46.069700 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:46.069767 kubelet[2491]: E0910 00:17:46.069708 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:17:46.069945 kubelet[2491]: E0910 00:17:46.069930 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:46.069945 kubelet[2491]: W0910 00:17:46.069943 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:46.070011 kubelet[2491]: E0910 00:17:46.069952 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:17:46.070152 kubelet[2491]: E0910 00:17:46.070117 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:46.070152 kubelet[2491]: W0910 00:17:46.070127 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:46.070152 kubelet[2491]: E0910 00:17:46.070135 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:17:46.070294 kubelet[2491]: E0910 00:17:46.070281 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:46.070294 kubelet[2491]: W0910 00:17:46.070291 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:46.070341 kubelet[2491]: E0910 00:17:46.070299 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:17:46.070477 kubelet[2491]: E0910 00:17:46.070464 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:46.070477 kubelet[2491]: W0910 00:17:46.070476 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:46.070533 kubelet[2491]: E0910 00:17:46.070488 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:17:46.070714 kubelet[2491]: E0910 00:17:46.070701 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:46.070714 kubelet[2491]: W0910 00:17:46.070713 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:46.070783 kubelet[2491]: E0910 00:17:46.070721 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:17:46.070992 kubelet[2491]: E0910 00:17:46.070978 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:46.070992 kubelet[2491]: W0910 00:17:46.070990 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:46.071065 kubelet[2491]: E0910 00:17:46.070999 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:17:46.071438 kubelet[2491]: E0910 00:17:46.071315 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:46.071438 kubelet[2491]: W0910 00:17:46.071330 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:46.071438 kubelet[2491]: E0910 00:17:46.071342 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:17:46.071581 kubelet[2491]: E0910 00:17:46.071569 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:46.071744 kubelet[2491]: W0910 00:17:46.071626 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:46.071744 kubelet[2491]: E0910 00:17:46.071640 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:17:46.071991 kubelet[2491]: E0910 00:17:46.071977 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:46.072060 kubelet[2491]: W0910 00:17:46.072048 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:46.072146 kubelet[2491]: E0910 00:17:46.072132 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:17:46.072630 kubelet[2491]: E0910 00:17:46.072607 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:46.072630 kubelet[2491]: W0910 00:17:46.072626 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:46.073504 kubelet[2491]: E0910 00:17:46.073395 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:17:46.073932 kubelet[2491]: E0910 00:17:46.073914 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:46.073965 kubelet[2491]: W0910 00:17:46.073932 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:46.073965 kubelet[2491]: E0910 00:17:46.073946 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:17:46.076890 kubelet[2491]: E0910 00:17:46.076869 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:46.076890 kubelet[2491]: W0910 00:17:46.076887 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:46.076962 kubelet[2491]: E0910 00:17:46.076906 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:17:46.077332 kubelet[2491]: E0910 00:17:46.077314 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:46.077382 kubelet[2491]: W0910 00:17:46.077333 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:46.077382 kubelet[2491]: E0910 00:17:46.077345 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:17:46.078847 kubelet[2491]: E0910 00:17:46.078827 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:46.078847 kubelet[2491]: W0910 00:17:46.078844 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:46.078909 kubelet[2491]: E0910 00:17:46.078857 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:17:46.079400 kubelet[2491]: E0910 00:17:46.079382 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:17:46.079400 kubelet[2491]: W0910 00:17:46.079398 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:17:46.079459 kubelet[2491]: E0910 00:17:46.079436 2491 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:17:46.113103 containerd[1448]: time="2025-09-10T00:17:46.113060786Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:17:46.113834 containerd[1448]: time="2025-09-10T00:17:46.113809746Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4266814" Sep 10 00:17:46.115090 containerd[1448]: time="2025-09-10T00:17:46.115056987Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:17:46.117132 containerd[1448]: time="2025-09-10T00:17:46.117050989Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:17:46.118488 containerd[1448]: time="2025-09-10T00:17:46.118222070Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 1.14033632s" Sep 10 00:17:46.118488 containerd[1448]: time="2025-09-10T00:17:46.118282270Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\"" Sep 10 00:17:46.122605 containerd[1448]: time="2025-09-10T00:17:46.122474873Z" level=info msg="CreateContainer within sandbox \"5640243e391c5312d8ce52874b90b7ab9cc6943e9a6694fb5bf8fd14c2ea8aa6\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 10 00:17:46.139192 containerd[1448]: time="2025-09-10T00:17:46.139154646Z" level=info msg="CreateContainer within sandbox \"5640243e391c5312d8ce52874b90b7ab9cc6943e9a6694fb5bf8fd14c2ea8aa6\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"09a1bb3c1481ae21e57c1964f098145abd2d1a894da35054af1b400361a83207\"" Sep 10 00:17:46.139560 containerd[1448]: time="2025-09-10T00:17:46.139537446Z" level=info msg="StartContainer for \"09a1bb3c1481ae21e57c1964f098145abd2d1a894da35054af1b400361a83207\"" Sep 10 00:17:46.169897 systemd[1]: Started cri-containerd-09a1bb3c1481ae21e57c1964f098145abd2d1a894da35054af1b400361a83207.scope - libcontainer container 09a1bb3c1481ae21e57c1964f098145abd2d1a894da35054af1b400361a83207. Sep 10 00:17:46.197075 containerd[1448]: time="2025-09-10T00:17:46.196974450Z" level=info msg="StartContainer for \"09a1bb3c1481ae21e57c1964f098145abd2d1a894da35054af1b400361a83207\" returns successfully" Sep 10 00:17:46.209890 systemd[1]: cri-containerd-09a1bb3c1481ae21e57c1964f098145abd2d1a894da35054af1b400361a83207.scope: Deactivated successfully. Sep 10 00:17:46.233290 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-09a1bb3c1481ae21e57c1964f098145abd2d1a894da35054af1b400361a83207-rootfs.mount: Deactivated successfully. 
Sep 10 00:17:46.342304 containerd[1448]: time="2025-09-10T00:17:46.342148800Z" level=info msg="shim disconnected" id=09a1bb3c1481ae21e57c1964f098145abd2d1a894da35054af1b400361a83207 namespace=k8s.io Sep 10 00:17:46.342304 containerd[1448]: time="2025-09-10T00:17:46.342207360Z" level=warning msg="cleaning up after shim disconnected" id=09a1bb3c1481ae21e57c1964f098145abd2d1a894da35054af1b400361a83207 namespace=k8s.io Sep 10 00:17:46.342304 containerd[1448]: time="2025-09-10T00:17:46.342227000Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 10 00:17:46.872761 kubelet[2491]: E0910 00:17:46.872700 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-65xbg" podUID="cc428a56-c099-4362-852b-dab9e5d9f7b7" Sep 10 00:17:46.953416 kubelet[2491]: I0910 00:17:46.953371 2491 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 10 00:17:46.953787 kubelet[2491]: E0910 00:17:46.953671 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:17:46.955786 containerd[1448]: time="2025-09-10T00:17:46.955724306Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 10 00:17:48.872805 kubelet[2491]: E0910 00:17:48.872731 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-65xbg" podUID="cc428a56-c099-4362-852b-dab9e5d9f7b7" Sep 10 00:17:49.672535 containerd[1448]: time="2025-09-10T00:17:49.672490381Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:17:49.673376 containerd[1448]: time="2025-09-10T00:17:49.673014222Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477" Sep 10 00:17:49.674158 containerd[1448]: time="2025-09-10T00:17:49.673876782Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:17:49.676705 containerd[1448]: time="2025-09-10T00:17:49.676657544Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:17:49.677449 containerd[1448]: time="2025-09-10T00:17:49.677345184Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 2.721556638s" Sep 10 00:17:49.677449 containerd[1448]: time="2025-09-10T00:17:49.677372544Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\"" Sep 10 00:17:49.682324 containerd[1448]: time="2025-09-10T00:17:49.682284267Z" level=info 
msg="CreateContainer within sandbox \"5640243e391c5312d8ce52874b90b7ab9cc6943e9a6694fb5bf8fd14c2ea8aa6\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 10 00:17:49.692347 containerd[1448]: time="2025-09-10T00:17:49.692307754Z" level=info msg="CreateContainer within sandbox \"5640243e391c5312d8ce52874b90b7ab9cc6943e9a6694fb5bf8fd14c2ea8aa6\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f2a5bbbf6fc91d54852f6a1bbf7b8b399cc9b61e65caaaf0f73475dd577bb2ea\"" Sep 10 00:17:49.692891 containerd[1448]: time="2025-09-10T00:17:49.692863354Z" level=info msg="StartContainer for \"f2a5bbbf6fc91d54852f6a1bbf7b8b399cc9b61e65caaaf0f73475dd577bb2ea\"" Sep 10 00:17:49.716908 systemd[1]: Started cri-containerd-f2a5bbbf6fc91d54852f6a1bbf7b8b399cc9b61e65caaaf0f73475dd577bb2ea.scope - libcontainer container f2a5bbbf6fc91d54852f6a1bbf7b8b399cc9b61e65caaaf0f73475dd577bb2ea. Sep 10 00:17:49.743181 containerd[1448]: time="2025-09-10T00:17:49.743142905Z" level=info msg="StartContainer for \"f2a5bbbf6fc91d54852f6a1bbf7b8b399cc9b61e65caaaf0f73475dd577bb2ea\" returns successfully" Sep 10 00:17:50.300503 systemd[1]: cri-containerd-f2a5bbbf6fc91d54852f6a1bbf7b8b399cc9b61e65caaaf0f73475dd577bb2ea.scope: Deactivated successfully. Sep 10 00:17:50.327229 containerd[1448]: time="2025-09-10T00:17:50.327166498Z" level=info msg="shim disconnected" id=f2a5bbbf6fc91d54852f6a1bbf7b8b399cc9b61e65caaaf0f73475dd577bb2ea namespace=k8s.io Sep 10 00:17:50.327229 containerd[1448]: time="2025-09-10T00:17:50.327225098Z" level=warning msg="cleaning up after shim disconnected" id=f2a5bbbf6fc91d54852f6a1bbf7b8b399cc9b61e65caaaf0f73475dd577bb2ea namespace=k8s.io Sep 10 00:17:50.327229 containerd[1448]: time="2025-09-10T00:17:50.327235738Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 10 00:17:50.337714 kubelet[2491]: I0910 00:17:50.337178 2491 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 10 00:17:50.408647 systemd[1]: Created slice kubepods-burstable-podc5fb1d1a_aa75_4564_aca9_9712fec491bb.slice - libcontainer container kubepods-burstable-podc5fb1d1a_aa75_4564_aca9_9712fec491bb.slice. Sep 10 00:17:50.418478 systemd[1]: Created slice kubepods-besteffort-pod9eadb910_646d_43a4_b7b4_6b854d565ea6.slice - libcontainer container kubepods-besteffort-pod9eadb910_646d_43a4_b7b4_6b854d565ea6.slice. Sep 10 00:17:50.427558 systemd[1]: Created slice kubepods-burstable-pod2f22f2e0_6fae_4277_8ce4_e71e5b1601af.slice - libcontainer container kubepods-burstable-pod2f22f2e0_6fae_4277_8ce4_e71e5b1601af.slice. Sep 10 00:17:50.436860 systemd[1]: Created slice kubepods-besteffort-pod280ac556_7fbc_4068_a5ff_d67586b6c1c2.slice - libcontainer container kubepods-besteffort-pod280ac556_7fbc_4068_a5ff_d67586b6c1c2.slice. Sep 10 00:17:50.441302 systemd[1]: Created slice kubepods-besteffort-pod22f16cc2_19f3_4586_9c67_213437e8718f.slice - libcontainer container kubepods-besteffort-pod22f16cc2_19f3_4586_9c67_213437e8718f.slice. Sep 10 00:17:50.447843 systemd[1]: Created slice kubepods-besteffort-pode6f75a12_f91b_4f7f_9f91_ba290e49ea84.slice - libcontainer container kubepods-besteffort-pode6f75a12_f91b_4f7f_9f91_ba290e49ea84.slice. Sep 10 00:17:50.452468 systemd[1]: Created slice kubepods-besteffort-pod48ece463_fbe3_4a32_8fd2_523723a890ae.slice - libcontainer container kubepods-besteffort-pod48ece463_fbe3_4a32_8fd2_523723a890ae.slice. 
Sep 10 00:17:50.503785 kubelet[2491]: I0910 00:17:50.503745 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48ece463-fbe3-4a32-8fd2-523723a890ae-config\") pod \"goldmane-54d579b49d-njj6z\" (UID: \"48ece463-fbe3-4a32-8fd2-523723a890ae\") " pod="calico-system/goldmane-54d579b49d-njj6z" Sep 10 00:17:50.503975 kubelet[2491]: I0910 00:17:50.503914 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22f16cc2-19f3-4586-9c67-213437e8718f-tigera-ca-bundle\") pod \"calico-kube-controllers-55c7f99b6f-f6wcn\" (UID: \"22f16cc2-19f3-4586-9c67-213437e8718f\") " pod="calico-system/calico-kube-controllers-55c7f99b6f-f6wcn" Sep 10 00:17:50.503975 kubelet[2491]: I0910 00:17:50.503941 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6m4hq\" (UniqueName: \"kubernetes.io/projected/22f16cc2-19f3-4586-9c67-213437e8718f-kube-api-access-6m4hq\") pod \"calico-kube-controllers-55c7f99b6f-f6wcn\" (UID: \"22f16cc2-19f3-4586-9c67-213437e8718f\") " pod="calico-system/calico-kube-controllers-55c7f99b6f-f6wcn" Sep 10 00:17:50.504068 kubelet[2491]: I0910 00:17:50.504006 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/280ac556-7fbc-4068-a5ff-d67586b6c1c2-whisker-ca-bundle\") pod \"whisker-79d7db7c55-h6dqb\" (UID: \"280ac556-7fbc-4068-a5ff-d67586b6c1c2\") " pod="calico-system/whisker-79d7db7c55-h6dqb" Sep 10 00:17:50.504068 kubelet[2491]: I0910 00:17:50.504051 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcq82\" (UniqueName: \"kubernetes.io/projected/280ac556-7fbc-4068-a5ff-d67586b6c1c2-kube-api-access-mcq82\") pod \"whisker-79d7db7c55-h6dqb\" (UID: \"280ac556-7fbc-4068-a5ff-d67586b6c1c2\") " pod="calico-system/whisker-79d7db7c55-h6dqb" Sep 10 00:17:50.504068 kubelet[2491]: I0910 00:17:50.504066 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ghpk\" (UniqueName: \"kubernetes.io/projected/48ece463-fbe3-4a32-8fd2-523723a890ae-kube-api-access-2ghpk\") pod \"goldmane-54d579b49d-njj6z\" (UID: \"48ece463-fbe3-4a32-8fd2-523723a890ae\") " pod="calico-system/goldmane-54d579b49d-njj6z" Sep 10 00:17:50.504214 kubelet[2491]: I0910 00:17:50.504108 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txf62\" (UniqueName: \"kubernetes.io/projected/9eadb910-646d-43a4-b7b4-6b854d565ea6-kube-api-access-txf62\") pod \"calico-apiserver-6cd6dcff69-qt9l6\" (UID: \"9eadb910-646d-43a4-b7b4-6b854d565ea6\") " pod="calico-apiserver/calico-apiserver-6cd6dcff69-qt9l6" Sep 10 00:17:50.504214 kubelet[2491]: I0910 00:17:50.504130 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c5fb1d1a-aa75-4564-aca9-9712fec491bb-config-volume\") pod \"coredns-674b8bbfcf-wv9tb\" (UID: \"c5fb1d1a-aa75-4564-aca9-9712fec491bb\") " pod="kube-system/coredns-674b8bbfcf-wv9tb" Sep 10 00:17:50.504214 kubelet[2491]: I0910 00:17:50.504147 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpmqr\" (UniqueName: 
\"kubernetes.io/projected/c5fb1d1a-aa75-4564-aca9-9712fec491bb-kube-api-access-mpmqr\") pod \"coredns-674b8bbfcf-wv9tb\" (UID: \"c5fb1d1a-aa75-4564-aca9-9712fec491bb\") " pod="kube-system/coredns-674b8bbfcf-wv9tb" Sep 10 00:17:50.504214 kubelet[2491]: I0910 00:17:50.504194 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/48ece463-fbe3-4a32-8fd2-523723a890ae-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-njj6z\" (UID: \"48ece463-fbe3-4a32-8fd2-523723a890ae\") " pod="calico-system/goldmane-54d579b49d-njj6z" Sep 10 00:17:50.504375 kubelet[2491]: I0910 00:17:50.504223 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/280ac556-7fbc-4068-a5ff-d67586b6c1c2-whisker-backend-key-pair\") pod \"whisker-79d7db7c55-h6dqb\" (UID: \"280ac556-7fbc-4068-a5ff-d67586b6c1c2\") " pod="calico-system/whisker-79d7db7c55-h6dqb" Sep 10 00:17:50.504375 kubelet[2491]: I0910 00:17:50.504261 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/48ece463-fbe3-4a32-8fd2-523723a890ae-goldmane-key-pair\") pod \"goldmane-54d579b49d-njj6z\" (UID: \"48ece463-fbe3-4a32-8fd2-523723a890ae\") " pod="calico-system/goldmane-54d579b49d-njj6z" Sep 10 00:17:50.504375 kubelet[2491]: I0910 00:17:50.504280 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9eadb910-646d-43a4-b7b4-6b854d565ea6-calico-apiserver-certs\") pod \"calico-apiserver-6cd6dcff69-qt9l6\" (UID: \"9eadb910-646d-43a4-b7b4-6b854d565ea6\") " pod="calico-apiserver/calico-apiserver-6cd6dcff69-qt9l6" Sep 10 00:17:50.504464 kubelet[2491]: I0910 00:17:50.504429 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2f22f2e0-6fae-4277-8ce4-e71e5b1601af-config-volume\") pod \"coredns-674b8bbfcf-qlcl5\" (UID: \"2f22f2e0-6fae-4277-8ce4-e71e5b1601af\") " pod="kube-system/coredns-674b8bbfcf-qlcl5" Sep 10 00:17:50.504614 kubelet[2491]: I0910 00:17:50.504450 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbxnk\" (UniqueName: \"kubernetes.io/projected/2f22f2e0-6fae-4277-8ce4-e71e5b1601af-kube-api-access-jbxnk\") pod \"coredns-674b8bbfcf-qlcl5\" (UID: \"2f22f2e0-6fae-4277-8ce4-e71e5b1601af\") " pod="kube-system/coredns-674b8bbfcf-qlcl5" Sep 10 00:17:50.504614 kubelet[2491]: I0910 00:17:50.504513 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e6f75a12-f91b-4f7f-9f91-ba290e49ea84-calico-apiserver-certs\") pod \"calico-apiserver-6cd6dcff69-pvd28\" (UID: \"e6f75a12-f91b-4f7f-9f91-ba290e49ea84\") " pod="calico-apiserver/calico-apiserver-6cd6dcff69-pvd28" Sep 10 00:17:50.504614 kubelet[2491]: I0910 00:17:50.504537 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vpxd\" (UniqueName: \"kubernetes.io/projected/e6f75a12-f91b-4f7f-9f91-ba290e49ea84-kube-api-access-2vpxd\") pod \"calico-apiserver-6cd6dcff69-pvd28\" (UID: \"e6f75a12-f91b-4f7f-9f91-ba290e49ea84\") " 
pod="calico-apiserver/calico-apiserver-6cd6dcff69-pvd28" Sep 10 00:17:50.698784 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f2a5bbbf6fc91d54852f6a1bbf7b8b399cc9b61e65caaaf0f73475dd577bb2ea-rootfs.mount: Deactivated successfully. Sep 10 00:17:50.715266 kubelet[2491]: E0910 00:17:50.715234 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:17:50.715810 containerd[1448]: time="2025-09-10T00:17:50.715774486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wv9tb,Uid:c5fb1d1a-aa75-4564-aca9-9712fec491bb,Namespace:kube-system,Attempt:0,}" Sep 10 00:17:50.722798 containerd[1448]: time="2025-09-10T00:17:50.722505210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cd6dcff69-qt9l6,Uid:9eadb910-646d-43a4-b7b4-6b854d565ea6,Namespace:calico-apiserver,Attempt:0,}" Sep 10 00:17:50.732820 kubelet[2491]: E0910 00:17:50.732510 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:17:50.734717 containerd[1448]: time="2025-09-10T00:17:50.734598577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qlcl5,Uid:2f22f2e0-6fae-4277-8ce4-e71e5b1601af,Namespace:kube-system,Attempt:0,}" Sep 10 00:17:50.756931 containerd[1448]: time="2025-09-10T00:17:50.756808430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cd6dcff69-pvd28,Uid:e6f75a12-f91b-4f7f-9f91-ba290e49ea84,Namespace:calico-apiserver,Attempt:0,}" Sep 10 00:17:50.757264 containerd[1448]: time="2025-09-10T00:17:50.757227991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79d7db7c55-h6dqb,Uid:280ac556-7fbc-4068-a5ff-d67586b6c1c2,Namespace:calico-system,Attempt:0,}" Sep 10 00:17:50.757834 containerd[1448]: time="2025-09-10T00:17:50.757671231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55c7f99b6f-f6wcn,Uid:22f16cc2-19f3-4586-9c67-213437e8718f,Namespace:calico-system,Attempt:0,}" Sep 10 00:17:50.759422 containerd[1448]: time="2025-09-10T00:17:50.758097151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-njj6z,Uid:48ece463-fbe3-4a32-8fd2-523723a890ae,Namespace:calico-system,Attempt:0,}" Sep 10 00:17:50.868068 containerd[1448]: time="2025-09-10T00:17:50.867493055Z" level=error msg="Failed to destroy network for sandbox \"237b2951e6d066a5ef349d91ec76cb766d252860227afe68b6554018af713707\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:17:50.868342 containerd[1448]: time="2025-09-10T00:17:50.868303056Z" level=error msg="encountered an error cleaning up failed sandbox \"237b2951e6d066a5ef349d91ec76cb766d252860227afe68b6554018af713707\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:17:50.868485 containerd[1448]: time="2025-09-10T00:17:50.868453296Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cd6dcff69-qt9l6,Uid:9eadb910-646d-43a4-b7b4-6b854d565ea6,Namespace:calico-apiserver,Attempt:0,} failed, error" 
error="failed to setup network for sandbox \"237b2951e6d066a5ef349d91ec76cb766d252860227afe68b6554018af713707\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:17:50.869058 kubelet[2491]: E0910 00:17:50.869022 2491 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"237b2951e6d066a5ef349d91ec76cb766d252860227afe68b6554018af713707\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:17:50.869170 kubelet[2491]: E0910 00:17:50.869095 2491 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"237b2951e6d066a5ef349d91ec76cb766d252860227afe68b6554018af713707\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cd6dcff69-qt9l6" Sep 10 00:17:50.874044 kubelet[2491]: E0910 00:17:50.874011 2491 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"237b2951e6d066a5ef349d91ec76cb766d252860227afe68b6554018af713707\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cd6dcff69-qt9l6" Sep 10 00:17:50.874157 kubelet[2491]: E0910 00:17:50.874099 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6cd6dcff69-qt9l6_calico-apiserver(9eadb910-646d-43a4-b7b4-6b854d565ea6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6cd6dcff69-qt9l6_calico-apiserver(9eadb910-646d-43a4-b7b4-6b854d565ea6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"237b2951e6d066a5ef349d91ec76cb766d252860227afe68b6554018af713707\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6cd6dcff69-qt9l6" podUID="9eadb910-646d-43a4-b7b4-6b854d565ea6" Sep 10 00:17:50.875982 containerd[1448]: time="2025-09-10T00:17:50.875880860Z" level=error msg="Failed to destroy network for sandbox \"f545795cd7c9a84cc5868a255fc2edd09a29cadcb3d54017ee515da41c43e964\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:17:50.879221 containerd[1448]: time="2025-09-10T00:17:50.878396062Z" level=error msg="Failed to destroy network for sandbox \"c926cec19e0b2deb730a9bb6354c1568bbfabdb70a7d55cc462fe937d639cfcc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:17:50.880107 systemd[1]: Created slice kubepods-besteffort-podcc428a56_c099_4362_852b_dab9e5d9f7b7.slice - libcontainer container 
kubepods-besteffort-podcc428a56_c099_4362_852b_dab9e5d9f7b7.slice. Sep 10 00:17:50.880689 containerd[1448]: time="2025-09-10T00:17:50.880486943Z" level=error msg="encountered an error cleaning up failed sandbox \"c926cec19e0b2deb730a9bb6354c1568bbfabdb70a7d55cc462fe937d639cfcc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:17:50.880689 containerd[1448]: time="2025-09-10T00:17:50.880550463Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-njj6z,Uid:48ece463-fbe3-4a32-8fd2-523723a890ae,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c926cec19e0b2deb730a9bb6354c1568bbfabdb70a7d55cc462fe937d639cfcc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:17:50.881407 kubelet[2491]: E0910 00:17:50.881017 2491 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c926cec19e0b2deb730a9bb6354c1568bbfabdb70a7d55cc462fe937d639cfcc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:17:50.881407 kubelet[2491]: E0910 00:17:50.881057 2491 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c926cec19e0b2deb730a9bb6354c1568bbfabdb70a7d55cc462fe937d639cfcc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-njj6z" Sep 10 00:17:50.881407 kubelet[2491]: E0910 00:17:50.881080 2491 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c926cec19e0b2deb730a9bb6354c1568bbfabdb70a7d55cc462fe937d639cfcc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-njj6z" Sep 10 00:17:50.881547 kubelet[2491]: E0910 00:17:50.881115 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-njj6z_calico-system(48ece463-fbe3-4a32-8fd2-523723a890ae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-njj6z_calico-system(48ece463-fbe3-4a32-8fd2-523723a890ae)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c926cec19e0b2deb730a9bb6354c1568bbfabdb70a7d55cc462fe937d639cfcc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-njj6z" podUID="48ece463-fbe3-4a32-8fd2-523723a890ae" Sep 10 00:17:50.884375 containerd[1448]: time="2025-09-10T00:17:50.884237345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-65xbg,Uid:cc428a56-c099-4362-852b-dab9e5d9f7b7,Namespace:calico-system,Attempt:0,}" Sep 10 00:17:50.885731 
containerd[1448]: time="2025-09-10T00:17:50.885692386Z" level=error msg="encountered an error cleaning up failed sandbox \"f545795cd7c9a84cc5868a255fc2edd09a29cadcb3d54017ee515da41c43e964\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:17:50.886239 containerd[1448]: time="2025-09-10T00:17:50.886205666Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wv9tb,Uid:c5fb1d1a-aa75-4564-aca9-9712fec491bb,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f545795cd7c9a84cc5868a255fc2edd09a29cadcb3d54017ee515da41c43e964\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:17:50.887365 kubelet[2491]: E0910 00:17:50.887333 2491 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f545795cd7c9a84cc5868a255fc2edd09a29cadcb3d54017ee515da41c43e964\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:17:50.887441 kubelet[2491]: E0910 00:17:50.887374 2491 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f545795cd7c9a84cc5868a255fc2edd09a29cadcb3d54017ee515da41c43e964\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-wv9tb" Sep 10 00:17:50.887441 kubelet[2491]: E0910 00:17:50.887404 2491 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f545795cd7c9a84cc5868a255fc2edd09a29cadcb3d54017ee515da41c43e964\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-wv9tb" Sep 10 00:17:50.887496 kubelet[2491]: E0910 00:17:50.887443 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-wv9tb_kube-system(c5fb1d1a-aa75-4564-aca9-9712fec491bb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-wv9tb_kube-system(c5fb1d1a-aa75-4564-aca9-9712fec491bb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f545795cd7c9a84cc5868a255fc2edd09a29cadcb3d54017ee515da41c43e964\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-wv9tb" podUID="c5fb1d1a-aa75-4564-aca9-9712fec491bb" Sep 10 00:17:50.898634 containerd[1448]: time="2025-09-10T00:17:50.897941113Z" level=error msg="Failed to destroy network for sandbox \"236b8b75ac710cdcccdabd902fb2ca19e81ab2c20906abf395cf84177e19ad07\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Sep 10 00:17:50.898634 containerd[1448]: time="2025-09-10T00:17:50.898362234Z" level=error msg="encountered an error cleaning up failed sandbox \"236b8b75ac710cdcccdabd902fb2ca19e81ab2c20906abf395cf84177e19ad07\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:17:50.898634 containerd[1448]: time="2025-09-10T00:17:50.898403354Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cd6dcff69-pvd28,Uid:e6f75a12-f91b-4f7f-9f91-ba290e49ea84,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"236b8b75ac710cdcccdabd902fb2ca19e81ab2c20906abf395cf84177e19ad07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:17:50.898812 kubelet[2491]: E0910 00:17:50.898593 2491 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"236b8b75ac710cdcccdabd902fb2ca19e81ab2c20906abf395cf84177e19ad07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:17:50.898812 kubelet[2491]: E0910 00:17:50.898657 2491 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"236b8b75ac710cdcccdabd902fb2ca19e81ab2c20906abf395cf84177e19ad07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cd6dcff69-pvd28" Sep 10 00:17:50.898812 kubelet[2491]: E0910 00:17:50.898673 2491 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"236b8b75ac710cdcccdabd902fb2ca19e81ab2c20906abf395cf84177e19ad07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cd6dcff69-pvd28" Sep 10 00:17:50.898901 kubelet[2491]: E0910 00:17:50.898719 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6cd6dcff69-pvd28_calico-apiserver(e6f75a12-f91b-4f7f-9f91-ba290e49ea84)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6cd6dcff69-pvd28_calico-apiserver(e6f75a12-f91b-4f7f-9f91-ba290e49ea84)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"236b8b75ac710cdcccdabd902fb2ca19e81ab2c20906abf395cf84177e19ad07\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6cd6dcff69-pvd28" podUID="e6f75a12-f91b-4f7f-9f91-ba290e49ea84" Sep 10 00:17:50.901997 containerd[1448]: time="2025-09-10T00:17:50.901733596Z" level=error msg="Failed to destroy network for sandbox \"3021ec43b963f0388497de440343db09f02818aaab7fe38470f1090c4ce0c9b9\"" error="plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:17:50.902345 containerd[1448]: time="2025-09-10T00:17:50.902309396Z" level=error msg="encountered an error cleaning up failed sandbox \"3021ec43b963f0388497de440343db09f02818aaab7fe38470f1090c4ce0c9b9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:17:50.902396 containerd[1448]: time="2025-09-10T00:17:50.902364876Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79d7db7c55-h6dqb,Uid:280ac556-7fbc-4068-a5ff-d67586b6c1c2,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3021ec43b963f0388497de440343db09f02818aaab7fe38470f1090c4ce0c9b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:17:50.902906 kubelet[2491]: E0910 00:17:50.902839 2491 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3021ec43b963f0388497de440343db09f02818aaab7fe38470f1090c4ce0c9b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:17:50.902971 kubelet[2491]: E0910 00:17:50.902919 2491 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3021ec43b963f0388497de440343db09f02818aaab7fe38470f1090c4ce0c9b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-79d7db7c55-h6dqb" Sep 10 00:17:50.902971 kubelet[2491]: E0910 00:17:50.902936 2491 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3021ec43b963f0388497de440343db09f02818aaab7fe38470f1090c4ce0c9b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-79d7db7c55-h6dqb" Sep 10 00:17:50.903027 kubelet[2491]: E0910 00:17:50.902980 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-79d7db7c55-h6dqb_calico-system(280ac556-7fbc-4068-a5ff-d67586b6c1c2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-79d7db7c55-h6dqb_calico-system(280ac556-7fbc-4068-a5ff-d67586b6c1c2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3021ec43b963f0388497de440343db09f02818aaab7fe38470f1090c4ce0c9b9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-79d7db7c55-h6dqb" podUID="280ac556-7fbc-4068-a5ff-d67586b6c1c2" Sep 10 00:17:50.904192 containerd[1448]: time="2025-09-10T00:17:50.903957317Z" level=error msg="Failed to destroy network for sandbox 
\"df3fdd8eb494be2e0bbfda368575310fac5cd793ff776dffd99966e4ab4c9a5d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:17:50.905236 containerd[1448]: time="2025-09-10T00:17:50.904824317Z" level=error msg="encountered an error cleaning up failed sandbox \"df3fdd8eb494be2e0bbfda368575310fac5cd793ff776dffd99966e4ab4c9a5d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:17:50.905236 containerd[1448]: time="2025-09-10T00:17:50.904887557Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qlcl5,Uid:2f22f2e0-6fae-4277-8ce4-e71e5b1601af,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"df3fdd8eb494be2e0bbfda368575310fac5cd793ff776dffd99966e4ab4c9a5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:17:50.905335 kubelet[2491]: E0910 00:17:50.905020 2491 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df3fdd8eb494be2e0bbfda368575310fac5cd793ff776dffd99966e4ab4c9a5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:17:50.905335 kubelet[2491]: E0910 00:17:50.905058 2491 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df3fdd8eb494be2e0bbfda368575310fac5cd793ff776dffd99966e4ab4c9a5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-qlcl5" Sep 10 00:17:50.905335 kubelet[2491]: E0910 00:17:50.905075 2491 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df3fdd8eb494be2e0bbfda368575310fac5cd793ff776dffd99966e4ab4c9a5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-qlcl5" Sep 10 00:17:50.905599 kubelet[2491]: E0910 00:17:50.905108 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-qlcl5_kube-system(2f22f2e0-6fae-4277-8ce4-e71e5b1601af)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-qlcl5_kube-system(2f22f2e0-6fae-4277-8ce4-e71e5b1601af)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"df3fdd8eb494be2e0bbfda368575310fac5cd793ff776dffd99966e4ab4c9a5d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-qlcl5" podUID="2f22f2e0-6fae-4277-8ce4-e71e5b1601af" Sep 10 00:17:50.906434 containerd[1448]: time="2025-09-10T00:17:50.906397518Z" 
level=error msg="Failed to destroy network for sandbox \"f5b08eb23cffa606682e4fc37107e987818118f64a539991a6160e1ac69b86e3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:17:50.906880 containerd[1448]: time="2025-09-10T00:17:50.906846639Z" level=error msg="encountered an error cleaning up failed sandbox \"f5b08eb23cffa606682e4fc37107e987818118f64a539991a6160e1ac69b86e3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:17:50.906949 containerd[1448]: time="2025-09-10T00:17:50.906894079Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55c7f99b6f-f6wcn,Uid:22f16cc2-19f3-4586-9c67-213437e8718f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f5b08eb23cffa606682e4fc37107e987818118f64a539991a6160e1ac69b86e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:17:50.907358 kubelet[2491]: E0910 00:17:50.907075 2491 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5b08eb23cffa606682e4fc37107e987818118f64a539991a6160e1ac69b86e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:17:50.907358 kubelet[2491]: E0910 00:17:50.907151 2491 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5b08eb23cffa606682e4fc37107e987818118f64a539991a6160e1ac69b86e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55c7f99b6f-f6wcn" Sep 10 00:17:50.907358 kubelet[2491]: E0910 00:17:50.907172 2491 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5b08eb23cffa606682e4fc37107e987818118f64a539991a6160e1ac69b86e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55c7f99b6f-f6wcn" Sep 10 00:17:50.907442 kubelet[2491]: E0910 00:17:50.907231 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-55c7f99b6f-f6wcn_calico-system(22f16cc2-19f3-4586-9c67-213437e8718f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-55c7f99b6f-f6wcn_calico-system(22f16cc2-19f3-4586-9c67-213437e8718f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f5b08eb23cffa606682e4fc37107e987818118f64a539991a6160e1ac69b86e3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/calico-kube-controllers-55c7f99b6f-f6wcn" podUID="22f16cc2-19f3-4586-9c67-213437e8718f" Sep 10 00:17:50.944743 containerd[1448]: time="2025-09-10T00:17:50.944684221Z" level=error msg="Failed to destroy network for sandbox \"718b009935c17339c8db7c4995f78c36640d04e24a57d704d800e7bfb8296b52\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:17:50.945042 containerd[1448]: time="2025-09-10T00:17:50.945006021Z" level=error msg="encountered an error cleaning up failed sandbox \"718b009935c17339c8db7c4995f78c36640d04e24a57d704d800e7bfb8296b52\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:17:50.945091 containerd[1448]: time="2025-09-10T00:17:50.945055261Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-65xbg,Uid:cc428a56-c099-4362-852b-dab9e5d9f7b7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"718b009935c17339c8db7c4995f78c36640d04e24a57d704d800e7bfb8296b52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:17:50.945334 kubelet[2491]: E0910 00:17:50.945285 2491 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"718b009935c17339c8db7c4995f78c36640d04e24a57d704d800e7bfb8296b52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:17:50.945382 kubelet[2491]: E0910 00:17:50.945353 2491 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"718b009935c17339c8db7c4995f78c36640d04e24a57d704d800e7bfb8296b52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-65xbg" Sep 10 00:17:50.945382 kubelet[2491]: E0910 00:17:50.945373 2491 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"718b009935c17339c8db7c4995f78c36640d04e24a57d704d800e7bfb8296b52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-65xbg" Sep 10 00:17:50.945451 kubelet[2491]: E0910 00:17:50.945425 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-65xbg_calico-system(cc428a56-c099-4362-852b-dab9e5d9f7b7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-65xbg_calico-system(cc428a56-c099-4362-852b-dab9e5d9f7b7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"718b009935c17339c8db7c4995f78c36640d04e24a57d704d800e7bfb8296b52\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-65xbg" podUID="cc428a56-c099-4362-852b-dab9e5d9f7b7" Sep 10 00:17:50.967925 kubelet[2491]: I0910 00:17:50.967819 2491 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df3fdd8eb494be2e0bbfda368575310fac5cd793ff776dffd99966e4ab4c9a5d" Sep 10 00:17:50.968830 containerd[1448]: time="2025-09-10T00:17:50.968787635Z" level=info msg="StopPodSandbox for \"df3fdd8eb494be2e0bbfda368575310fac5cd793ff776dffd99966e4ab4c9a5d\"" Sep 10 00:17:50.968998 containerd[1448]: time="2025-09-10T00:17:50.968969435Z" level=info msg="Ensure that sandbox df3fdd8eb494be2e0bbfda368575310fac5cd793ff776dffd99966e4ab4c9a5d in task-service has been cleanup successfully" Sep 10 00:17:50.970874 kubelet[2491]: I0910 00:17:50.970851 2491 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5b08eb23cffa606682e4fc37107e987818118f64a539991a6160e1ac69b86e3" Sep 10 00:17:50.971825 containerd[1448]: time="2025-09-10T00:17:50.971786957Z" level=info msg="StopPodSandbox for \"f5b08eb23cffa606682e4fc37107e987818118f64a539991a6160e1ac69b86e3\"" Sep 10 00:17:50.972671 containerd[1448]: time="2025-09-10T00:17:50.972079397Z" level=info msg="Ensure that sandbox f5b08eb23cffa606682e4fc37107e987818118f64a539991a6160e1ac69b86e3 in task-service has been cleanup successfully" Sep 10 00:17:50.973190 kubelet[2491]: I0910 00:17:50.973135 2491 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="237b2951e6d066a5ef349d91ec76cb766d252860227afe68b6554018af713707" Sep 10 00:17:50.973970 containerd[1448]: time="2025-09-10T00:17:50.973894718Z" level=info msg="StopPodSandbox for \"237b2951e6d066a5ef349d91ec76cb766d252860227afe68b6554018af713707\"" Sep 10 00:17:50.974221 containerd[1448]: time="2025-09-10T00:17:50.974117238Z" level=info msg="Ensure that sandbox 237b2951e6d066a5ef349d91ec76cb766d252860227afe68b6554018af713707 in task-service has been cleanup successfully" Sep 10 00:17:50.977162 kubelet[2491]: I0910 00:17:50.977142 2491 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f545795cd7c9a84cc5868a255fc2edd09a29cadcb3d54017ee515da41c43e964" Sep 10 00:17:50.977624 containerd[1448]: time="2025-09-10T00:17:50.977590080Z" level=info msg="StopPodSandbox for \"f545795cd7c9a84cc5868a255fc2edd09a29cadcb3d54017ee515da41c43e964\"" Sep 10 00:17:50.977756 containerd[1448]: time="2025-09-10T00:17:50.977714640Z" level=info msg="Ensure that sandbox f545795cd7c9a84cc5868a255fc2edd09a29cadcb3d54017ee515da41c43e964 in task-service has been cleanup successfully" Sep 10 00:17:50.984213 containerd[1448]: time="2025-09-10T00:17:50.984068404Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 10 00:17:50.985875 kubelet[2491]: I0910 00:17:50.985847 2491 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="718b009935c17339c8db7c4995f78c36640d04e24a57d704d800e7bfb8296b52" Sep 10 00:17:50.986437 containerd[1448]: time="2025-09-10T00:17:50.986399325Z" level=info msg="StopPodSandbox for \"718b009935c17339c8db7c4995f78c36640d04e24a57d704d800e7bfb8296b52\"" Sep 10 00:17:50.986801 containerd[1448]: time="2025-09-10T00:17:50.986586445Z" level=info msg="Ensure that sandbox 718b009935c17339c8db7c4995f78c36640d04e24a57d704d800e7bfb8296b52 in task-service has been cleanup successfully" Sep 10 00:17:50.989967 kubelet[2491]: I0910 00:17:50.989937 2491 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="c926cec19e0b2deb730a9bb6354c1568bbfabdb70a7d55cc462fe937d639cfcc" Sep 10 00:17:50.991768 containerd[1448]: time="2025-09-10T00:17:50.990848008Z" level=info msg="StopPodSandbox for \"c926cec19e0b2deb730a9bb6354c1568bbfabdb70a7d55cc462fe937d639cfcc\"" Sep 10 00:17:50.991768 containerd[1448]: time="2025-09-10T00:17:50.991654608Z" level=info msg="Ensure that sandbox c926cec19e0b2deb730a9bb6354c1568bbfabdb70a7d55cc462fe937d639cfcc in task-service has been cleanup successfully" Sep 10 00:17:50.995549 kubelet[2491]: I0910 00:17:50.995511 2491 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3021ec43b963f0388497de440343db09f02818aaab7fe38470f1090c4ce0c9b9" Sep 10 00:17:50.997554 containerd[1448]: time="2025-09-10T00:17:50.996772571Z" level=info msg="StopPodSandbox for \"3021ec43b963f0388497de440343db09f02818aaab7fe38470f1090c4ce0c9b9\"" Sep 10 00:17:50.997554 containerd[1448]: time="2025-09-10T00:17:50.996963251Z" level=info msg="Ensure that sandbox 3021ec43b963f0388497de440343db09f02818aaab7fe38470f1090c4ce0c9b9 in task-service has been cleanup successfully" Sep 10 00:17:50.997658 kubelet[2491]: I0910 00:17:50.997186 2491 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="236b8b75ac710cdcccdabd902fb2ca19e81ab2c20906abf395cf84177e19ad07" Sep 10 00:17:50.997787 containerd[1448]: time="2025-09-10T00:17:50.997698372Z" level=info msg="StopPodSandbox for \"236b8b75ac710cdcccdabd902fb2ca19e81ab2c20906abf395cf84177e19ad07\"" Sep 10 00:17:50.999176 containerd[1448]: time="2025-09-10T00:17:50.999134973Z" level=info msg="Ensure that sandbox 236b8b75ac710cdcccdabd902fb2ca19e81ab2c20906abf395cf84177e19ad07 in task-service has been cleanup successfully" Sep 10 00:17:51.026319 containerd[1448]: time="2025-09-10T00:17:51.026272068Z" level=error msg="StopPodSandbox for \"f5b08eb23cffa606682e4fc37107e987818118f64a539991a6160e1ac69b86e3\" failed" error="failed to destroy network for sandbox \"f5b08eb23cffa606682e4fc37107e987818118f64a539991a6160e1ac69b86e3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:17:51.026570 kubelet[2491]: E0910 00:17:51.026515 2491 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f5b08eb23cffa606682e4fc37107e987818118f64a539991a6160e1ac69b86e3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f5b08eb23cffa606682e4fc37107e987818118f64a539991a6160e1ac69b86e3" Sep 10 00:17:51.030853 kubelet[2491]: E0910 00:17:51.030784 2491 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f5b08eb23cffa606682e4fc37107e987818118f64a539991a6160e1ac69b86e3"} Sep 10 00:17:51.030915 kubelet[2491]: E0910 00:17:51.030876 2491 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"22f16cc2-19f3-4586-9c67-213437e8718f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f5b08eb23cffa606682e4fc37107e987818118f64a539991a6160e1ac69b86e3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" Sep 10 00:17:51.030976 kubelet[2491]: E0910 00:17:51.030909 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"22f16cc2-19f3-4586-9c67-213437e8718f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f5b08eb23cffa606682e4fc37107e987818118f64a539991a6160e1ac69b86e3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-55c7f99b6f-f6wcn" podUID="22f16cc2-19f3-4586-9c67-213437e8718f" Sep 10 00:17:51.036922 containerd[1448]: time="2025-09-10T00:17:51.036875954Z" level=error msg="StopPodSandbox for \"f545795cd7c9a84cc5868a255fc2edd09a29cadcb3d54017ee515da41c43e964\" failed" error="failed to destroy network for sandbox \"f545795cd7c9a84cc5868a255fc2edd09a29cadcb3d54017ee515da41c43e964\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:17:51.037129 kubelet[2491]: E0910 00:17:51.037079 2491 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f545795cd7c9a84cc5868a255fc2edd09a29cadcb3d54017ee515da41c43e964\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f545795cd7c9a84cc5868a255fc2edd09a29cadcb3d54017ee515da41c43e964" Sep 10 00:17:51.037204 containerd[1448]: time="2025-09-10T00:17:51.037103914Z" level=error msg="StopPodSandbox for \"718b009935c17339c8db7c4995f78c36640d04e24a57d704d800e7bfb8296b52\" failed" error="failed to destroy network for sandbox \"718b009935c17339c8db7c4995f78c36640d04e24a57d704d800e7bfb8296b52\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:17:51.037235 kubelet[2491]: E0910 00:17:51.037140 2491 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f545795cd7c9a84cc5868a255fc2edd09a29cadcb3d54017ee515da41c43e964"} Sep 10 00:17:51.037235 kubelet[2491]: E0910 00:17:51.037174 2491 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c5fb1d1a-aa75-4564-aca9-9712fec491bb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f545795cd7c9a84cc5868a255fc2edd09a29cadcb3d54017ee515da41c43e964\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 10 00:17:51.037235 kubelet[2491]: E0910 00:17:51.037203 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c5fb1d1a-aa75-4564-aca9-9712fec491bb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f545795cd7c9a84cc5868a255fc2edd09a29cadcb3d54017ee515da41c43e964\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-674b8bbfcf-wv9tb" podUID="c5fb1d1a-aa75-4564-aca9-9712fec491bb" Sep 10 00:17:51.037343 kubelet[2491]: E0910 00:17:51.037272 2491 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"718b009935c17339c8db7c4995f78c36640d04e24a57d704d800e7bfb8296b52\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="718b009935c17339c8db7c4995f78c36640d04e24a57d704d800e7bfb8296b52" Sep 10 00:17:51.037343 kubelet[2491]: E0910 00:17:51.037292 2491 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"718b009935c17339c8db7c4995f78c36640d04e24a57d704d800e7bfb8296b52"} Sep 10 00:17:51.037343 kubelet[2491]: E0910 00:17:51.037309 2491 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cc428a56-c099-4362-852b-dab9e5d9f7b7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"718b009935c17339c8db7c4995f78c36640d04e24a57d704d800e7bfb8296b52\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 10 00:17:51.037343 kubelet[2491]: E0910 00:17:51.037328 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cc428a56-c099-4362-852b-dab9e5d9f7b7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"718b009935c17339c8db7c4995f78c36640d04e24a57d704d800e7bfb8296b52\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-65xbg" podUID="cc428a56-c099-4362-852b-dab9e5d9f7b7" Sep 10 00:17:51.039912 containerd[1448]: time="2025-09-10T00:17:51.039856235Z" level=error msg="StopPodSandbox for \"237b2951e6d066a5ef349d91ec76cb766d252860227afe68b6554018af713707\" failed" error="failed to destroy network for sandbox \"237b2951e6d066a5ef349d91ec76cb766d252860227afe68b6554018af713707\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:17:51.040094 kubelet[2491]: E0910 00:17:51.040037 2491 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"237b2951e6d066a5ef349d91ec76cb766d252860227afe68b6554018af713707\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="237b2951e6d066a5ef349d91ec76cb766d252860227afe68b6554018af713707" Sep 10 00:17:51.040148 kubelet[2491]: E0910 00:17:51.040098 2491 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"237b2951e6d066a5ef349d91ec76cb766d252860227afe68b6554018af713707"} Sep 10 00:17:51.040148 kubelet[2491]: E0910 00:17:51.040128 2491 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9eadb910-646d-43a4-b7b4-6b854d565ea6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = 
failed to destroy network for sandbox \\\"237b2951e6d066a5ef349d91ec76cb766d252860227afe68b6554018af713707\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 10 00:17:51.040233 kubelet[2491]: E0910 00:17:51.040146 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9eadb910-646d-43a4-b7b4-6b854d565ea6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"237b2951e6d066a5ef349d91ec76cb766d252860227afe68b6554018af713707\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6cd6dcff69-qt9l6" podUID="9eadb910-646d-43a4-b7b4-6b854d565ea6" Sep 10 00:17:51.040509 containerd[1448]: time="2025-09-10T00:17:51.040475076Z" level=error msg="StopPodSandbox for \"df3fdd8eb494be2e0bbfda368575310fac5cd793ff776dffd99966e4ab4c9a5d\" failed" error="failed to destroy network for sandbox \"df3fdd8eb494be2e0bbfda368575310fac5cd793ff776dffd99966e4ab4c9a5d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:17:51.040640 kubelet[2491]: E0910 00:17:51.040615 2491 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"df3fdd8eb494be2e0bbfda368575310fac5cd793ff776dffd99966e4ab4c9a5d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="df3fdd8eb494be2e0bbfda368575310fac5cd793ff776dffd99966e4ab4c9a5d" Sep 10 00:17:51.040680 kubelet[2491]: E0910 00:17:51.040657 2491 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"df3fdd8eb494be2e0bbfda368575310fac5cd793ff776dffd99966e4ab4c9a5d"} Sep 10 00:17:51.040712 kubelet[2491]: E0910 00:17:51.040681 2491 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2f22f2e0-6fae-4277-8ce4-e71e5b1601af\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"df3fdd8eb494be2e0bbfda368575310fac5cd793ff776dffd99966e4ab4c9a5d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 10 00:17:51.040712 kubelet[2491]: E0910 00:17:51.040699 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2f22f2e0-6fae-4277-8ce4-e71e5b1601af\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"df3fdd8eb494be2e0bbfda368575310fac5cd793ff776dffd99966e4ab4c9a5d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-qlcl5" podUID="2f22f2e0-6fae-4277-8ce4-e71e5b1601af" Sep 10 00:17:51.050346 containerd[1448]: time="2025-09-10T00:17:51.050298081Z" level=error msg="StopPodSandbox for 
\"c926cec19e0b2deb730a9bb6354c1568bbfabdb70a7d55cc462fe937d639cfcc\" failed" error="failed to destroy network for sandbox \"c926cec19e0b2deb730a9bb6354c1568bbfabdb70a7d55cc462fe937d639cfcc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:17:51.050482 kubelet[2491]: E0910 00:17:51.050451 2491 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c926cec19e0b2deb730a9bb6354c1568bbfabdb70a7d55cc462fe937d639cfcc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c926cec19e0b2deb730a9bb6354c1568bbfabdb70a7d55cc462fe937d639cfcc" Sep 10 00:17:51.050545 kubelet[2491]: E0910 00:17:51.050489 2491 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c926cec19e0b2deb730a9bb6354c1568bbfabdb70a7d55cc462fe937d639cfcc"} Sep 10 00:17:51.050545 kubelet[2491]: E0910 00:17:51.050515 2491 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"48ece463-fbe3-4a32-8fd2-523723a890ae\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c926cec19e0b2deb730a9bb6354c1568bbfabdb70a7d55cc462fe937d639cfcc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 10 00:17:51.050545 kubelet[2491]: E0910 00:17:51.050534 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"48ece463-fbe3-4a32-8fd2-523723a890ae\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c926cec19e0b2deb730a9bb6354c1568bbfabdb70a7d55cc462fe937d639cfcc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-njj6z" podUID="48ece463-fbe3-4a32-8fd2-523723a890ae" Sep 10 00:17:51.055191 containerd[1448]: time="2025-09-10T00:17:51.055150524Z" level=error msg="StopPodSandbox for \"236b8b75ac710cdcccdabd902fb2ca19e81ab2c20906abf395cf84177e19ad07\" failed" error="failed to destroy network for sandbox \"236b8b75ac710cdcccdabd902fb2ca19e81ab2c20906abf395cf84177e19ad07\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:17:51.055412 kubelet[2491]: E0910 00:17:51.055382 2491 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"236b8b75ac710cdcccdabd902fb2ca19e81ab2c20906abf395cf84177e19ad07\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="236b8b75ac710cdcccdabd902fb2ca19e81ab2c20906abf395cf84177e19ad07" Sep 10 00:17:51.055454 kubelet[2491]: E0910 00:17:51.055419 2491 kuberuntime_manager.go:1586] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"236b8b75ac710cdcccdabd902fb2ca19e81ab2c20906abf395cf84177e19ad07"} Sep 10 00:17:51.055483 kubelet[2491]: E0910 00:17:51.055462 2491 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e6f75a12-f91b-4f7f-9f91-ba290e49ea84\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"236b8b75ac710cdcccdabd902fb2ca19e81ab2c20906abf395cf84177e19ad07\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 10 00:17:51.055525 kubelet[2491]: E0910 00:17:51.055486 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e6f75a12-f91b-4f7f-9f91-ba290e49ea84\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"236b8b75ac710cdcccdabd902fb2ca19e81ab2c20906abf395cf84177e19ad07\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6cd6dcff69-pvd28" podUID="e6f75a12-f91b-4f7f-9f91-ba290e49ea84" Sep 10 00:17:51.058170 containerd[1448]: time="2025-09-10T00:17:51.058114725Z" level=error msg="StopPodSandbox for \"3021ec43b963f0388497de440343db09f02818aaab7fe38470f1090c4ce0c9b9\" failed" error="failed to destroy network for sandbox \"3021ec43b963f0388497de440343db09f02818aaab7fe38470f1090c4ce0c9b9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:17:51.058308 kubelet[2491]: E0910 00:17:51.058278 2491 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3021ec43b963f0388497de440343db09f02818aaab7fe38470f1090c4ce0c9b9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3021ec43b963f0388497de440343db09f02818aaab7fe38470f1090c4ce0c9b9" Sep 10 00:17:51.058347 kubelet[2491]: E0910 00:17:51.058322 2491 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3021ec43b963f0388497de440343db09f02818aaab7fe38470f1090c4ce0c9b9"} Sep 10 00:17:51.058369 kubelet[2491]: E0910 00:17:51.058351 2491 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"280ac556-7fbc-4068-a5ff-d67586b6c1c2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3021ec43b963f0388497de440343db09f02818aaab7fe38470f1090c4ce0c9b9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 10 00:17:51.058408 kubelet[2491]: E0910 00:17:51.058369 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"280ac556-7fbc-4068-a5ff-d67586b6c1c2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3021ec43b963f0388497de440343db09f02818aaab7fe38470f1090c4ce0c9b9\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-79d7db7c55-h6dqb" podUID="280ac556-7fbc-4068-a5ff-d67586b6c1c2" Sep 10 00:17:51.691604 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-df3fdd8eb494be2e0bbfda368575310fac5cd793ff776dffd99966e4ab4c9a5d-shm.mount: Deactivated successfully. Sep 10 00:17:51.691696 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-237b2951e6d066a5ef349d91ec76cb766d252860227afe68b6554018af713707-shm.mount: Deactivated successfully. Sep 10 00:17:51.691744 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f545795cd7c9a84cc5868a255fc2edd09a29cadcb3d54017ee515da41c43e964-shm.mount: Deactivated successfully. Sep 10 00:17:52.384818 kubelet[2491]: I0910 00:17:52.384618 2491 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 10 00:17:52.386049 kubelet[2491]: E0910 00:17:52.386028 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:17:53.000948 kubelet[2491]: E0910 00:17:53.000916 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:17:54.792775 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2107730279.mount: Deactivated successfully. Sep 10 00:17:55.024060 containerd[1448]: time="2025-09-10T00:17:55.024005827Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:17:55.024575 containerd[1448]: time="2025-09-10T00:17:55.024549147Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Sep 10 00:17:55.025383 containerd[1448]: time="2025-09-10T00:17:55.025357067Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:17:55.027347 containerd[1448]: time="2025-09-10T00:17:55.027298428Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:17:55.028157 containerd[1448]: time="2025-09-10T00:17:55.027696788Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 4.043573704s" Sep 10 00:17:55.028157 containerd[1448]: time="2025-09-10T00:17:55.027727188Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 10 00:17:55.038225 containerd[1448]: time="2025-09-10T00:17:55.038188473Z" level=info msg="CreateContainer within sandbox \"5640243e391c5312d8ce52874b90b7ab9cc6943e9a6694fb5bf8fd14c2ea8aa6\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 10 00:17:55.069944 containerd[1448]: time="2025-09-10T00:17:55.069829726Z" level=info msg="CreateContainer within sandbox 
\"5640243e391c5312d8ce52874b90b7ab9cc6943e9a6694fb5bf8fd14c2ea8aa6\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f153fded1657f3daeaf17b9c425988969ba014861463a727f8d9c831bd77035f\"" Sep 10 00:17:55.070497 containerd[1448]: time="2025-09-10T00:17:55.070455527Z" level=info msg="StartContainer for \"f153fded1657f3daeaf17b9c425988969ba014861463a727f8d9c831bd77035f\"" Sep 10 00:17:55.142941 systemd[1]: Started cri-containerd-f153fded1657f3daeaf17b9c425988969ba014861463a727f8d9c831bd77035f.scope - libcontainer container f153fded1657f3daeaf17b9c425988969ba014861463a727f8d9c831bd77035f. Sep 10 00:17:55.169123 containerd[1448]: time="2025-09-10T00:17:55.169078809Z" level=info msg="StartContainer for \"f153fded1657f3daeaf17b9c425988969ba014861463a727f8d9c831bd77035f\" returns successfully" Sep 10 00:17:55.304626 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 10 00:17:55.304737 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 10 00:17:55.405430 containerd[1448]: time="2025-09-10T00:17:55.405231909Z" level=info msg="StopPodSandbox for \"3021ec43b963f0388497de440343db09f02818aaab7fe38470f1090c4ce0c9b9\"" Sep 10 00:17:55.577028 containerd[1448]: 2025-09-10 00:17:55.473 [INFO][3861] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3021ec43b963f0388497de440343db09f02818aaab7fe38470f1090c4ce0c9b9" Sep 10 00:17:55.577028 containerd[1448]: 2025-09-10 00:17:55.474 [INFO][3861] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3021ec43b963f0388497de440343db09f02818aaab7fe38470f1090c4ce0c9b9" iface="eth0" netns="/var/run/netns/cni-21360ac9-cf86-185b-a05c-540e9123eb14" Sep 10 00:17:55.577028 containerd[1448]: 2025-09-10 00:17:55.474 [INFO][3861] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3021ec43b963f0388497de440343db09f02818aaab7fe38470f1090c4ce0c9b9" iface="eth0" netns="/var/run/netns/cni-21360ac9-cf86-185b-a05c-540e9123eb14" Sep 10 00:17:55.577028 containerd[1448]: 2025-09-10 00:17:55.476 [INFO][3861] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3021ec43b963f0388497de440343db09f02818aaab7fe38470f1090c4ce0c9b9" iface="eth0" netns="/var/run/netns/cni-21360ac9-cf86-185b-a05c-540e9123eb14" Sep 10 00:17:55.577028 containerd[1448]: 2025-09-10 00:17:55.477 [INFO][3861] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3021ec43b963f0388497de440343db09f02818aaab7fe38470f1090c4ce0c9b9" Sep 10 00:17:55.577028 containerd[1448]: 2025-09-10 00:17:55.477 [INFO][3861] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3021ec43b963f0388497de440343db09f02818aaab7fe38470f1090c4ce0c9b9" Sep 10 00:17:55.577028 containerd[1448]: 2025-09-10 00:17:55.561 [INFO][3872] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3021ec43b963f0388497de440343db09f02818aaab7fe38470f1090c4ce0c9b9" HandleID="k8s-pod-network.3021ec43b963f0388497de440343db09f02818aaab7fe38470f1090c4ce0c9b9" Workload="localhost-k8s-whisker--79d7db7c55--h6dqb-eth0" Sep 10 00:17:55.577028 containerd[1448]: 2025-09-10 00:17:55.561 [INFO][3872] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:17:55.577028 containerd[1448]: 2025-09-10 00:17:55.561 [INFO][3872] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:17:55.577028 containerd[1448]: 2025-09-10 00:17:55.571 [WARNING][3872] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3021ec43b963f0388497de440343db09f02818aaab7fe38470f1090c4ce0c9b9" HandleID="k8s-pod-network.3021ec43b963f0388497de440343db09f02818aaab7fe38470f1090c4ce0c9b9" Workload="localhost-k8s-whisker--79d7db7c55--h6dqb-eth0" Sep 10 00:17:55.577028 containerd[1448]: 2025-09-10 00:17:55.571 [INFO][3872] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3021ec43b963f0388497de440343db09f02818aaab7fe38470f1090c4ce0c9b9" HandleID="k8s-pod-network.3021ec43b963f0388497de440343db09f02818aaab7fe38470f1090c4ce0c9b9" Workload="localhost-k8s-whisker--79d7db7c55--h6dqb-eth0" Sep 10 00:17:55.577028 containerd[1448]: 2025-09-10 00:17:55.573 [INFO][3872] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:17:55.577028 containerd[1448]: 2025-09-10 00:17:55.575 [INFO][3861] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3021ec43b963f0388497de440343db09f02818aaab7fe38470f1090c4ce0c9b9" Sep 10 00:17:55.577028 containerd[1448]: time="2025-09-10T00:17:55.577065302Z" level=info msg="TearDown network for sandbox \"3021ec43b963f0388497de440343db09f02818aaab7fe38470f1090c4ce0c9b9\" successfully" Sep 10 00:17:55.577028 containerd[1448]: time="2025-09-10T00:17:55.577090262Z" level=info msg="StopPodSandbox for \"3021ec43b963f0388497de440343db09f02818aaab7fe38470f1090c4ce0c9b9\" returns successfully" Sep 10 00:17:55.633714 kubelet[2491]: I0910 00:17:55.633676 2491 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/280ac556-7fbc-4068-a5ff-d67586b6c1c2-whisker-backend-key-pair\") pod \"280ac556-7fbc-4068-a5ff-d67586b6c1c2\" (UID: \"280ac556-7fbc-4068-a5ff-d67586b6c1c2\") " Sep 10 00:17:55.633714 kubelet[2491]: I0910 00:17:55.633716 2491 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/280ac556-7fbc-4068-a5ff-d67586b6c1c2-whisker-ca-bundle\") pod \"280ac556-7fbc-4068-a5ff-d67586b6c1c2\" (UID: \"280ac556-7fbc-4068-a5ff-d67586b6c1c2\") " Sep 10 00:17:55.634096 kubelet[2491]: I0910 00:17:55.633735 2491 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mcq82\" (UniqueName: \"kubernetes.io/projected/280ac556-7fbc-4068-a5ff-d67586b6c1c2-kube-api-access-mcq82\") pod \"280ac556-7fbc-4068-a5ff-d67586b6c1c2\" (UID: \"280ac556-7fbc-4068-a5ff-d67586b6c1c2\") " Sep 10 00:17:55.648519 kubelet[2491]: I0910 00:17:55.648424 2491 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/280ac556-7fbc-4068-a5ff-d67586b6c1c2-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "280ac556-7fbc-4068-a5ff-d67586b6c1c2" (UID: "280ac556-7fbc-4068-a5ff-d67586b6c1c2"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 10 00:17:55.648783 kubelet[2491]: I0910 00:17:55.648721 2491 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/280ac556-7fbc-4068-a5ff-d67586b6c1c2-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "280ac556-7fbc-4068-a5ff-d67586b6c1c2" (UID: "280ac556-7fbc-4068-a5ff-d67586b6c1c2"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 10 00:17:55.648962 kubelet[2491]: I0910 00:17:55.648936 2491 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/280ac556-7fbc-4068-a5ff-d67586b6c1c2-kube-api-access-mcq82" (OuterVolumeSpecName: "kube-api-access-mcq82") pod "280ac556-7fbc-4068-a5ff-d67586b6c1c2" (UID: "280ac556-7fbc-4068-a5ff-d67586b6c1c2"). InnerVolumeSpecName "kube-api-access-mcq82". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 10 00:17:55.734338 kubelet[2491]: I0910 00:17:55.734289 2491 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/280ac556-7fbc-4068-a5ff-d67586b6c1c2-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 10 00:17:55.734338 kubelet[2491]: I0910 00:17:55.734335 2491 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/280ac556-7fbc-4068-a5ff-d67586b6c1c2-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 10 00:17:55.734479 kubelet[2491]: I0910 00:17:55.734354 2491 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mcq82\" (UniqueName: \"kubernetes.io/projected/280ac556-7fbc-4068-a5ff-d67586b6c1c2-kube-api-access-mcq82\") on node \"localhost\" DevicePath \"\"" Sep 10 00:17:55.794388 systemd[1]: run-netns-cni\x2d21360ac9\x2dcf86\x2d185b\x2da05c\x2d540e9123eb14.mount: Deactivated successfully. Sep 10 00:17:55.794470 systemd[1]: var-lib-kubelet-pods-280ac556\x2d7fbc\x2d4068\x2da5ff\x2dd67586b6c1c2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmcq82.mount: Deactivated successfully. Sep 10 00:17:55.794527 systemd[1]: var-lib-kubelet-pods-280ac556\x2d7fbc\x2d4068\x2da5ff\x2dd67586b6c1c2-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 10 00:17:55.886045 systemd[1]: Removed slice kubepods-besteffort-pod280ac556_7fbc_4068_a5ff_d67586b6c1c2.slice - libcontainer container kubepods-besteffort-pod280ac556_7fbc_4068_a5ff_d67586b6c1c2.slice. Sep 10 00:17:56.030203 kubelet[2491]: I0910 00:17:56.030055 2491 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-k999g" podStartSLOduration=1.4143369080000001 podStartE2EDuration="13.030041254s" podCreationTimestamp="2025-09-10 00:17:43 +0000 UTC" firstStartedPulling="2025-09-10 00:17:43.412630283 +0000 UTC m=+23.620385327" lastFinishedPulling="2025-09-10 00:17:55.028334589 +0000 UTC m=+35.236089673" observedRunningTime="2025-09-10 00:17:56.029112013 +0000 UTC m=+36.236867097" watchObservedRunningTime="2025-09-10 00:17:56.030041254 +0000 UTC m=+36.237796338" Sep 10 00:17:56.130649 systemd[1]: Created slice kubepods-besteffort-pod440f4c5d_e3f7_4132_98dd_88cd5240ab5f.slice - libcontainer container kubepods-besteffort-pod440f4c5d_e3f7_4132_98dd_88cd5240ab5f.slice. 
Sep 10 00:17:56.138796 kubelet[2491]: I0910 00:17:56.138764 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/440f4c5d-e3f7-4132-98dd-88cd5240ab5f-whisker-ca-bundle\") pod \"whisker-59b6cbfcbb-gq69l\" (UID: \"440f4c5d-e3f7-4132-98dd-88cd5240ab5f\") " pod="calico-system/whisker-59b6cbfcbb-gq69l" Sep 10 00:17:56.139011 kubelet[2491]: I0910 00:17:56.138925 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddrqk\" (UniqueName: \"kubernetes.io/projected/440f4c5d-e3f7-4132-98dd-88cd5240ab5f-kube-api-access-ddrqk\") pod \"whisker-59b6cbfcbb-gq69l\" (UID: \"440f4c5d-e3f7-4132-98dd-88cd5240ab5f\") " pod="calico-system/whisker-59b6cbfcbb-gq69l" Sep 10 00:17:56.139011 kubelet[2491]: I0910 00:17:56.138954 2491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/440f4c5d-e3f7-4132-98dd-88cd5240ab5f-whisker-backend-key-pair\") pod \"whisker-59b6cbfcbb-gq69l\" (UID: \"440f4c5d-e3f7-4132-98dd-88cd5240ab5f\") " pod="calico-system/whisker-59b6cbfcbb-gq69l" Sep 10 00:17:56.433703 containerd[1448]: time="2025-09-10T00:17:56.433666535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-59b6cbfcbb-gq69l,Uid:440f4c5d-e3f7-4132-98dd-88cd5240ab5f,Namespace:calico-system,Attempt:0,}" Sep 10 00:17:56.559383 systemd-networkd[1379]: cali73942786591: Link UP Sep 10 00:17:56.559706 systemd-networkd[1379]: cali73942786591: Gained carrier Sep 10 00:17:56.573124 containerd[1448]: 2025-09-10 00:17:56.468 [INFO][3894] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 10 00:17:56.573124 containerd[1448]: 2025-09-10 00:17:56.483 [INFO][3894] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--59b6cbfcbb--gq69l-eth0 whisker-59b6cbfcbb- calico-system 440f4c5d-e3f7-4132-98dd-88cd5240ab5f 920 0 2025-09-10 00:17:56 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:59b6cbfcbb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-59b6cbfcbb-gq69l eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali73942786591 [] [] }} ContainerID="627e24adb5a89478d44afc03b6e0f74fe39a1d0f8cdb2f7484ce43d29387c54e" Namespace="calico-system" Pod="whisker-59b6cbfcbb-gq69l" WorkloadEndpoint="localhost-k8s-whisker--59b6cbfcbb--gq69l-" Sep 10 00:17:56.573124 containerd[1448]: 2025-09-10 00:17:56.483 [INFO][3894] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="627e24adb5a89478d44afc03b6e0f74fe39a1d0f8cdb2f7484ce43d29387c54e" Namespace="calico-system" Pod="whisker-59b6cbfcbb-gq69l" WorkloadEndpoint="localhost-k8s-whisker--59b6cbfcbb--gq69l-eth0" Sep 10 00:17:56.573124 containerd[1448]: 2025-09-10 00:17:56.511 [INFO][3910] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="627e24adb5a89478d44afc03b6e0f74fe39a1d0f8cdb2f7484ce43d29387c54e" HandleID="k8s-pod-network.627e24adb5a89478d44afc03b6e0f74fe39a1d0f8cdb2f7484ce43d29387c54e" Workload="localhost-k8s-whisker--59b6cbfcbb--gq69l-eth0" Sep 10 00:17:56.573124 containerd[1448]: 2025-09-10 00:17:56.512 [INFO][3910] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="627e24adb5a89478d44afc03b6e0f74fe39a1d0f8cdb2f7484ce43d29387c54e" HandleID="k8s-pod-network.627e24adb5a89478d44afc03b6e0f74fe39a1d0f8cdb2f7484ce43d29387c54e" Workload="localhost-k8s-whisker--59b6cbfcbb--gq69l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c810), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-59b6cbfcbb-gq69l", "timestamp":"2025-09-10 00:17:56.511974766 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 00:17:56.573124 containerd[1448]: 2025-09-10 00:17:56.512 [INFO][3910] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:17:56.573124 containerd[1448]: 2025-09-10 00:17:56.512 [INFO][3910] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:17:56.573124 containerd[1448]: 2025-09-10 00:17:56.512 [INFO][3910] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 00:17:56.573124 containerd[1448]: 2025-09-10 00:17:56.523 [INFO][3910] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.627e24adb5a89478d44afc03b6e0f74fe39a1d0f8cdb2f7484ce43d29387c54e" host="localhost" Sep 10 00:17:56.573124 containerd[1448]: 2025-09-10 00:17:56.528 [INFO][3910] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 00:17:56.573124 containerd[1448]: 2025-09-10 00:17:56.532 [INFO][3910] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 00:17:56.573124 containerd[1448]: 2025-09-10 00:17:56.534 [INFO][3910] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 00:17:56.573124 containerd[1448]: 2025-09-10 00:17:56.536 [INFO][3910] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 00:17:56.573124 containerd[1448]: 2025-09-10 00:17:56.536 [INFO][3910] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.627e24adb5a89478d44afc03b6e0f74fe39a1d0f8cdb2f7484ce43d29387c54e" host="localhost" Sep 10 00:17:56.573124 containerd[1448]: 2025-09-10 00:17:56.538 [INFO][3910] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.627e24adb5a89478d44afc03b6e0f74fe39a1d0f8cdb2f7484ce43d29387c54e Sep 10 00:17:56.573124 containerd[1448]: 2025-09-10 00:17:56.544 [INFO][3910] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.627e24adb5a89478d44afc03b6e0f74fe39a1d0f8cdb2f7484ce43d29387c54e" host="localhost" Sep 10 00:17:56.573124 containerd[1448]: 2025-09-10 00:17:56.550 [INFO][3910] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.627e24adb5a89478d44afc03b6e0f74fe39a1d0f8cdb2f7484ce43d29387c54e" host="localhost" Sep 10 00:17:56.573124 containerd[1448]: 2025-09-10 00:17:56.550 [INFO][3910] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.627e24adb5a89478d44afc03b6e0f74fe39a1d0f8cdb2f7484ce43d29387c54e" host="localhost" Sep 10 00:17:56.573124 containerd[1448]: 2025-09-10 00:17:56.550 [INFO][3910] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 10 00:17:56.573124 containerd[1448]: 2025-09-10 00:17:56.550 [INFO][3910] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="627e24adb5a89478d44afc03b6e0f74fe39a1d0f8cdb2f7484ce43d29387c54e" HandleID="k8s-pod-network.627e24adb5a89478d44afc03b6e0f74fe39a1d0f8cdb2f7484ce43d29387c54e" Workload="localhost-k8s-whisker--59b6cbfcbb--gq69l-eth0"
Sep 10 00:17:56.574152 containerd[1448]: 2025-09-10 00:17:56.552 [INFO][3894] cni-plugin/k8s.go 418: Populated endpoint ContainerID="627e24adb5a89478d44afc03b6e0f74fe39a1d0f8cdb2f7484ce43d29387c54e" Namespace="calico-system" Pod="whisker-59b6cbfcbb-gq69l" WorkloadEndpoint="localhost-k8s-whisker--59b6cbfcbb--gq69l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--59b6cbfcbb--gq69l-eth0", GenerateName:"whisker-59b6cbfcbb-", Namespace:"calico-system", SelfLink:"", UID:"440f4c5d-e3f7-4132-98dd-88cd5240ab5f", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 17, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"59b6cbfcbb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-59b6cbfcbb-gq69l", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali73942786591", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 10 00:17:56.574152 containerd[1448]: 2025-09-10 00:17:56.552 [INFO][3894] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="627e24adb5a89478d44afc03b6e0f74fe39a1d0f8cdb2f7484ce43d29387c54e" Namespace="calico-system" Pod="whisker-59b6cbfcbb-gq69l" WorkloadEndpoint="localhost-k8s-whisker--59b6cbfcbb--gq69l-eth0"
Sep 10 00:17:56.574152 containerd[1448]: 2025-09-10 00:17:56.552 [INFO][3894] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali73942786591 ContainerID="627e24adb5a89478d44afc03b6e0f74fe39a1d0f8cdb2f7484ce43d29387c54e" Namespace="calico-system" Pod="whisker-59b6cbfcbb-gq69l" WorkloadEndpoint="localhost-k8s-whisker--59b6cbfcbb--gq69l-eth0"
Sep 10 00:17:56.574152 containerd[1448]: 2025-09-10 00:17:56.560 [INFO][3894] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="627e24adb5a89478d44afc03b6e0f74fe39a1d0f8cdb2f7484ce43d29387c54e" Namespace="calico-system" Pod="whisker-59b6cbfcbb-gq69l" WorkloadEndpoint="localhost-k8s-whisker--59b6cbfcbb--gq69l-eth0"
Sep 10 00:17:56.574152 containerd[1448]: 2025-09-10 00:17:56.560 [INFO][3894] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="627e24adb5a89478d44afc03b6e0f74fe39a1d0f8cdb2f7484ce43d29387c54e" Namespace="calico-system" Pod="whisker-59b6cbfcbb-gq69l" WorkloadEndpoint="localhost-k8s-whisker--59b6cbfcbb--gq69l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--59b6cbfcbb--gq69l-eth0", GenerateName:"whisker-59b6cbfcbb-", Namespace:"calico-system", SelfLink:"", UID:"440f4c5d-e3f7-4132-98dd-88cd5240ab5f", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 17, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"59b6cbfcbb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"627e24adb5a89478d44afc03b6e0f74fe39a1d0f8cdb2f7484ce43d29387c54e", Pod:"whisker-59b6cbfcbb-gq69l", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali73942786591", MAC:"de:d8:f5:fd:07:66", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 10 00:17:56.574152 containerd[1448]: 2025-09-10 00:17:56.568 [INFO][3894] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="627e24adb5a89478d44afc03b6e0f74fe39a1d0f8cdb2f7484ce43d29387c54e" Namespace="calico-system" Pod="whisker-59b6cbfcbb-gq69l" WorkloadEndpoint="localhost-k8s-whisker--59b6cbfcbb--gq69l-eth0"
Sep 10 00:17:56.588072 containerd[1448]: time="2025-09-10T00:17:56.587860876Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 10 00:17:56.588072 containerd[1448]: time="2025-09-10T00:17:56.587905996Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 10 00:17:56.588072 containerd[1448]: time="2025-09-10T00:17:56.587916756Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:17:56.588072 containerd[1448]: time="2025-09-10T00:17:56.587984716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:17:56.603458 systemd[1]: Started cri-containerd-627e24adb5a89478d44afc03b6e0f74fe39a1d0f8cdb2f7484ce43d29387c54e.scope - libcontainer container 627e24adb5a89478d44afc03b6e0f74fe39a1d0f8cdb2f7484ce43d29387c54e.
Sep 10 00:17:56.625483 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 10 00:17:56.665620 containerd[1448]: time="2025-09-10T00:17:56.665574867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-59b6cbfcbb-gq69l,Uid:440f4c5d-e3f7-4132-98dd-88cd5240ab5f,Namespace:calico-system,Attempt:0,} returns sandbox id \"627e24adb5a89478d44afc03b6e0f74fe39a1d0f8cdb2f7484ce43d29387c54e\""
Sep 10 00:17:56.667618 containerd[1448]: time="2025-09-10T00:17:56.667584948Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\""
Sep 10 00:17:56.831770 kernel: bpftool[4083]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Sep 10 00:17:56.992646 systemd-networkd[1379]: vxlan.calico: Link UP
Sep 10 00:17:56.992653 systemd-networkd[1379]: vxlan.calico: Gained carrier
Sep 10 00:17:57.051047 kubelet[2491]: I0910 00:17:57.051016 2491 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 10 00:17:57.749136 containerd[1448]: time="2025-09-10T00:17:57.749077360Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:17:57.750658 containerd[1448]: time="2025-09-10T00:17:57.750599601Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606"
Sep 10 00:17:57.751880 containerd[1448]: time="2025-09-10T00:17:57.751768241Z" level=info msg="ImageCreate event name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:17:57.754105 containerd[1448]: time="2025-09-10T00:17:57.754070002Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:17:57.754801 containerd[1448]: time="2025-09-10T00:17:57.754768442Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 1.087125054s"
Sep 10 00:17:57.754871 containerd[1448]: time="2025-09-10T00:17:57.754804722Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\""
Sep 10 00:17:57.760254 containerd[1448]: time="2025-09-10T00:17:57.760204244Z" level=info msg="CreateContainer within sandbox \"627e24adb5a89478d44afc03b6e0f74fe39a1d0f8cdb2f7484ce43d29387c54e\" for container &ContainerMetadata{Name:whisker,Attempt:0,}"
Sep 10 00:17:57.776665 containerd[1448]: time="2025-09-10T00:17:57.776540450Z" level=info msg="CreateContainer within sandbox \"627e24adb5a89478d44afc03b6e0f74fe39a1d0f8cdb2f7484ce43d29387c54e\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"a1aa5379c13b410f62c4f6d9d8c01220660a6dda2d5064173999e08a26ffce05\""
Sep 10 00:17:57.777281 containerd[1448]: time="2025-09-10T00:17:57.777251571Z" level=info msg="StartContainer for \"a1aa5379c13b410f62c4f6d9d8c01220660a6dda2d5064173999e08a26ffce05\""
Sep 10 00:17:57.817107 systemd[1]: Started cri-containerd-a1aa5379c13b410f62c4f6d9d8c01220660a6dda2d5064173999e08a26ffce05.scope - libcontainer container a1aa5379c13b410f62c4f6d9d8c01220660a6dda2d5064173999e08a26ffce05.
Sep 10 00:17:57.847390 containerd[1448]: time="2025-09-10T00:17:57.846898917Z" level=info msg="StartContainer for \"a1aa5379c13b410f62c4f6d9d8c01220660a6dda2d5064173999e08a26ffce05\" returns successfully"
Sep 10 00:17:57.848820 containerd[1448]: time="2025-09-10T00:17:57.848459437Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\""
Sep 10 00:17:57.875265 kubelet[2491]: I0910 00:17:57.875215 2491 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="280ac556-7fbc-4068-a5ff-d67586b6c1c2" path="/var/lib/kubelet/pods/280ac556-7fbc-4068-a5ff-d67586b6c1c2/volumes"
Sep 10 00:17:58.407978 systemd-networkd[1379]: cali73942786591: Gained IPv6LL
Sep 10 00:17:58.599911 systemd-networkd[1379]: vxlan.calico: Gained IPv6LL
Sep 10 00:18:00.269438 systemd[1]: Started sshd@7-10.0.0.124:22-10.0.0.1:34120.service - OpenSSH per-connection server daemon (10.0.0.1:34120).
Sep 10 00:18:00.310050 sshd[4216]: Accepted publickey for core from 10.0.0.1 port 34120 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o
Sep 10 00:18:00.311398 sshd[4216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:18:00.315324 systemd-logind[1423]: New session 8 of user core.
Sep 10 00:18:00.321942 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 10 00:18:00.550282 sshd[4216]: pam_unix(sshd:session): session closed for user core
Sep 10 00:18:00.556003 systemd[1]: sshd@7-10.0.0.124:22-10.0.0.1:34120.service: Deactivated successfully.
Sep 10 00:18:00.557470 systemd[1]: session-8.scope: Deactivated successfully.
Sep 10 00:18:00.558845 systemd-logind[1423]: Session 8 logged out. Waiting for processes to exit.
Sep 10 00:18:00.560254 systemd-logind[1423]: Removed session 8.
Sep 10 00:18:03.717619 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3378873289.mount: Deactivated successfully.
Sep 10 00:18:03.795270 containerd[1448]: time="2025-09-10T00:18:03.795220361Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:18:03.795724 containerd[1448]: time="2025-09-10T00:18:03.795692201Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700"
Sep 10 00:18:03.799688 containerd[1448]: time="2025-09-10T00:18:03.799635162Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:18:03.801981 containerd[1448]: time="2025-09-10T00:18:03.801950283Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:18:03.802631 containerd[1448]: time="2025-09-10T00:18:03.802602203Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 5.954111446s"
Sep 10 00:18:03.802682 containerd[1448]: time="2025-09-10T00:18:03.802637603Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\""
Sep 10 00:18:03.806189 containerd[1448]: time="2025-09-10T00:18:03.806156124Z" level=info msg="CreateContainer within sandbox \"627e24adb5a89478d44afc03b6e0f74fe39a1d0f8cdb2f7484ce43d29387c54e\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}"
Sep 10 00:18:03.818811 containerd[1448]: time="2025-09-10T00:18:03.818736967Z" level=info msg="CreateContainer within sandbox \"627e24adb5a89478d44afc03b6e0f74fe39a1d0f8cdb2f7484ce43d29387c54e\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"0df35910d3ed360e4cfb00847369c277eae09f0b7ea8c89a8d3e4324dd4d5ca4\""
Sep 10 00:18:03.819453 containerd[1448]: time="2025-09-10T00:18:03.819424167Z" level=info msg="StartContainer for \"0df35910d3ed360e4cfb00847369c277eae09f0b7ea8c89a8d3e4324dd4d5ca4\""
Sep 10 00:18:03.856905 systemd[1]: Started cri-containerd-0df35910d3ed360e4cfb00847369c277eae09f0b7ea8c89a8d3e4324dd4d5ca4.scope - libcontainer container 0df35910d3ed360e4cfb00847369c277eae09f0b7ea8c89a8d3e4324dd4d5ca4.
Sep 10 00:18:03.873384 containerd[1448]: time="2025-09-10T00:18:03.873347461Z" level=info msg="StopPodSandbox for \"237b2951e6d066a5ef349d91ec76cb766d252860227afe68b6554018af713707\""
Sep 10 00:18:03.873486 containerd[1448]: time="2025-09-10T00:18:03.873449261Z" level=info msg="StopPodSandbox for \"718b009935c17339c8db7c4995f78c36640d04e24a57d704d800e7bfb8296b52\""
Sep 10 00:18:03.876072 containerd[1448]: time="2025-09-10T00:18:03.875785942Z" level=info msg="StopPodSandbox for \"236b8b75ac710cdcccdabd902fb2ca19e81ab2c20906abf395cf84177e19ad07\""
Sep 10 00:18:03.900173 containerd[1448]: time="2025-09-10T00:18:03.900120428Z" level=info msg="StartContainer for \"0df35910d3ed360e4cfb00847369c277eae09f0b7ea8c89a8d3e4324dd4d5ca4\" returns successfully"
Sep 10 00:18:04.043563 containerd[1448]: 2025-09-10 00:18:03.964 [INFO][4314] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="237b2951e6d066a5ef349d91ec76cb766d252860227afe68b6554018af713707"
Sep 10 00:18:04.043563 containerd[1448]: 2025-09-10 00:18:03.970 [INFO][4314] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="237b2951e6d066a5ef349d91ec76cb766d252860227afe68b6554018af713707" iface="eth0" netns="/var/run/netns/cni-e5c80ac8-f5c4-e9de-4dad-d80d44e535e6"
Sep 10 00:18:04.043563 containerd[1448]: 2025-09-10 00:18:03.971 [INFO][4314] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="237b2951e6d066a5ef349d91ec76cb766d252860227afe68b6554018af713707" iface="eth0" netns="/var/run/netns/cni-e5c80ac8-f5c4-e9de-4dad-d80d44e535e6"
Sep 10 00:18:04.043563 containerd[1448]: 2025-09-10 00:18:03.971 [INFO][4314] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="237b2951e6d066a5ef349d91ec76cb766d252860227afe68b6554018af713707" iface="eth0" netns="/var/run/netns/cni-e5c80ac8-f5c4-e9de-4dad-d80d44e535e6"
Sep 10 00:18:04.043563 containerd[1448]: 2025-09-10 00:18:03.971 [INFO][4314] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="237b2951e6d066a5ef349d91ec76cb766d252860227afe68b6554018af713707"
Sep 10 00:18:04.043563 containerd[1448]: 2025-09-10 00:18:03.971 [INFO][4314] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="237b2951e6d066a5ef349d91ec76cb766d252860227afe68b6554018af713707"
Sep 10 00:18:04.043563 containerd[1448]: 2025-09-10 00:18:04.023 [INFO][4353] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="237b2951e6d066a5ef349d91ec76cb766d252860227afe68b6554018af713707" HandleID="k8s-pod-network.237b2951e6d066a5ef349d91ec76cb766d252860227afe68b6554018af713707" Workload="localhost-k8s-calico--apiserver--6cd6dcff69--qt9l6-eth0"
Sep 10 00:18:04.043563 containerd[1448]: 2025-09-10 00:18:04.023 [INFO][4353] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 10 00:18:04.043563 containerd[1448]: 2025-09-10 00:18:04.023 [INFO][4353] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 10 00:18:04.043563 containerd[1448]: 2025-09-10 00:18:04.033 [WARNING][4353] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="237b2951e6d066a5ef349d91ec76cb766d252860227afe68b6554018af713707" HandleID="k8s-pod-network.237b2951e6d066a5ef349d91ec76cb766d252860227afe68b6554018af713707" Workload="localhost-k8s-calico--apiserver--6cd6dcff69--qt9l6-eth0"
Sep 10 00:18:04.043563 containerd[1448]: 2025-09-10 00:18:04.033 [INFO][4353] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="237b2951e6d066a5ef349d91ec76cb766d252860227afe68b6554018af713707" HandleID="k8s-pod-network.237b2951e6d066a5ef349d91ec76cb766d252860227afe68b6554018af713707" Workload="localhost-k8s-calico--apiserver--6cd6dcff69--qt9l6-eth0"
Sep 10 00:18:04.043563 containerd[1448]: 2025-09-10 00:18:04.034 [INFO][4353] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 10 00:18:04.043563 containerd[1448]: 2025-09-10 00:18:04.041 [INFO][4314] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="237b2951e6d066a5ef349d91ec76cb766d252860227afe68b6554018af713707"
Sep 10 00:18:04.044495 containerd[1448]: time="2025-09-10T00:18:04.044370424Z" level=info msg="TearDown network for sandbox \"237b2951e6d066a5ef349d91ec76cb766d252860227afe68b6554018af713707\" successfully"
Sep 10 00:18:04.044495 containerd[1448]: time="2025-09-10T00:18:04.044401824Z" level=info msg="StopPodSandbox for \"237b2951e6d066a5ef349d91ec76cb766d252860227afe68b6554018af713707\" returns successfully"
Sep 10 00:18:04.050118 containerd[1448]: 2025-09-10 00:18:03.952 [INFO][4308] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="236b8b75ac710cdcccdabd902fb2ca19e81ab2c20906abf395cf84177e19ad07"
Sep 10 00:18:04.050118 containerd[1448]: 2025-09-10 00:18:03.952 [INFO][4308] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="236b8b75ac710cdcccdabd902fb2ca19e81ab2c20906abf395cf84177e19ad07" iface="eth0" netns="/var/run/netns/cni-689827ef-5936-62b8-e426-24aaefc98f9d"
Sep 10 00:18:04.050118 containerd[1448]: 2025-09-10 00:18:03.952 [INFO][4308] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="236b8b75ac710cdcccdabd902fb2ca19e81ab2c20906abf395cf84177e19ad07" iface="eth0" netns="/var/run/netns/cni-689827ef-5936-62b8-e426-24aaefc98f9d"
Sep 10 00:18:04.050118 containerd[1448]: 2025-09-10 00:18:03.953 [INFO][4308] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="236b8b75ac710cdcccdabd902fb2ca19e81ab2c20906abf395cf84177e19ad07" iface="eth0" netns="/var/run/netns/cni-689827ef-5936-62b8-e426-24aaefc98f9d"
Sep 10 00:18:04.050118 containerd[1448]: 2025-09-10 00:18:03.953 [INFO][4308] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="236b8b75ac710cdcccdabd902fb2ca19e81ab2c20906abf395cf84177e19ad07"
Sep 10 00:18:04.050118 containerd[1448]: 2025-09-10 00:18:03.953 [INFO][4308] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="236b8b75ac710cdcccdabd902fb2ca19e81ab2c20906abf395cf84177e19ad07"
Sep 10 00:18:04.050118 containerd[1448]: 2025-09-10 00:18:04.024 [INFO][4341] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="236b8b75ac710cdcccdabd902fb2ca19e81ab2c20906abf395cf84177e19ad07" HandleID="k8s-pod-network.236b8b75ac710cdcccdabd902fb2ca19e81ab2c20906abf395cf84177e19ad07" Workload="localhost-k8s-calico--apiserver--6cd6dcff69--pvd28-eth0"
Sep 10 00:18:04.050118 containerd[1448]: 2025-09-10 00:18:04.024 [INFO][4341] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 10 00:18:04.050118 containerd[1448]: 2025-09-10 00:18:04.034 [INFO][4341] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 10 00:18:04.050118 containerd[1448]: 2025-09-10 00:18:04.044 [WARNING][4341] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="236b8b75ac710cdcccdabd902fb2ca19e81ab2c20906abf395cf84177e19ad07" HandleID="k8s-pod-network.236b8b75ac710cdcccdabd902fb2ca19e81ab2c20906abf395cf84177e19ad07" Workload="localhost-k8s-calico--apiserver--6cd6dcff69--pvd28-eth0"
Sep 10 00:18:04.050118 containerd[1448]: 2025-09-10 00:18:04.044 [INFO][4341] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="236b8b75ac710cdcccdabd902fb2ca19e81ab2c20906abf395cf84177e19ad07" HandleID="k8s-pod-network.236b8b75ac710cdcccdabd902fb2ca19e81ab2c20906abf395cf84177e19ad07" Workload="localhost-k8s-calico--apiserver--6cd6dcff69--pvd28-eth0"
Sep 10 00:18:04.050118 containerd[1448]: 2025-09-10 00:18:04.046 [INFO][4341] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 10 00:18:04.050118 containerd[1448]: 2025-09-10 00:18:04.048 [INFO][4308] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="236b8b75ac710cdcccdabd902fb2ca19e81ab2c20906abf395cf84177e19ad07"
Sep 10 00:18:04.050514 containerd[1448]: time="2025-09-10T00:18:04.050224585Z" level=info msg="TearDown network for sandbox \"236b8b75ac710cdcccdabd902fb2ca19e81ab2c20906abf395cf84177e19ad07\" successfully"
Sep 10 00:18:04.050514 containerd[1448]: time="2025-09-10T00:18:04.050247505Z" level=info msg="StopPodSandbox for \"236b8b75ac710cdcccdabd902fb2ca19e81ab2c20906abf395cf84177e19ad07\" returns successfully"
Sep 10 00:18:04.050850 containerd[1448]: time="2025-09-10T00:18:04.050824145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cd6dcff69-pvd28,Uid:e6f75a12-f91b-4f7f-9f91-ba290e49ea84,Namespace:calico-apiserver,Attempt:1,}"
Sep 10 00:18:04.052463 containerd[1448]: time="2025-09-10T00:18:04.052438866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cd6dcff69-qt9l6,Uid:9eadb910-646d-43a4-b7b4-6b854d565ea6,Namespace:calico-apiserver,Attempt:1,}"
Sep 10 00:18:04.063344 containerd[1448]: 2025-09-10 00:18:03.949 [INFO][4309] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="718b009935c17339c8db7c4995f78c36640d04e24a57d704d800e7bfb8296b52"
Sep 10 00:18:04.063344 containerd[1448]: 2025-09-10 00:18:03.950 [INFO][4309] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="718b009935c17339c8db7c4995f78c36640d04e24a57d704d800e7bfb8296b52" iface="eth0" netns="/var/run/netns/cni-559d524e-86ab-c36a-cef8-89fe93e6ce42"
Sep 10 00:18:04.063344 containerd[1448]: 2025-09-10 00:18:03.951 [INFO][4309] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="718b009935c17339c8db7c4995f78c36640d04e24a57d704d800e7bfb8296b52" iface="eth0" netns="/var/run/netns/cni-559d524e-86ab-c36a-cef8-89fe93e6ce42"
Sep 10 00:18:04.063344 containerd[1448]: 2025-09-10 00:18:03.954 [INFO][4309] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="718b009935c17339c8db7c4995f78c36640d04e24a57d704d800e7bfb8296b52" iface="eth0" netns="/var/run/netns/cni-559d524e-86ab-c36a-cef8-89fe93e6ce42"
Sep 10 00:18:04.063344 containerd[1448]: 2025-09-10 00:18:03.954 [INFO][4309] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="718b009935c17339c8db7c4995f78c36640d04e24a57d704d800e7bfb8296b52"
Sep 10 00:18:04.063344 containerd[1448]: 2025-09-10 00:18:03.954 [INFO][4309] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="718b009935c17339c8db7c4995f78c36640d04e24a57d704d800e7bfb8296b52"
Sep 10 00:18:04.063344 containerd[1448]: 2025-09-10 00:18:04.029 [INFO][4343] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="718b009935c17339c8db7c4995f78c36640d04e24a57d704d800e7bfb8296b52" HandleID="k8s-pod-network.718b009935c17339c8db7c4995f78c36640d04e24a57d704d800e7bfb8296b52" Workload="localhost-k8s-csi--node--driver--65xbg-eth0"
Sep 10 00:18:04.063344 containerd[1448]: 2025-09-10 00:18:04.029 [INFO][4343] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 10 00:18:04.063344 containerd[1448]: 2025-09-10 00:18:04.046 [INFO][4343] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 10 00:18:04.063344 containerd[1448]: 2025-09-10 00:18:04.057 [WARNING][4343] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="718b009935c17339c8db7c4995f78c36640d04e24a57d704d800e7bfb8296b52" HandleID="k8s-pod-network.718b009935c17339c8db7c4995f78c36640d04e24a57d704d800e7bfb8296b52" Workload="localhost-k8s-csi--node--driver--65xbg-eth0"
Sep 10 00:18:04.063344 containerd[1448]: 2025-09-10 00:18:04.057 [INFO][4343] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="718b009935c17339c8db7c4995f78c36640d04e24a57d704d800e7bfb8296b52" HandleID="k8s-pod-network.718b009935c17339c8db7c4995f78c36640d04e24a57d704d800e7bfb8296b52" Workload="localhost-k8s-csi--node--driver--65xbg-eth0"
Sep 10 00:18:04.063344 containerd[1448]: 2025-09-10 00:18:04.058 [INFO][4343] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 10 00:18:04.063344 containerd[1448]: 2025-09-10 00:18:04.060 [INFO][4309] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="718b009935c17339c8db7c4995f78c36640d04e24a57d704d800e7bfb8296b52"
Sep 10 00:18:04.063738 containerd[1448]: time="2025-09-10T00:18:04.063488908Z" level=info msg="TearDown network for sandbox \"718b009935c17339c8db7c4995f78c36640d04e24a57d704d800e7bfb8296b52\" successfully"
Sep 10 00:18:04.063738 containerd[1448]: time="2025-09-10T00:18:04.063512188Z" level=info msg="StopPodSandbox for \"718b009935c17339c8db7c4995f78c36640d04e24a57d704d800e7bfb8296b52\" returns successfully"
Sep 10 00:18:04.064159 containerd[1448]: time="2025-09-10T00:18:04.064093669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-65xbg,Uid:cc428a56-c099-4362-852b-dab9e5d9f7b7,Namespace:calico-system,Attempt:1,}"
Sep 10 00:18:04.095725 kubelet[2491]: I0910 00:18:04.095654 2491 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-59b6cbfcbb-gq69l" podStartSLOduration=0.959576661 podStartE2EDuration="8.095640396s" podCreationTimestamp="2025-09-10 00:17:56 +0000 UTC" firstStartedPulling="2025-09-10 00:17:56.667350668 +0000 UTC m=+36.875105752" lastFinishedPulling="2025-09-10 00:18:03.803414443 +0000 UTC m=+44.011169487" observedRunningTime="2025-09-10 00:18:04.095060716 +0000 UTC m=+44.302815800" watchObservedRunningTime="2025-09-10 00:18:04.095640396 +0000 UTC m=+44.303395480"
Sep 10 00:18:04.208792 systemd-networkd[1379]: calia2b6e5a128c: Link UP
Sep 10 00:18:04.209969 systemd-networkd[1379]: calia2b6e5a128c: Gained carrier
Sep 10 00:18:04.222042 containerd[1448]: 2025-09-10 00:18:04.120 [INFO][4369] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6cd6dcff69--pvd28-eth0 calico-apiserver-6cd6dcff69- calico-apiserver e6f75a12-f91b-4f7f-9f91-ba290e49ea84 1000 0 2025-09-10 00:17:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6cd6dcff69 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6cd6dcff69-pvd28 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia2b6e5a128c [] [] }} ContainerID="974de05c0e76159fb05af51e50fd66c73ef854b4c86f1fa9d87a81171df4e53b" Namespace="calico-apiserver" Pod="calico-apiserver-6cd6dcff69-pvd28" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cd6dcff69--pvd28-"
Sep 10 00:18:04.222042 containerd[1448]: 2025-09-10 00:18:04.120 [INFO][4369] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="974de05c0e76159fb05af51e50fd66c73ef854b4c86f1fa9d87a81171df4e53b" Namespace="calico-apiserver" Pod="calico-apiserver-6cd6dcff69-pvd28" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cd6dcff69--pvd28-eth0"
Sep 10 00:18:04.222042 containerd[1448]: 2025-09-10 00:18:04.148 [INFO][4413] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="974de05c0e76159fb05af51e50fd66c73ef854b4c86f1fa9d87a81171df4e53b" HandleID="k8s-pod-network.974de05c0e76159fb05af51e50fd66c73ef854b4c86f1fa9d87a81171df4e53b" Workload="localhost-k8s-calico--apiserver--6cd6dcff69--pvd28-eth0"
Sep 10 00:18:04.222042 containerd[1448]: 2025-09-10 00:18:04.149 [INFO][4413] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="974de05c0e76159fb05af51e50fd66c73ef854b4c86f1fa9d87a81171df4e53b" HandleID="k8s-pod-network.974de05c0e76159fb05af51e50fd66c73ef854b4c86f1fa9d87a81171df4e53b" Workload="localhost-k8s-calico--apiserver--6cd6dcff69--pvd28-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137630), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6cd6dcff69-pvd28", "timestamp":"2025-09-10 00:18:04.148879409 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 10 00:18:04.222042 containerd[1448]: 2025-09-10 00:18:04.149 [INFO][4413] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 10 00:18:04.222042 containerd[1448]: 2025-09-10 00:18:04.149 [INFO][4413] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 10 00:18:04.222042 containerd[1448]: 2025-09-10 00:18:04.149 [INFO][4413] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep 10 00:18:04.222042 containerd[1448]: 2025-09-10 00:18:04.167 [INFO][4413] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.974de05c0e76159fb05af51e50fd66c73ef854b4c86f1fa9d87a81171df4e53b" host="localhost"
Sep 10 00:18:04.222042 containerd[1448]: 2025-09-10 00:18:04.175 [INFO][4413] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Sep 10 00:18:04.222042 containerd[1448]: 2025-09-10 00:18:04.183 [INFO][4413] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Sep 10 00:18:04.222042 containerd[1448]: 2025-09-10 00:18:04.185 [INFO][4413] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep 10 00:18:04.222042 containerd[1448]: 2025-09-10 00:18:04.187 [INFO][4413] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep 10 00:18:04.222042 containerd[1448]: 2025-09-10 00:18:04.187 [INFO][4413] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.974de05c0e76159fb05af51e50fd66c73ef854b4c86f1fa9d87a81171df4e53b" host="localhost"
Sep 10 00:18:04.222042 containerd[1448]: 2025-09-10 00:18:04.189 [INFO][4413] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.974de05c0e76159fb05af51e50fd66c73ef854b4c86f1fa9d87a81171df4e53b
Sep 10 00:18:04.222042 containerd[1448]: 2025-09-10 00:18:04.193 [INFO][4413] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.974de05c0e76159fb05af51e50fd66c73ef854b4c86f1fa9d87a81171df4e53b" host="localhost"
Sep 10 00:18:04.222042 containerd[1448]: 2025-09-10 00:18:04.199 [INFO][4413] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.974de05c0e76159fb05af51e50fd66c73ef854b4c86f1fa9d87a81171df4e53b" host="localhost"
Sep 10 00:18:04.222042 containerd[1448]: 2025-09-10 00:18:04.199 [INFO][4413] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.974de05c0e76159fb05af51e50fd66c73ef854b4c86f1fa9d87a81171df4e53b" host="localhost"
Sep 10 00:18:04.222042 containerd[1448]: 2025-09-10 00:18:04.199 [INFO][4413] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 10 00:18:04.222042 containerd[1448]: 2025-09-10 00:18:04.199 [INFO][4413] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="974de05c0e76159fb05af51e50fd66c73ef854b4c86f1fa9d87a81171df4e53b" HandleID="k8s-pod-network.974de05c0e76159fb05af51e50fd66c73ef854b4c86f1fa9d87a81171df4e53b" Workload="localhost-k8s-calico--apiserver--6cd6dcff69--pvd28-eth0"
Sep 10 00:18:04.222865 containerd[1448]: 2025-09-10 00:18:04.202 [INFO][4369] cni-plugin/k8s.go 418: Populated endpoint ContainerID="974de05c0e76159fb05af51e50fd66c73ef854b4c86f1fa9d87a81171df4e53b" Namespace="calico-apiserver" Pod="calico-apiserver-6cd6dcff69-pvd28" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cd6dcff69--pvd28-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6cd6dcff69--pvd28-eth0", GenerateName:"calico-apiserver-6cd6dcff69-", Namespace:"calico-apiserver", SelfLink:"", UID:"e6f75a12-f91b-4f7f-9f91-ba290e49ea84", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 17, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cd6dcff69", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6cd6dcff69-pvd28", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia2b6e5a128c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 10 00:18:04.222865 containerd[1448]: 2025-09-10 00:18:04.202 [INFO][4369] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="974de05c0e76159fb05af51e50fd66c73ef854b4c86f1fa9d87a81171df4e53b" Namespace="calico-apiserver" Pod="calico-apiserver-6cd6dcff69-pvd28" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cd6dcff69--pvd28-eth0"
Sep 10 00:18:04.222865 containerd[1448]: 2025-09-10 00:18:04.202 [INFO][4369] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia2b6e5a128c ContainerID="974de05c0e76159fb05af51e50fd66c73ef854b4c86f1fa9d87a81171df4e53b" Namespace="calico-apiserver" Pod="calico-apiserver-6cd6dcff69-pvd28" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cd6dcff69--pvd28-eth0"
Sep 10 00:18:04.222865 containerd[1448]: 2025-09-10 00:18:04.210 [INFO][4369] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="974de05c0e76159fb05af51e50fd66c73ef854b4c86f1fa9d87a81171df4e53b" Namespace="calico-apiserver" Pod="calico-apiserver-6cd6dcff69-pvd28" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cd6dcff69--pvd28-eth0"
Sep 10 00:18:04.222865 containerd[1448]: 2025-09-10 00:18:04.210 [INFO][4369] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="974de05c0e76159fb05af51e50fd66c73ef854b4c86f1fa9d87a81171df4e53b" Namespace="calico-apiserver" Pod="calico-apiserver-6cd6dcff69-pvd28" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cd6dcff69--pvd28-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6cd6dcff69--pvd28-eth0", GenerateName:"calico-apiserver-6cd6dcff69-", Namespace:"calico-apiserver", SelfLink:"", UID:"e6f75a12-f91b-4f7f-9f91-ba290e49ea84", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 17, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cd6dcff69", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"974de05c0e76159fb05af51e50fd66c73ef854b4c86f1fa9d87a81171df4e53b", Pod:"calico-apiserver-6cd6dcff69-pvd28", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia2b6e5a128c", MAC:"22:46:25:96:7b:00", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 10 00:18:04.222865 containerd[1448]: 2025-09-10 00:18:04.219 [INFO][4369] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="974de05c0e76159fb05af51e50fd66c73ef854b4c86f1fa9d87a81171df4e53b" Namespace="calico-apiserver" Pod="calico-apiserver-6cd6dcff69-pvd28" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cd6dcff69--pvd28-eth0"
Sep 10 00:18:04.247425 containerd[1448]: time="2025-09-10T00:18:04.247339832Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 10 00:18:04.247425 containerd[1448]: time="2025-09-10T00:18:04.247400152Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 10 00:18:04.247425 containerd[1448]: time="2025-09-10T00:18:04.247418192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:18:04.247678 containerd[1448]: time="2025-09-10T00:18:04.247507912Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:18:04.265911 systemd[1]: Started cri-containerd-974de05c0e76159fb05af51e50fd66c73ef854b4c86f1fa9d87a81171df4e53b.scope - libcontainer container 974de05c0e76159fb05af51e50fd66c73ef854b4c86f1fa9d87a81171df4e53b.
Sep 10 00:18:04.277386 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 10 00:18:04.299359 containerd[1448]: time="2025-09-10T00:18:04.298427804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cd6dcff69-pvd28,Uid:e6f75a12-f91b-4f7f-9f91-ba290e49ea84,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"974de05c0e76159fb05af51e50fd66c73ef854b4c86f1fa9d87a81171df4e53b\""
Sep 10 00:18:04.300001 containerd[1448]: time="2025-09-10T00:18:04.299887045Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\""
Sep 10 00:18:04.312119 systemd-networkd[1379]: cali7de37e33db7: Link UP
Sep 10 00:18:04.316051 systemd-networkd[1379]: cali7de37e33db7: Gained carrier
Sep 10 00:18:04.329979 containerd[1448]: 2025-09-10 00:18:04.119 [INFO][4380] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6cd6dcff69--qt9l6-eth0 calico-apiserver-6cd6dcff69- calico-apiserver 9eadb910-646d-43a4-b7b4-6b854d565ea6 1001 0 2025-09-10 00:17:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6cd6dcff69 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6cd6dcff69-qt9l6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7de37e33db7 [] [] }} ContainerID="12d8978fa7b0c324fbfc9e6c60ed404b8d09e82865e325fb376b87b454bebc9c" Namespace="calico-apiserver" Pod="calico-apiserver-6cd6dcff69-qt9l6" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cd6dcff69--qt9l6-"
Sep 10 00:18:04.329979 containerd[1448]: 2025-09-10 00:18:04.120 [INFO][4380] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="12d8978fa7b0c324fbfc9e6c60ed404b8d09e82865e325fb376b87b454bebc9c" Namespace="calico-apiserver" Pod="calico-apiserver-6cd6dcff69-qt9l6" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cd6dcff69--qt9l6-eth0"
Sep 10 00:18:04.329979 containerd[1448]: 2025-09-10 00:18:04.156 [INFO][4414] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="12d8978fa7b0c324fbfc9e6c60ed404b8d09e82865e325fb376b87b454bebc9c" HandleID="k8s-pod-network.12d8978fa7b0c324fbfc9e6c60ed404b8d09e82865e325fb376b87b454bebc9c" Workload="localhost-k8s-calico--apiserver--6cd6dcff69--qt9l6-eth0"
Sep 10 00:18:04.329979 containerd[1448]: 2025-09-10 00:18:04.156 [INFO][4414] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="12d8978fa7b0c324fbfc9e6c60ed404b8d09e82865e325fb376b87b454bebc9c" HandleID="k8s-pod-network.12d8978fa7b0c324fbfc9e6c60ed404b8d09e82865e325fb376b87b454bebc9c" Workload="localhost-k8s-calico--apiserver--6cd6dcff69--qt9l6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c36c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6cd6dcff69-qt9l6", "timestamp":"2025-09-10 00:18:04.15618577 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 10 00:18:04.329979 containerd[1448]: 2025-09-10 00:18:04.156 [INFO][4414] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 10 00:18:04.329979 containerd[1448]: 2025-09-10 00:18:04.199 [INFO][4414] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 10 00:18:04.329979 containerd[1448]: 2025-09-10 00:18:04.199 [INFO][4414] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep 10 00:18:04.329979 containerd[1448]: 2025-09-10 00:18:04.268 [INFO][4414] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.12d8978fa7b0c324fbfc9e6c60ed404b8d09e82865e325fb376b87b454bebc9c" host="localhost"
Sep 10 00:18:04.329979 containerd[1448]: 2025-09-10 00:18:04.274 [INFO][4414] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Sep 10 00:18:04.329979 containerd[1448]: 2025-09-10 00:18:04.282 [INFO][4414] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Sep 10 00:18:04.329979 containerd[1448]: 2025-09-10 00:18:04.284 [INFO][4414] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep 10 00:18:04.329979 containerd[1448]: 2025-09-10 00:18:04.286 [INFO][4414] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep 10 00:18:04.329979 containerd[1448]: 2025-09-10 00:18:04.286 [INFO][4414] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.12d8978fa7b0c324fbfc9e6c60ed404b8d09e82865e325fb376b87b454bebc9c" host="localhost"
Sep 10 00:18:04.329979 containerd[1448]: 2025-09-10 00:18:04.288 [INFO][4414] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.12d8978fa7b0c324fbfc9e6c60ed404b8d09e82865e325fb376b87b454bebc9c
Sep 10 00:18:04.329979 containerd[1448]: 2025-09-10 00:18:04.295 [INFO][4414] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.12d8978fa7b0c324fbfc9e6c60ed404b8d09e82865e325fb376b87b454bebc9c" host="localhost"
Sep 10 00:18:04.329979 containerd[1448]: 2025-09-10 00:18:04.302 [INFO][4414] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.12d8978fa7b0c324fbfc9e6c60ed404b8d09e82865e325fb376b87b454bebc9c" host="localhost"
Sep 10 00:18:04.329979 containerd[1448]: 2025-09-10 00:18:04.302 [INFO][4414] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.12d8978fa7b0c324fbfc9e6c60ed404b8d09e82865e325fb376b87b454bebc9c" host="localhost"
Sep 10 00:18:04.329979 containerd[1448]: 2025-09-10 00:18:04.302 [INFO][4414] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 10 00:18:04.329979 containerd[1448]: 2025-09-10 00:18:04.302 [INFO][4414] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="12d8978fa7b0c324fbfc9e6c60ed404b8d09e82865e325fb376b87b454bebc9c" HandleID="k8s-pod-network.12d8978fa7b0c324fbfc9e6c60ed404b8d09e82865e325fb376b87b454bebc9c" Workload="localhost-k8s-calico--apiserver--6cd6dcff69--qt9l6-eth0"
Sep 10 00:18:04.330691 containerd[1448]: 2025-09-10 00:18:04.306 [INFO][4380] cni-plugin/k8s.go 418: Populated endpoint ContainerID="12d8978fa7b0c324fbfc9e6c60ed404b8d09e82865e325fb376b87b454bebc9c" Namespace="calico-apiserver" Pod="calico-apiserver-6cd6dcff69-qt9l6" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cd6dcff69--qt9l6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6cd6dcff69--qt9l6-eth0", GenerateName:"calico-apiserver-6cd6dcff69-", Namespace:"calico-apiserver", SelfLink:"", UID:"9eadb910-646d-43a4-b7b4-6b854d565ea6", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 17, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cd6dcff69", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6cd6dcff69-qt9l6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7de37e33db7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 10 00:18:04.330691 containerd[1448]: 2025-09-10 00:18:04.307 [INFO][4380] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="12d8978fa7b0c324fbfc9e6c60ed404b8d09e82865e325fb376b87b454bebc9c" Namespace="calico-apiserver" Pod="calico-apiserver-6cd6dcff69-qt9l6" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cd6dcff69--qt9l6-eth0"
Sep 10 00:18:04.330691 containerd[1448]: 2025-09-10 00:18:04.307 [INFO][4380] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7de37e33db7 ContainerID="12d8978fa7b0c324fbfc9e6c60ed404b8d09e82865e325fb376b87b454bebc9c" Namespace="calico-apiserver" Pod="calico-apiserver-6cd6dcff69-qt9l6" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cd6dcff69--qt9l6-eth0"
Sep 10 00:18:04.330691 containerd[1448]: 2025-09-10 00:18:04.317 [INFO][4380] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="12d8978fa7b0c324fbfc9e6c60ed404b8d09e82865e325fb376b87b454bebc9c" Namespace="calico-apiserver" Pod="calico-apiserver-6cd6dcff69-qt9l6" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cd6dcff69--qt9l6-eth0"
Sep 10 00:18:04.330691 containerd[1448]: 2025-09-10 00:18:04.317 [INFO][4380] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="12d8978fa7b0c324fbfc9e6c60ed404b8d09e82865e325fb376b87b454bebc9c" Namespace="calico-apiserver" Pod="calico-apiserver-6cd6dcff69-qt9l6" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cd6dcff69--qt9l6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6cd6dcff69--qt9l6-eth0", GenerateName:"calico-apiserver-6cd6dcff69-", Namespace:"calico-apiserver", SelfLink:"", UID:"9eadb910-646d-43a4-b7b4-6b854d565ea6", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 17, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cd6dcff69", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"12d8978fa7b0c324fbfc9e6c60ed404b8d09e82865e325fb376b87b454bebc9c", Pod:"calico-apiserver-6cd6dcff69-qt9l6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7de37e33db7", MAC:"5a:e7:c1:b1:cb:64", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 10 00:18:04.330691 containerd[1448]: 2025-09-10 00:18:04.327 [INFO][4380] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="12d8978fa7b0c324fbfc9e6c60ed404b8d09e82865e325fb376b87b454bebc9c" Namespace="calico-apiserver" Pod="calico-apiserver-6cd6dcff69-qt9l6" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cd6dcff69--qt9l6-eth0"
Sep 10 00:18:04.345698 containerd[1448]: time="2025-09-10T00:18:04.345589655Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 10 00:18:04.345698 containerd[1448]: time="2025-09-10T00:18:04.345663015Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 10 00:18:04.345698 containerd[1448]: time="2025-09-10T00:18:04.345692055Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:18:04.345964 containerd[1448]: time="2025-09-10T00:18:04.345798815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:18:04.371977 systemd[1]: Started cri-containerd-12d8978fa7b0c324fbfc9e6c60ed404b8d09e82865e325fb376b87b454bebc9c.scope - libcontainer container 12d8978fa7b0c324fbfc9e6c60ed404b8d09e82865e325fb376b87b454bebc9c.
Sep 10 00:18:04.386373 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 10 00:18:04.406506 systemd-networkd[1379]: cali6867527523b: Link UP
Sep 10 00:18:04.406713 systemd-networkd[1379]: cali6867527523b: Gained carrier
Sep 10 00:18:04.424203 containerd[1448]: 2025-09-10 00:18:04.142 [INFO][4389] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--65xbg-eth0 csi-node-driver- calico-system cc428a56-c099-4362-852b-dab9e5d9f7b7 999 0 2025-09-10 00:17:43 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-65xbg eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali6867527523b [] [] }} ContainerID="b9715d94334df2c0b0d778c267c41e3a8fe370119f17b59332cd6b79cc07e217" Namespace="calico-system" Pod="csi-node-driver-65xbg" WorkloadEndpoint="localhost-k8s-csi--node--driver--65xbg-"
Sep 10 00:18:04.424203 containerd[1448]: 2025-09-10 00:18:04.143 [INFO][4389] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b9715d94334df2c0b0d778c267c41e3a8fe370119f17b59332cd6b79cc07e217" Namespace="calico-system" Pod="csi-node-driver-65xbg" WorkloadEndpoint="localhost-k8s-csi--node--driver--65xbg-eth0"
Sep 10 00:18:04.424203 containerd[1448]: 2025-09-10 00:18:04.175 [INFO][4427] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b9715d94334df2c0b0d778c267c41e3a8fe370119f17b59332cd6b79cc07e217" HandleID="k8s-pod-network.b9715d94334df2c0b0d778c267c41e3a8fe370119f17b59332cd6b79cc07e217" Workload="localhost-k8s-csi--node--driver--65xbg-eth0"
Sep 10 00:18:04.424203 containerd[1448]: 2025-09-10 00:18:04.175 [INFO][4427] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b9715d94334df2c0b0d778c267c41e3a8fe370119f17b59332cd6b79cc07e217" HandleID="k8s-pod-network.b9715d94334df2c0b0d778c267c41e3a8fe370119f17b59332cd6b79cc07e217" Workload="localhost-k8s-csi--node--driver--65xbg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3940), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-65xbg", "timestamp":"2025-09-10 00:18:04.175339495 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 10 00:18:04.424203 containerd[1448]: 2025-09-10 00:18:04.175 [INFO][4427] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 10 00:18:04.424203 containerd[1448]: 2025-09-10 00:18:04.302 [INFO][4427] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 10 00:18:04.424203 containerd[1448]: 2025-09-10 00:18:04.302 [INFO][4427] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep 10 00:18:04.424203 containerd[1448]: 2025-09-10 00:18:04.369 [INFO][4427] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b9715d94334df2c0b0d778c267c41e3a8fe370119f17b59332cd6b79cc07e217" host="localhost"
Sep 10 00:18:04.424203 containerd[1448]: 2025-09-10 00:18:04.374 [INFO][4427] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Sep 10 00:18:04.424203 containerd[1448]: 2025-09-10 00:18:04.383 [INFO][4427] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Sep 10 00:18:04.424203 containerd[1448]: 2025-09-10 00:18:04.385 [INFO][4427] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep 10 00:18:04.424203 containerd[1448]: 2025-09-10 00:18:04.387 [INFO][4427] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep 10 00:18:04.424203 containerd[1448]: 2025-09-10 00:18:04.387 [INFO][4427] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b9715d94334df2c0b0d778c267c41e3a8fe370119f17b59332cd6b79cc07e217" host="localhost"
Sep 10 00:18:04.424203 containerd[1448]: 2025-09-10 00:18:04.389 [INFO][4427] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b9715d94334df2c0b0d778c267c41e3a8fe370119f17b59332cd6b79cc07e217
Sep 10 00:18:04.424203 containerd[1448]: 2025-09-10 00:18:04.394 [INFO][4427] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b9715d94334df2c0b0d778c267c41e3a8fe370119f17b59332cd6b79cc07e217" host="localhost"
Sep 10 00:18:04.424203 containerd[1448]: 2025-09-10 00:18:04.400 [INFO][4427] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.b9715d94334df2c0b0d778c267c41e3a8fe370119f17b59332cd6b79cc07e217" host="localhost"
Sep 10 00:18:04.424203 containerd[1448]: 2025-09-10 00:18:04.400 [INFO][4427] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.b9715d94334df2c0b0d778c267c41e3a8fe370119f17b59332cd6b79cc07e217" host="localhost"
Sep 10 00:18:04.424203 containerd[1448]: 2025-09-10 00:18:04.400 [INFO][4427] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 10 00:18:04.424203 containerd[1448]: 2025-09-10 00:18:04.400 [INFO][4427] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="b9715d94334df2c0b0d778c267c41e3a8fe370119f17b59332cd6b79cc07e217" HandleID="k8s-pod-network.b9715d94334df2c0b0d778c267c41e3a8fe370119f17b59332cd6b79cc07e217" Workload="localhost-k8s-csi--node--driver--65xbg-eth0" Sep 10 00:18:04.425076 containerd[1448]: 2025-09-10 00:18:04.404 [INFO][4389] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b9715d94334df2c0b0d778c267c41e3a8fe370119f17b59332cd6b79cc07e217" Namespace="calico-system" Pod="csi-node-driver-65xbg" WorkloadEndpoint="localhost-k8s-csi--node--driver--65xbg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--65xbg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cc428a56-c099-4362-852b-dab9e5d9f7b7", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 17, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-65xbg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6867527523b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:18:04.425076 containerd[1448]: 2025-09-10 00:18:04.404 [INFO][4389] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="b9715d94334df2c0b0d778c267c41e3a8fe370119f17b59332cd6b79cc07e217" Namespace="calico-system" Pod="csi-node-driver-65xbg" WorkloadEndpoint="localhost-k8s-csi--node--driver--65xbg-eth0" Sep 10 00:18:04.425076 containerd[1448]: 2025-09-10 00:18:04.404 [INFO][4389] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6867527523b ContainerID="b9715d94334df2c0b0d778c267c41e3a8fe370119f17b59332cd6b79cc07e217" Namespace="calico-system" Pod="csi-node-driver-65xbg" WorkloadEndpoint="localhost-k8s-csi--node--driver--65xbg-eth0" Sep 10 00:18:04.425076 containerd[1448]: 2025-09-10 00:18:04.406 [INFO][4389] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b9715d94334df2c0b0d778c267c41e3a8fe370119f17b59332cd6b79cc07e217" Namespace="calico-system" Pod="csi-node-driver-65xbg" WorkloadEndpoint="localhost-k8s-csi--node--driver--65xbg-eth0" Sep 10 00:18:04.425076 containerd[1448]: 2025-09-10 00:18:04.407 [INFO][4389] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b9715d94334df2c0b0d778c267c41e3a8fe370119f17b59332cd6b79cc07e217" Namespace="calico-system" Pod="csi-node-driver-65xbg" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--65xbg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--65xbg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cc428a56-c099-4362-852b-dab9e5d9f7b7", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 17, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b9715d94334df2c0b0d778c267c41e3a8fe370119f17b59332cd6b79cc07e217", Pod:"csi-node-driver-65xbg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6867527523b", MAC:"6a:3f:09:42:59:fa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:18:04.425076 containerd[1448]: 2025-09-10 00:18:04.420 [INFO][4389] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b9715d94334df2c0b0d778c267c41e3a8fe370119f17b59332cd6b79cc07e217" Namespace="calico-system" Pod="csi-node-driver-65xbg" WorkloadEndpoint="localhost-k8s-csi--node--driver--65xbg-eth0" Sep 10 00:18:04.429143 containerd[1448]: time="2025-09-10T00:18:04.429096315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cd6dcff69-qt9l6,Uid:9eadb910-646d-43a4-b7b4-6b854d565ea6,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"12d8978fa7b0c324fbfc9e6c60ed404b8d09e82865e325fb376b87b454bebc9c\"" Sep 10 00:18:04.447127 systemd[1]: run-netns-cni\x2d559d524e\x2d86ab\x2dc36a\x2dcef8\x2d89fe93e6ce42.mount: Deactivated successfully. Sep 10 00:18:04.447226 systemd[1]: run-netns-cni\x2d689827ef\x2d5936\x2d62b8\x2de426\x2d24aaefc98f9d.mount: Deactivated successfully. Sep 10 00:18:04.447270 systemd[1]: run-netns-cni\x2de5c80ac8\x2df5c4\x2de9de\x2d4dad\x2dd80d44e535e6.mount: Deactivated successfully. Sep 10 00:18:04.449576 containerd[1448]: time="2025-09-10T00:18:04.449314280Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:18:04.449576 containerd[1448]: time="2025-09-10T00:18:04.449367480Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:18:04.449576 containerd[1448]: time="2025-09-10T00:18:04.449378240Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:18:04.449576 containerd[1448]: time="2025-09-10T00:18:04.449449080Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:18:04.471920 systemd[1]: Started cri-containerd-b9715d94334df2c0b0d778c267c41e3a8fe370119f17b59332cd6b79cc07e217.scope - libcontainer container b9715d94334df2c0b0d778c267c41e3a8fe370119f17b59332cd6b79cc07e217. Sep 10 00:18:04.480992 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 00:18:04.490184 containerd[1448]: time="2025-09-10T00:18:04.490133970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-65xbg,Uid:cc428a56-c099-4362-852b-dab9e5d9f7b7,Namespace:calico-system,Attempt:1,} returns sandbox id \"b9715d94334df2c0b0d778c267c41e3a8fe370119f17b59332cd6b79cc07e217\"" Sep 10 00:18:04.874158 containerd[1448]: time="2025-09-10T00:18:04.874108621Z" level=info msg="StopPodSandbox for \"f545795cd7c9a84cc5868a255fc2edd09a29cadcb3d54017ee515da41c43e964\"" Sep 10 00:18:04.874503 containerd[1448]: time="2025-09-10T00:18:04.874261381Z" level=info msg="StopPodSandbox for \"f5b08eb23cffa606682e4fc37107e987818118f64a539991a6160e1ac69b86e3\"" Sep 10 00:18:04.875112 containerd[1448]: time="2025-09-10T00:18:04.874623501Z" level=info msg="StopPodSandbox for \"df3fdd8eb494be2e0bbfda368575310fac5cd793ff776dffd99966e4ab4c9a5d\"" Sep 10 00:18:04.980957 containerd[1448]: 2025-09-10 00:18:04.933 [INFO][4601] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f5b08eb23cffa606682e4fc37107e987818118f64a539991a6160e1ac69b86e3" Sep 10 00:18:04.980957 containerd[1448]: 2025-09-10 00:18:04.934 [INFO][4601] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f5b08eb23cffa606682e4fc37107e987818118f64a539991a6160e1ac69b86e3" iface="eth0" netns="/var/run/netns/cni-0cb85102-c531-3e56-5f29-65a581f18251" Sep 10 00:18:04.980957 containerd[1448]: 2025-09-10 00:18:04.934 [INFO][4601] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f5b08eb23cffa606682e4fc37107e987818118f64a539991a6160e1ac69b86e3" iface="eth0" netns="/var/run/netns/cni-0cb85102-c531-3e56-5f29-65a581f18251" Sep 10 00:18:04.980957 containerd[1448]: 2025-09-10 00:18:04.934 [INFO][4601] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f5b08eb23cffa606682e4fc37107e987818118f64a539991a6160e1ac69b86e3" iface="eth0" netns="/var/run/netns/cni-0cb85102-c531-3e56-5f29-65a581f18251" Sep 10 00:18:04.980957 containerd[1448]: 2025-09-10 00:18:04.934 [INFO][4601] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f5b08eb23cffa606682e4fc37107e987818118f64a539991a6160e1ac69b86e3" Sep 10 00:18:04.980957 containerd[1448]: 2025-09-10 00:18:04.934 [INFO][4601] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f5b08eb23cffa606682e4fc37107e987818118f64a539991a6160e1ac69b86e3" Sep 10 00:18:04.980957 containerd[1448]: 2025-09-10 00:18:04.964 [INFO][4645] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f5b08eb23cffa606682e4fc37107e987818118f64a539991a6160e1ac69b86e3" HandleID="k8s-pod-network.f5b08eb23cffa606682e4fc37107e987818118f64a539991a6160e1ac69b86e3" Workload="localhost-k8s-calico--kube--controllers--55c7f99b6f--f6wcn-eth0" Sep 10 00:18:04.980957 containerd[1448]: 2025-09-10 00:18:04.964 [INFO][4645] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:18:04.980957 containerd[1448]: 2025-09-10 00:18:04.965 [INFO][4645] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
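Earlier in this ADD, dataplane_linux.go 69 set the host-side veth name to cali6867527523b: the fixed "cali" prefix plus an 11-character suffix, which keeps the whole name at 15 characters, the Linux IFNAMSIZ maximum. A hedged sketch of one plausible derivation follows; the suffix being leading hex of a hash over the workload endpoint ID is an assumption, and Calico's exact hash input may differ, so this will not reproduce cali6867527523b verbatim.

    package main

    import (
        "crypto/sha1"
        "encoding/hex"
        "fmt"
    )

    // vethName builds a "cali" + 11-hex-char interface name; 4 + 11 = 15
    // characters, the most Linux allows (IFNAMSIZ minus the NUL byte).
    // The hash input is an assumption, not Calico's verbatim scheme.
    func vethName(workloadEndpointID string) string {
        sum := sha1.Sum([]byte(workloadEndpointID))
        return "cali" + hex.EncodeToString(sum[:])[:11]
    }

    func main() {
        fmt.Println(vethName("localhost-k8s-csi--node--driver--65xbg-eth0"))
    }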
Sep 10 00:18:04.980957 containerd[1448]: 2025-09-10 00:18:04.974 [WARNING][4645] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f5b08eb23cffa606682e4fc37107e987818118f64a539991a6160e1ac69b86e3" HandleID="k8s-pod-network.f5b08eb23cffa606682e4fc37107e987818118f64a539991a6160e1ac69b86e3" Workload="localhost-k8s-calico--kube--controllers--55c7f99b6f--f6wcn-eth0" Sep 10 00:18:04.980957 containerd[1448]: 2025-09-10 00:18:04.974 [INFO][4645] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f5b08eb23cffa606682e4fc37107e987818118f64a539991a6160e1ac69b86e3" HandleID="k8s-pod-network.f5b08eb23cffa606682e4fc37107e987818118f64a539991a6160e1ac69b86e3" Workload="localhost-k8s-calico--kube--controllers--55c7f99b6f--f6wcn-eth0" Sep 10 00:18:04.980957 containerd[1448]: 2025-09-10 00:18:04.975 [INFO][4645] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:18:04.980957 containerd[1448]: 2025-09-10 00:18:04.978 [INFO][4601] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f5b08eb23cffa606682e4fc37107e987818118f64a539991a6160e1ac69b86e3" Sep 10 00:18:04.982773 containerd[1448]: time="2025-09-10T00:18:04.981582087Z" level=info msg="TearDown network for sandbox \"f5b08eb23cffa606682e4fc37107e987818118f64a539991a6160e1ac69b86e3\" successfully" Sep 10 00:18:04.982773 containerd[1448]: time="2025-09-10T00:18:04.981611447Z" level=info msg="StopPodSandbox for \"f5b08eb23cffa606682e4fc37107e987818118f64a539991a6160e1ac69b86e3\" returns successfully" Sep 10 00:18:04.984456 systemd[1]: run-netns-cni\x2d0cb85102\x2dc531\x2d3e56\x2d5f29\x2d65a581f18251.mount: Deactivated successfully. Sep 10 00:18:04.985052 containerd[1448]: time="2025-09-10T00:18:04.985025168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55c7f99b6f-f6wcn,Uid:22f16cc2-19f3-4586-9c67-213437e8718f,Namespace:calico-system,Attempt:1,}" Sep 10 00:18:04.999239 containerd[1448]: 2025-09-10 00:18:04.939 [INFO][4624] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f545795cd7c9a84cc5868a255fc2edd09a29cadcb3d54017ee515da41c43e964" Sep 10 00:18:04.999239 containerd[1448]: 2025-09-10 00:18:04.939 [INFO][4624] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f545795cd7c9a84cc5868a255fc2edd09a29cadcb3d54017ee515da41c43e964" iface="eth0" netns="/var/run/netns/cni-9b7af024-1a90-cff3-ab34-c9ee90cddd05" Sep 10 00:18:04.999239 containerd[1448]: 2025-09-10 00:18:04.940 [INFO][4624] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f545795cd7c9a84cc5868a255fc2edd09a29cadcb3d54017ee515da41c43e964" iface="eth0" netns="/var/run/netns/cni-9b7af024-1a90-cff3-ab34-c9ee90cddd05" Sep 10 00:18:04.999239 containerd[1448]: 2025-09-10 00:18:04.940 [INFO][4624] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f545795cd7c9a84cc5868a255fc2edd09a29cadcb3d54017ee515da41c43e964" iface="eth0" netns="/var/run/netns/cni-9b7af024-1a90-cff3-ab34-c9ee90cddd05" Sep 10 00:18:04.999239 containerd[1448]: 2025-09-10 00:18:04.940 [INFO][4624] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f545795cd7c9a84cc5868a255fc2edd09a29cadcb3d54017ee515da41c43e964" Sep 10 00:18:04.999239 containerd[1448]: 2025-09-10 00:18:04.940 [INFO][4624] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f545795cd7c9a84cc5868a255fc2edd09a29cadcb3d54017ee515da41c43e964" Sep 10 00:18:04.999239 containerd[1448]: 2025-09-10 00:18:04.963 [INFO][4651] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f545795cd7c9a84cc5868a255fc2edd09a29cadcb3d54017ee515da41c43e964" HandleID="k8s-pod-network.f545795cd7c9a84cc5868a255fc2edd09a29cadcb3d54017ee515da41c43e964" Workload="localhost-k8s-coredns--674b8bbfcf--wv9tb-eth0" Sep 10 00:18:04.999239 containerd[1448]: 2025-09-10 00:18:04.963 [INFO][4651] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:18:04.999239 containerd[1448]: 2025-09-10 00:18:04.975 [INFO][4651] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:18:04.999239 containerd[1448]: 2025-09-10 00:18:04.987 [WARNING][4651] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f545795cd7c9a84cc5868a255fc2edd09a29cadcb3d54017ee515da41c43e964" HandleID="k8s-pod-network.f545795cd7c9a84cc5868a255fc2edd09a29cadcb3d54017ee515da41c43e964" Workload="localhost-k8s-coredns--674b8bbfcf--wv9tb-eth0" Sep 10 00:18:04.999239 containerd[1448]: 2025-09-10 00:18:04.987 [INFO][4651] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f545795cd7c9a84cc5868a255fc2edd09a29cadcb3d54017ee515da41c43e964" HandleID="k8s-pod-network.f545795cd7c9a84cc5868a255fc2edd09a29cadcb3d54017ee515da41c43e964" Workload="localhost-k8s-coredns--674b8bbfcf--wv9tb-eth0" Sep 10 00:18:04.999239 containerd[1448]: 2025-09-10 00:18:04.989 [INFO][4651] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:18:04.999239 containerd[1448]: 2025-09-10 00:18:04.997 [INFO][4624] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f545795cd7c9a84cc5868a255fc2edd09a29cadcb3d54017ee515da41c43e964" Sep 10 00:18:04.999572 containerd[1448]: time="2025-09-10T00:18:04.999406891Z" level=info msg="TearDown network for sandbox \"f545795cd7c9a84cc5868a255fc2edd09a29cadcb3d54017ee515da41c43e964\" successfully" Sep 10 00:18:04.999572 containerd[1448]: time="2025-09-10T00:18:04.999442931Z" level=info msg="StopPodSandbox for \"f545795cd7c9a84cc5868a255fc2edd09a29cadcb3d54017ee515da41c43e964\" returns successfully" Sep 10 00:18:05.000449 kubelet[2491]: E0910 00:18:05.000373 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:18:05.001611 containerd[1448]: time="2025-09-10T00:18:05.001262651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wv9tb,Uid:c5fb1d1a-aa75-4564-aca9-9712fec491bb,Namespace:kube-system,Attempt:1,}" Sep 10 00:18:05.001505 systemd[1]: run-netns-cni\x2d9b7af024\x2d1a90\x2dcff3\x2dab34\x2dc9ee90cddd05.mount: Deactivated successfully. 
Sep 10 00:18:05.010791 containerd[1448]: 2025-09-10 00:18:04.944 [INFO][4628] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="df3fdd8eb494be2e0bbfda368575310fac5cd793ff776dffd99966e4ab4c9a5d" Sep 10 00:18:05.010791 containerd[1448]: 2025-09-10 00:18:04.944 [INFO][4628] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="df3fdd8eb494be2e0bbfda368575310fac5cd793ff776dffd99966e4ab4c9a5d" iface="eth0" netns="/var/run/netns/cni-ab69d2d1-e16e-7a8d-14ba-51e987546b20" Sep 10 00:18:05.010791 containerd[1448]: 2025-09-10 00:18:04.944 [INFO][4628] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="df3fdd8eb494be2e0bbfda368575310fac5cd793ff776dffd99966e4ab4c9a5d" iface="eth0" netns="/var/run/netns/cni-ab69d2d1-e16e-7a8d-14ba-51e987546b20" Sep 10 00:18:05.010791 containerd[1448]: 2025-09-10 00:18:04.945 [INFO][4628] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="df3fdd8eb494be2e0bbfda368575310fac5cd793ff776dffd99966e4ab4c9a5d" iface="eth0" netns="/var/run/netns/cni-ab69d2d1-e16e-7a8d-14ba-51e987546b20" Sep 10 00:18:05.010791 containerd[1448]: 2025-09-10 00:18:04.945 [INFO][4628] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="df3fdd8eb494be2e0bbfda368575310fac5cd793ff776dffd99966e4ab4c9a5d" Sep 10 00:18:05.010791 containerd[1448]: 2025-09-10 00:18:04.945 [INFO][4628] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="df3fdd8eb494be2e0bbfda368575310fac5cd793ff776dffd99966e4ab4c9a5d" Sep 10 00:18:05.010791 containerd[1448]: 2025-09-10 00:18:04.971 [INFO][4658] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="df3fdd8eb494be2e0bbfda368575310fac5cd793ff776dffd99966e4ab4c9a5d" HandleID="k8s-pod-network.df3fdd8eb494be2e0bbfda368575310fac5cd793ff776dffd99966e4ab4c9a5d" Workload="localhost-k8s-coredns--674b8bbfcf--qlcl5-eth0" Sep 10 00:18:05.010791 containerd[1448]: 2025-09-10 00:18:04.971 [INFO][4658] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:18:05.010791 containerd[1448]: 2025-09-10 00:18:04.989 [INFO][4658] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:18:05.010791 containerd[1448]: 2025-09-10 00:18:05.000 [WARNING][4658] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="df3fdd8eb494be2e0bbfda368575310fac5cd793ff776dffd99966e4ab4c9a5d" HandleID="k8s-pod-network.df3fdd8eb494be2e0bbfda368575310fac5cd793ff776dffd99966e4ab4c9a5d" Workload="localhost-k8s-coredns--674b8bbfcf--qlcl5-eth0" Sep 10 00:18:05.010791 containerd[1448]: 2025-09-10 00:18:05.000 [INFO][4658] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="df3fdd8eb494be2e0bbfda368575310fac5cd793ff776dffd99966e4ab4c9a5d" HandleID="k8s-pod-network.df3fdd8eb494be2e0bbfda368575310fac5cd793ff776dffd99966e4ab4c9a5d" Workload="localhost-k8s-coredns--674b8bbfcf--qlcl5-eth0" Sep 10 00:18:05.010791 containerd[1448]: 2025-09-10 00:18:05.002 [INFO][4658] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:18:05.010791 containerd[1448]: 2025-09-10 00:18:05.008 [INFO][4628] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="df3fdd8eb494be2e0bbfda368575310fac5cd793ff776dffd99966e4ab4c9a5d" Sep 10 00:18:05.011152 containerd[1448]: time="2025-09-10T00:18:05.010916414Z" level=info msg="TearDown network for sandbox \"df3fdd8eb494be2e0bbfda368575310fac5cd793ff776dffd99966e4ab4c9a5d\" successfully" Sep 10 00:18:05.011152 containerd[1448]: time="2025-09-10T00:18:05.010940774Z" level=info msg="StopPodSandbox for \"df3fdd8eb494be2e0bbfda368575310fac5cd793ff776dffd99966e4ab4c9a5d\" returns successfully" Sep 10 00:18:05.011300 kubelet[2491]: E0910 00:18:05.011271 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:18:05.011744 containerd[1448]: time="2025-09-10T00:18:05.011717894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qlcl5,Uid:2f22f2e0-6fae-4277-8ce4-e71e5b1601af,Namespace:kube-system,Attempt:1,}" Sep 10 00:18:05.123829 systemd-networkd[1379]: cali7f36fa0fda3: Link UP Sep 10 00:18:05.124697 systemd-networkd[1379]: cali7f36fa0fda3: Gained carrier Sep 10 00:18:05.143126 containerd[1448]: 2025-09-10 00:18:05.045 [INFO][4671] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--55c7f99b6f--f6wcn-eth0 calico-kube-controllers-55c7f99b6f- calico-system 22f16cc2-19f3-4586-9c67-213437e8718f 1034 0 2025-09-10 00:17:43 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:55c7f99b6f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-55c7f99b6f-f6wcn eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali7f36fa0fda3 [] [] }} ContainerID="ed35986597b722440989965d3066e4f8e9750a0ad594bfb15ad32fa6fcb9853b" Namespace="calico-system" Pod="calico-kube-controllers-55c7f99b6f-f6wcn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55c7f99b6f--f6wcn-" Sep 10 00:18:05.143126 containerd[1448]: 2025-09-10 00:18:05.045 [INFO][4671] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ed35986597b722440989965d3066e4f8e9750a0ad594bfb15ad32fa6fcb9853b" Namespace="calico-system" Pod="calico-kube-controllers-55c7f99b6f-f6wcn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55c7f99b6f--f6wcn-eth0" Sep 10 00:18:05.143126 containerd[1448]: 2025-09-10 00:18:05.078 [INFO][4712] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ed35986597b722440989965d3066e4f8e9750a0ad594bfb15ad32fa6fcb9853b" HandleID="k8s-pod-network.ed35986597b722440989965d3066e4f8e9750a0ad594bfb15ad32fa6fcb9853b" Workload="localhost-k8s-calico--kube--controllers--55c7f99b6f--f6wcn-eth0" Sep 10 00:18:05.143126 containerd[1448]: 2025-09-10 00:18:05.078 [INFO][4712] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ed35986597b722440989965d3066e4f8e9750a0ad594bfb15ad32fa6fcb9853b" HandleID="k8s-pod-network.ed35986597b722440989965d3066e4f8e9750a0ad594bfb15ad32fa6fcb9853b" Workload="localhost-k8s-calico--kube--controllers--55c7f99b6f--f6wcn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000323490), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-55c7f99b6f-f6wcn", "timestamp":"2025-09-10 00:18:05.078686149 +0000 UTC"}, 
Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 00:18:05.143126 containerd[1448]: 2025-09-10 00:18:05.078 [INFO][4712] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:18:05.143126 containerd[1448]: 2025-09-10 00:18:05.078 [INFO][4712] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:18:05.143126 containerd[1448]: 2025-09-10 00:18:05.078 [INFO][4712] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 00:18:05.143126 containerd[1448]: 2025-09-10 00:18:05.089 [INFO][4712] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ed35986597b722440989965d3066e4f8e9750a0ad594bfb15ad32fa6fcb9853b" host="localhost" Sep 10 00:18:05.143126 containerd[1448]: 2025-09-10 00:18:05.095 [INFO][4712] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 00:18:05.143126 containerd[1448]: 2025-09-10 00:18:05.101 [INFO][4712] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 00:18:05.143126 containerd[1448]: 2025-09-10 00:18:05.102 [INFO][4712] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 00:18:05.143126 containerd[1448]: 2025-09-10 00:18:05.105 [INFO][4712] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 00:18:05.143126 containerd[1448]: 2025-09-10 00:18:05.105 [INFO][4712] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ed35986597b722440989965d3066e4f8e9750a0ad594bfb15ad32fa6fcb9853b" host="localhost" Sep 10 00:18:05.143126 containerd[1448]: 2025-09-10 00:18:05.107 [INFO][4712] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ed35986597b722440989965d3066e4f8e9750a0ad594bfb15ad32fa6fcb9853b Sep 10 00:18:05.143126 containerd[1448]: 2025-09-10 00:18:05.110 [INFO][4712] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ed35986597b722440989965d3066e4f8e9750a0ad594bfb15ad32fa6fcb9853b" host="localhost" Sep 10 00:18:05.143126 containerd[1448]: 2025-09-10 00:18:05.115 [INFO][4712] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.ed35986597b722440989965d3066e4f8e9750a0ad594bfb15ad32fa6fcb9853b" host="localhost" Sep 10 00:18:05.143126 containerd[1448]: 2025-09-10 00:18:05.115 [INFO][4712] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.ed35986597b722440989965d3066e4f8e9750a0ad594bfb15ad32fa6fcb9853b" host="localhost" Sep 10 00:18:05.143126 containerd[1448]: 2025-09-10 00:18:05.115 [INFO][4712] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 10 00:18:05.143126 containerd[1448]: 2025-09-10 00:18:05.115 [INFO][4712] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="ed35986597b722440989965d3066e4f8e9750a0ad594bfb15ad32fa6fcb9853b" HandleID="k8s-pod-network.ed35986597b722440989965d3066e4f8e9750a0ad594bfb15ad32fa6fcb9853b" Workload="localhost-k8s-calico--kube--controllers--55c7f99b6f--f6wcn-eth0" Sep 10 00:18:05.143664 containerd[1448]: 2025-09-10 00:18:05.119 [INFO][4671] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ed35986597b722440989965d3066e4f8e9750a0ad594bfb15ad32fa6fcb9853b" Namespace="calico-system" Pod="calico-kube-controllers-55c7f99b6f-f6wcn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55c7f99b6f--f6wcn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--55c7f99b6f--f6wcn-eth0", GenerateName:"calico-kube-controllers-55c7f99b6f-", Namespace:"calico-system", SelfLink:"", UID:"22f16cc2-19f3-4586-9c67-213437e8718f", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 17, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55c7f99b6f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-55c7f99b6f-f6wcn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7f36fa0fda3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:18:05.143664 containerd[1448]: 2025-09-10 00:18:05.119 [INFO][4671] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="ed35986597b722440989965d3066e4f8e9750a0ad594bfb15ad32fa6fcb9853b" Namespace="calico-system" Pod="calico-kube-controllers-55c7f99b6f-f6wcn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55c7f99b6f--f6wcn-eth0" Sep 10 00:18:05.143664 containerd[1448]: 2025-09-10 00:18:05.119 [INFO][4671] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7f36fa0fda3 ContainerID="ed35986597b722440989965d3066e4f8e9750a0ad594bfb15ad32fa6fcb9853b" Namespace="calico-system" Pod="calico-kube-controllers-55c7f99b6f-f6wcn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55c7f99b6f--f6wcn-eth0" Sep 10 00:18:05.143664 containerd[1448]: 2025-09-10 00:18:05.125 [INFO][4671] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ed35986597b722440989965d3066e4f8e9750a0ad594bfb15ad32fa6fcb9853b" Namespace="calico-system" Pod="calico-kube-controllers-55c7f99b6f-f6wcn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55c7f99b6f--f6wcn-eth0" Sep 10 00:18:05.143664 containerd[1448]: 2025-09-10 00:18:05.126 [INFO][4671] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="ed35986597b722440989965d3066e4f8e9750a0ad594bfb15ad32fa6fcb9853b" Namespace="calico-system" Pod="calico-kube-controllers-55c7f99b6f-f6wcn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55c7f99b6f--f6wcn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--55c7f99b6f--f6wcn-eth0", GenerateName:"calico-kube-controllers-55c7f99b6f-", Namespace:"calico-system", SelfLink:"", UID:"22f16cc2-19f3-4586-9c67-213437e8718f", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 17, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55c7f99b6f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ed35986597b722440989965d3066e4f8e9750a0ad594bfb15ad32fa6fcb9853b", Pod:"calico-kube-controllers-55c7f99b6f-f6wcn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7f36fa0fda3", MAC:"56:7b:a2:c0:a7:64", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:18:05.143664 containerd[1448]: 2025-09-10 00:18:05.135 [INFO][4671] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ed35986597b722440989965d3066e4f8e9750a0ad594bfb15ad32fa6fcb9853b" Namespace="calico-system" Pod="calico-kube-controllers-55c7f99b6f-f6wcn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55c7f99b6f--f6wcn-eth0" Sep 10 00:18:05.161603 containerd[1448]: time="2025-09-10T00:18:05.161490687Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:18:05.161603 containerd[1448]: time="2025-09-10T00:18:05.161557567Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:18:05.161603 containerd[1448]: time="2025-09-10T00:18:05.161570847Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:18:05.161868 containerd[1448]: time="2025-09-10T00:18:05.161650567Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:18:05.180919 systemd[1]: Started cri-containerd-ed35986597b722440989965d3066e4f8e9750a0ad594bfb15ad32fa6fcb9853b.scope - libcontainer container ed35986597b722440989965d3066e4f8e9750a0ad594bfb15ad32fa6fcb9853b. 
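k8s.go 446 adds the MAC only at this late stage because the address is read back from the freshly created veth rather than chosen up front. All four cali* MACs that appear in this section (6a:3f:09:42:59:fa, 56:7b:a2:c0:a7:64, f6:3e:ed:69:d8:b9, 3a:cf:0f:37:c6:1e) have the locally-administered bit set and the multicast bit clear, consistent with kernel-assigned random addresses. A quick check:

    package main

    import (
        "fmt"
        "net"
    )

    // localUnicast reports whether the first octet has the locally-
    // administered bit (0x02) set and the multicast bit (0x01) clear.
    func localUnicast(mac string) bool {
        hw, err := net.ParseMAC(mac)
        if err != nil {
            return false
        }
        return hw[0]&0x02 != 0 && hw[0]&0x01 == 0
    }

    func main() {
        macs := []string{"6a:3f:09:42:59:fa", "56:7b:a2:c0:a7:64", "f6:3e:ed:69:d8:b9", "3a:cf:0f:37:c6:1e"}
        for _, m := range macs {
            fmt.Println(m, localUnicast(m)) // all true
        }
    }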
Sep 10 00:18:05.191432 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 00:18:05.218015 containerd[1448]: time="2025-09-10T00:18:05.217940820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55c7f99b6f-f6wcn,Uid:22f16cc2-19f3-4586-9c67-213437e8718f,Namespace:calico-system,Attempt:1,} returns sandbox id \"ed35986597b722440989965d3066e4f8e9750a0ad594bfb15ad32fa6fcb9853b\"" Sep 10 00:18:05.229107 systemd-networkd[1379]: cali1599ac329ac: Link UP Sep 10 00:18:05.229396 systemd-networkd[1379]: cali1599ac329ac: Gained carrier Sep 10 00:18:05.238286 containerd[1448]: 2025-09-10 00:18:05.068 [INFO][4694] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--qlcl5-eth0 coredns-674b8bbfcf- kube-system 2f22f2e0-6fae-4277-8ce4-e71e5b1601af 1036 0 2025-09-10 00:17:26 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-qlcl5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1599ac329ac [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="cc5fa47ddc892a40fc29558c473dd0d8b53a26a0f1b414b37c2f3d97b6dccffc" Namespace="kube-system" Pod="coredns-674b8bbfcf-qlcl5" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qlcl5-" Sep 10 00:18:05.238286 containerd[1448]: 2025-09-10 00:18:05.068 [INFO][4694] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cc5fa47ddc892a40fc29558c473dd0d8b53a26a0f1b414b37c2f3d97b6dccffc" Namespace="kube-system" Pod="coredns-674b8bbfcf-qlcl5" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qlcl5-eth0" Sep 10 00:18:05.238286 containerd[1448]: 2025-09-10 00:18:05.097 [INFO][4719] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cc5fa47ddc892a40fc29558c473dd0d8b53a26a0f1b414b37c2f3d97b6dccffc" HandleID="k8s-pod-network.cc5fa47ddc892a40fc29558c473dd0d8b53a26a0f1b414b37c2f3d97b6dccffc" Workload="localhost-k8s-coredns--674b8bbfcf--qlcl5-eth0" Sep 10 00:18:05.238286 containerd[1448]: 2025-09-10 00:18:05.097 [INFO][4719] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cc5fa47ddc892a40fc29558c473dd0d8b53a26a0f1b414b37c2f3d97b6dccffc" HandleID="k8s-pod-network.cc5fa47ddc892a40fc29558c473dd0d8b53a26a0f1b414b37c2f3d97b6dccffc" Workload="localhost-k8s-coredns--674b8bbfcf--qlcl5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d810), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-qlcl5", "timestamp":"2025-09-10 00:18:05.097218793 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 00:18:05.238286 containerd[1448]: 2025-09-10 00:18:05.097 [INFO][4719] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:18:05.238286 containerd[1448]: 2025-09-10 00:18:05.115 [INFO][4719] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 00:18:05.238286 containerd[1448]: 2025-09-10 00:18:05.115 [INFO][4719] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 00:18:05.238286 containerd[1448]: 2025-09-10 00:18:05.190 [INFO][4719] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cc5fa47ddc892a40fc29558c473dd0d8b53a26a0f1b414b37c2f3d97b6dccffc" host="localhost" Sep 10 00:18:05.238286 containerd[1448]: 2025-09-10 00:18:05.196 [INFO][4719] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 00:18:05.238286 containerd[1448]: 2025-09-10 00:18:05.201 [INFO][4719] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 00:18:05.238286 containerd[1448]: 2025-09-10 00:18:05.203 [INFO][4719] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 00:18:05.238286 containerd[1448]: 2025-09-10 00:18:05.205 [INFO][4719] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 00:18:05.238286 containerd[1448]: 2025-09-10 00:18:05.205 [INFO][4719] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cc5fa47ddc892a40fc29558c473dd0d8b53a26a0f1b414b37c2f3d97b6dccffc" host="localhost" Sep 10 00:18:05.238286 containerd[1448]: 2025-09-10 00:18:05.206 [INFO][4719] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.cc5fa47ddc892a40fc29558c473dd0d8b53a26a0f1b414b37c2f3d97b6dccffc Sep 10 00:18:05.238286 containerd[1448]: 2025-09-10 00:18:05.211 [INFO][4719] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cc5fa47ddc892a40fc29558c473dd0d8b53a26a0f1b414b37c2f3d97b6dccffc" host="localhost" Sep 10 00:18:05.238286 containerd[1448]: 2025-09-10 00:18:05.218 [INFO][4719] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.cc5fa47ddc892a40fc29558c473dd0d8b53a26a0f1b414b37c2f3d97b6dccffc" host="localhost" Sep 10 00:18:05.238286 containerd[1448]: 2025-09-10 00:18:05.218 [INFO][4719] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.cc5fa47ddc892a40fc29558c473dd0d8b53a26a0f1b414b37c2f3d97b6dccffc" host="localhost" Sep 10 00:18:05.238286 containerd[1448]: 2025-09-10 00:18:05.218 [INFO][4719] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
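The coredns endpoints carry named ports (dns, dns-tcp, metrics), and the endpoint dumps just below print them as Go hex literals: Port:0x35 for the two DNS ports and Port:0x23c1 for metrics. Decoded, they are the standard CoreDNS ports:

    package main

    import "fmt"

    func main() {
        // 0x35 and 0x23c1 as printed in the WorkloadEndpointPort structs.
        fmt.Println(0x35, 0x23c1) // 53 9153
    }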
Sep 10 00:18:05.238286 containerd[1448]: 2025-09-10 00:18:05.218 [INFO][4719] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="cc5fa47ddc892a40fc29558c473dd0d8b53a26a0f1b414b37c2f3d97b6dccffc" HandleID="k8s-pod-network.cc5fa47ddc892a40fc29558c473dd0d8b53a26a0f1b414b37c2f3d97b6dccffc" Workload="localhost-k8s-coredns--674b8bbfcf--qlcl5-eth0" Sep 10 00:18:05.238801 containerd[1448]: 2025-09-10 00:18:05.222 [INFO][4694] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cc5fa47ddc892a40fc29558c473dd0d8b53a26a0f1b414b37c2f3d97b6dccffc" Namespace="kube-system" Pod="coredns-674b8bbfcf-qlcl5" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qlcl5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--qlcl5-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"2f22f2e0-6fae-4277-8ce4-e71e5b1601af", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 17, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-qlcl5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1599ac329ac", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:18:05.238801 containerd[1448]: 2025-09-10 00:18:05.222 [INFO][4694] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="cc5fa47ddc892a40fc29558c473dd0d8b53a26a0f1b414b37c2f3d97b6dccffc" Namespace="kube-system" Pod="coredns-674b8bbfcf-qlcl5" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qlcl5-eth0" Sep 10 00:18:05.238801 containerd[1448]: 2025-09-10 00:18:05.222 [INFO][4694] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1599ac329ac ContainerID="cc5fa47ddc892a40fc29558c473dd0d8b53a26a0f1b414b37c2f3d97b6dccffc" Namespace="kube-system" Pod="coredns-674b8bbfcf-qlcl5" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qlcl5-eth0" Sep 10 00:18:05.238801 containerd[1448]: 2025-09-10 00:18:05.226 [INFO][4694] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cc5fa47ddc892a40fc29558c473dd0d8b53a26a0f1b414b37c2f3d97b6dccffc" Namespace="kube-system" Pod="coredns-674b8bbfcf-qlcl5" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qlcl5-eth0" Sep 10 00:18:05.238801 
containerd[1448]: 2025-09-10 00:18:05.226 [INFO][4694] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cc5fa47ddc892a40fc29558c473dd0d8b53a26a0f1b414b37c2f3d97b6dccffc" Namespace="kube-system" Pod="coredns-674b8bbfcf-qlcl5" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qlcl5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--qlcl5-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"2f22f2e0-6fae-4277-8ce4-e71e5b1601af", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 17, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cc5fa47ddc892a40fc29558c473dd0d8b53a26a0f1b414b37c2f3d97b6dccffc", Pod:"coredns-674b8bbfcf-qlcl5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1599ac329ac", MAC:"f6:3e:ed:69:d8:b9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:18:05.238801 containerd[1448]: 2025-09-10 00:18:05.235 [INFO][4694] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cc5fa47ddc892a40fc29558c473dd0d8b53a26a0f1b414b37c2f3d97b6dccffc" Namespace="kube-system" Pod="coredns-674b8bbfcf-qlcl5" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qlcl5-eth0" Sep 10 00:18:05.254986 containerd[1448]: time="2025-09-10T00:18:05.254896388Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:18:05.255459 containerd[1448]: time="2025-09-10T00:18:05.255411148Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:18:05.255495 containerd[1448]: time="2025-09-10T00:18:05.255457548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:18:05.255599 containerd[1448]: time="2025-09-10T00:18:05.255565988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:18:05.272898 systemd[1]: Started cri-containerd-cc5fa47ddc892a40fc29558c473dd0d8b53a26a0f1b414b37c2f3d97b6dccffc.scope - libcontainer container cc5fa47ddc892a40fc29558c473dd0d8b53a26a0f1b414b37c2f3d97b6dccffc. Sep 10 00:18:05.282881 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 00:18:05.308164 containerd[1448]: time="2025-09-10T00:18:05.308056440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qlcl5,Uid:2f22f2e0-6fae-4277-8ce4-e71e5b1601af,Namespace:kube-system,Attempt:1,} returns sandbox id \"cc5fa47ddc892a40fc29558c473dd0d8b53a26a0f1b414b37c2f3d97b6dccffc\"" Sep 10 00:18:05.309243 kubelet[2491]: E0910 00:18:05.309033 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:18:05.317917 containerd[1448]: time="2025-09-10T00:18:05.317874842Z" level=info msg="CreateContainer within sandbox \"cc5fa47ddc892a40fc29558c473dd0d8b53a26a0f1b414b37c2f3d97b6dccffc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 10 00:18:05.330966 containerd[1448]: time="2025-09-10T00:18:05.330911805Z" level=info msg="CreateContainer within sandbox \"cc5fa47ddc892a40fc29558c473dd0d8b53a26a0f1b414b37c2f3d97b6dccffc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8870fa461f8104e265a54dc2f9fb163d72a3ec483dfb52a633d6b765e35c00d2\"" Sep 10 00:18:05.332170 containerd[1448]: time="2025-09-10T00:18:05.332134725Z" level=info msg="StartContainer for \"8870fa461f8104e265a54dc2f9fb163d72a3ec483dfb52a633d6b765e35c00d2\"" Sep 10 00:18:05.335089 systemd-networkd[1379]: cali8244a20a9f7: Link UP Sep 10 00:18:05.339870 systemd-networkd[1379]: cali8244a20a9f7: Gained carrier Sep 10 00:18:05.359839 containerd[1448]: 2025-09-10 00:18:05.065 [INFO][4682] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--wv9tb-eth0 coredns-674b8bbfcf- kube-system c5fb1d1a-aa75-4564-aca9-9712fec491bb 1035 0 2025-09-10 00:17:26 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-wv9tb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8244a20a9f7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="e35c674a53025fdb8af8c42726328d1b34b30a2405e072046beb04403c5b1d18" Namespace="kube-system" Pod="coredns-674b8bbfcf-wv9tb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wv9tb-" Sep 10 00:18:05.359839 containerd[1448]: 2025-09-10 00:18:05.066 [INFO][4682] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e35c674a53025fdb8af8c42726328d1b34b30a2405e072046beb04403c5b1d18" Namespace="kube-system" Pod="coredns-674b8bbfcf-wv9tb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wv9tb-eth0" Sep 10 00:18:05.359839 containerd[1448]: 2025-09-10 00:18:05.100 [INFO][4721] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e35c674a53025fdb8af8c42726328d1b34b30a2405e072046beb04403c5b1d18" HandleID="k8s-pod-network.e35c674a53025fdb8af8c42726328d1b34b30a2405e072046beb04403c5b1d18" Workload="localhost-k8s-coredns--674b8bbfcf--wv9tb-eth0" Sep 10 00:18:05.359839 
containerd[1448]: 2025-09-10 00:18:05.101 [INFO][4721] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e35c674a53025fdb8af8c42726328d1b34b30a2405e072046beb04403c5b1d18" HandleID="k8s-pod-network.e35c674a53025fdb8af8c42726328d1b34b30a2405e072046beb04403c5b1d18" Workload="localhost-k8s-coredns--674b8bbfcf--wv9tb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001af540), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-wv9tb", "timestamp":"2025-09-10 00:18:05.100954554 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 00:18:05.359839 containerd[1448]: 2025-09-10 00:18:05.101 [INFO][4721] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:18:05.359839 containerd[1448]: 2025-09-10 00:18:05.218 [INFO][4721] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:18:05.359839 containerd[1448]: 2025-09-10 00:18:05.219 [INFO][4721] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 00:18:05.359839 containerd[1448]: 2025-09-10 00:18:05.290 [INFO][4721] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e35c674a53025fdb8af8c42726328d1b34b30a2405e072046beb04403c5b1d18" host="localhost" Sep 10 00:18:05.359839 containerd[1448]: 2025-09-10 00:18:05.297 [INFO][4721] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 00:18:05.359839 containerd[1448]: 2025-09-10 00:18:05.301 [INFO][4721] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 00:18:05.359839 containerd[1448]: 2025-09-10 00:18:05.304 [INFO][4721] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 00:18:05.359839 containerd[1448]: 2025-09-10 00:18:05.306 [INFO][4721] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 00:18:05.359839 containerd[1448]: 2025-09-10 00:18:05.306 [INFO][4721] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e35c674a53025fdb8af8c42726328d1b34b30a2405e072046beb04403c5b1d18" host="localhost" Sep 10 00:18:05.359839 containerd[1448]: 2025-09-10 00:18:05.309 [INFO][4721] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e35c674a53025fdb8af8c42726328d1b34b30a2405e072046beb04403c5b1d18 Sep 10 00:18:05.359839 containerd[1448]: 2025-09-10 00:18:05.316 [INFO][4721] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e35c674a53025fdb8af8c42726328d1b34b30a2405e072046beb04403c5b1d18" host="localhost" Sep 10 00:18:05.359839 containerd[1448]: 2025-09-10 00:18:05.324 [INFO][4721] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.e35c674a53025fdb8af8c42726328d1b34b30a2405e072046beb04403c5b1d18" host="localhost" Sep 10 00:18:05.359839 containerd[1448]: 2025-09-10 00:18:05.324 [INFO][4721] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.e35c674a53025fdb8af8c42726328d1b34b30a2405e072046beb04403c5b1d18" host="localhost" Sep 10 00:18:05.359839 containerd[1448]: 2025-09-10 00:18:05.324 [INFO][4721] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
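Reading the three ADD transactions together ([4712], [4719], [4721]) makes the host-wide lock's serialization visible: [4719] asks for the lock at 05.097 but only acquires it at 05.115, the instant [4712] releases it, and [4721] asks at 05.101 but acquires at 05.218, when [4719] lets go. A toy reproduction of that contention follows; unlike the log, the ordering here is left to the Go scheduler, so which pod gets which address varies run to run.

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        var hostWideLock sync.Mutex
        next := 133 // .132 went to csi-node-driver earlier in the log
        var wg sync.WaitGroup
        pods := []string{
            "calico-kube-controllers-55c7f99b6f-f6wcn",
            "coredns-674b8bbfcf-qlcl5",
            "coredns-674b8bbfcf-wv9tb",
        }
        for _, pod := range pods {
            wg.Add(1)
            go func(pod string) {
                defer wg.Done()
                hostWideLock.Lock() // "About to acquire host-wide IPAM lock."
                ip := fmt.Sprintf("192.168.88.%d/26", next)
                next++
                hostWideLock.Unlock() // "Released host-wide IPAM lock."
                fmt.Println(pod, ip)
            }(pod)
        }
        wg.Wait()
    }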
Sep 10 00:18:05.359839 containerd[1448]: 2025-09-10 00:18:05.324 [INFO][4721] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="e35c674a53025fdb8af8c42726328d1b34b30a2405e072046beb04403c5b1d18" HandleID="k8s-pod-network.e35c674a53025fdb8af8c42726328d1b34b30a2405e072046beb04403c5b1d18" Workload="localhost-k8s-coredns--674b8bbfcf--wv9tb-eth0"
Sep 10 00:18:05.360399 containerd[1448]: 2025-09-10 00:18:05.329 [INFO][4682] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e35c674a53025fdb8af8c42726328d1b34b30a2405e072046beb04403c5b1d18" Namespace="kube-system" Pod="coredns-674b8bbfcf-wv9tb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wv9tb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--wv9tb-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c5fb1d1a-aa75-4564-aca9-9712fec491bb", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 17, 26, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-wv9tb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8244a20a9f7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 10 00:18:05.360399 containerd[1448]: 2025-09-10 00:18:05.330 [INFO][4682] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="e35c674a53025fdb8af8c42726328d1b34b30a2405e072046beb04403c5b1d18" Namespace="kube-system" Pod="coredns-674b8bbfcf-wv9tb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wv9tb-eth0"
Sep 10 00:18:05.360399 containerd[1448]: 2025-09-10 00:18:05.330 [INFO][4682] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8244a20a9f7 ContainerID="e35c674a53025fdb8af8c42726328d1b34b30a2405e072046beb04403c5b1d18" Namespace="kube-system" Pod="coredns-674b8bbfcf-wv9tb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wv9tb-eth0"
Sep 10 00:18:05.360399 containerd[1448]: 2025-09-10 00:18:05.339 [INFO][4682] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e35c674a53025fdb8af8c42726328d1b34b30a2405e072046beb04403c5b1d18" Namespace="kube-system" Pod="coredns-674b8bbfcf-wv9tb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wv9tb-eth0"
Sep 10 00:18:05.360399 containerd[1448]: 2025-09-10 00:18:05.341 [INFO][4682] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e35c674a53025fdb8af8c42726328d1b34b30a2405e072046beb04403c5b1d18" Namespace="kube-system" Pod="coredns-674b8bbfcf-wv9tb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wv9tb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--wv9tb-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c5fb1d1a-aa75-4564-aca9-9712fec491bb", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 17, 26, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e35c674a53025fdb8af8c42726328d1b34b30a2405e072046beb04403c5b1d18", Pod:"coredns-674b8bbfcf-wv9tb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8244a20a9f7", MAC:"3a:cf:0f:37:c6:1e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 10 00:18:05.360399 containerd[1448]: 2025-09-10 00:18:05.352 [INFO][4682] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e35c674a53025fdb8af8c42726328d1b34b30a2405e072046beb04403c5b1d18" Namespace="kube-system" Pod="coredns-674b8bbfcf-wv9tb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wv9tb-eth0"
Sep 10 00:18:05.382338 containerd[1448]: time="2025-09-10T00:18:05.382185376Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 10 00:18:05.384196 containerd[1448]: time="2025-09-10T00:18:05.383862177Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 10 00:18:05.384196 containerd[1448]: time="2025-09-10T00:18:05.383897257Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:18:05.384196 containerd[1448]: time="2025-09-10T00:18:05.384082017Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:18:05.383987 systemd[1]: Started cri-containerd-8870fa461f8104e265a54dc2f9fb163d72a3ec483dfb52a633d6b765e35c00d2.scope - libcontainer container 8870fa461f8104e265a54dc2f9fb163d72a3ec483dfb52a633d6b765e35c00d2.
Sep 10 00:18:05.414931 systemd[1]: Started cri-containerd-e35c674a53025fdb8af8c42726328d1b34b30a2405e072046beb04403c5b1d18.scope - libcontainer container e35c674a53025fdb8af8c42726328d1b34b30a2405e072046beb04403c5b1d18.
Sep 10 00:18:05.418206 containerd[1448]: time="2025-09-10T00:18:05.418083264Z" level=info msg="StartContainer for \"8870fa461f8104e265a54dc2f9fb163d72a3ec483dfb52a633d6b765e35c00d2\" returns successfully"
Sep 10 00:18:05.426514 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 10 00:18:05.447916 systemd[1]: run-netns-cni\x2dab69d2d1\x2de16e\x2d7a8d\x2d14ba\x2d51e987546b20.mount: Deactivated successfully.
Sep 10 00:18:05.470989 containerd[1448]: time="2025-09-10T00:18:05.470889276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wv9tb,Uid:c5fb1d1a-aa75-4564-aca9-9712fec491bb,Namespace:kube-system,Attempt:1,} returns sandbox id \"e35c674a53025fdb8af8c42726328d1b34b30a2405e072046beb04403c5b1d18\""
Sep 10 00:18:05.482517 kubelet[2491]: E0910 00:18:05.482484 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:18:05.507863 containerd[1448]: time="2025-09-10T00:18:05.507804404Z" level=info msg="CreateContainer within sandbox \"e35c674a53025fdb8af8c42726328d1b34b30a2405e072046beb04403c5b1d18\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 10 00:18:05.511964 systemd-networkd[1379]: calia2b6e5a128c: Gained IPv6LL
Sep 10 00:18:05.528436 containerd[1448]: time="2025-09-10T00:18:05.528255529Z" level=info msg="CreateContainer within sandbox \"e35c674a53025fdb8af8c42726328d1b34b30a2405e072046beb04403c5b1d18\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a6fce12a0918ced88543f32d975c89adfe7196b66f6fb609fc1f1abc7f5345c8\""
Sep 10 00:18:05.529077 containerd[1448]: time="2025-09-10T00:18:05.529049729Z" level=info msg="StartContainer for \"a6fce12a0918ced88543f32d975c89adfe7196b66f6fb609fc1f1abc7f5345c8\""
Sep 10 00:18:05.567238 systemd[1]: Started cri-containerd-a6fce12a0918ced88543f32d975c89adfe7196b66f6fb609fc1f1abc7f5345c8.scope - libcontainer container a6fce12a0918ced88543f32d975c89adfe7196b66f6fb609fc1f1abc7f5345c8.
Sep 10 00:18:05.568991 systemd[1]: Started sshd@8-10.0.0.124:22-10.0.0.1:34132.service - OpenSSH per-connection server daemon (10.0.0.1:34132).
Sep 10 00:18:05.576331 systemd-networkd[1379]: cali6867527523b: Gained IPv6LL
Sep 10 00:18:05.618041 containerd[1448]: time="2025-09-10T00:18:05.617978469Z" level=info msg="StartContainer for \"a6fce12a0918ced88543f32d975c89adfe7196b66f6fb609fc1f1abc7f5345c8\" returns successfully"
Sep 10 00:18:05.640604 sshd[4950]: Accepted publickey for core from 10.0.0.1 port 34132 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o
Sep 10 00:18:05.642959 sshd[4950]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:18:05.658849 systemd-logind[1423]: New session 9 of user core.
Sep 10 00:18:05.664469 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 10 00:18:05.768393 systemd-networkd[1379]: cali7de37e33db7: Gained IPv6LL
Sep 10 00:18:05.876764 containerd[1448]: time="2025-09-10T00:18:05.876707887Z" level=info msg="StopPodSandbox for \"c926cec19e0b2deb730a9bb6354c1568bbfabdb70a7d55cc462fe937d639cfcc\""
Sep 10 00:18:05.950467 sshd[4950]: pam_unix(sshd:session): session closed for user core
Sep 10 00:18:05.955941 systemd[1]: sshd@8-10.0.0.124:22-10.0.0.1:34132.service: Deactivated successfully.
Sep 10 00:18:05.960398 systemd[1]: session-9.scope: Deactivated successfully.
Sep 10 00:18:05.961936 systemd-logind[1423]: Session 9 logged out. Waiting for processes to exit.
Sep 10 00:18:05.963696 systemd-logind[1423]: Removed session 9.
Sep 10 00:18:05.985823 containerd[1448]: 2025-09-10 00:18:05.939 [INFO][5000] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c926cec19e0b2deb730a9bb6354c1568bbfabdb70a7d55cc462fe937d639cfcc"
Sep 10 00:18:05.985823 containerd[1448]: 2025-09-10 00:18:05.939 [INFO][5000] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c926cec19e0b2deb730a9bb6354c1568bbfabdb70a7d55cc462fe937d639cfcc" iface="eth0" netns="/var/run/netns/cni-4c1b06b0-98e3-b2f7-12c9-80dc90550c5c"
Sep 10 00:18:05.985823 containerd[1448]: 2025-09-10 00:18:05.939 [INFO][5000] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c926cec19e0b2deb730a9bb6354c1568bbfabdb70a7d55cc462fe937d639cfcc" iface="eth0" netns="/var/run/netns/cni-4c1b06b0-98e3-b2f7-12c9-80dc90550c5c"
Sep 10 00:18:05.985823 containerd[1448]: 2025-09-10 00:18:05.940 [INFO][5000] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c926cec19e0b2deb730a9bb6354c1568bbfabdb70a7d55cc462fe937d639cfcc" iface="eth0" netns="/var/run/netns/cni-4c1b06b0-98e3-b2f7-12c9-80dc90550c5c"
Sep 10 00:18:05.985823 containerd[1448]: 2025-09-10 00:18:05.940 [INFO][5000] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c926cec19e0b2deb730a9bb6354c1568bbfabdb70a7d55cc462fe937d639cfcc"
Sep 10 00:18:05.985823 containerd[1448]: 2025-09-10 00:18:05.940 [INFO][5000] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c926cec19e0b2deb730a9bb6354c1568bbfabdb70a7d55cc462fe937d639cfcc"
Sep 10 00:18:05.985823 containerd[1448]: 2025-09-10 00:18:05.966 [INFO][5009] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c926cec19e0b2deb730a9bb6354c1568bbfabdb70a7d55cc462fe937d639cfcc" HandleID="k8s-pod-network.c926cec19e0b2deb730a9bb6354c1568bbfabdb70a7d55cc462fe937d639cfcc" Workload="localhost-k8s-goldmane--54d579b49d--njj6z-eth0"
Sep 10 00:18:05.985823 containerd[1448]: 2025-09-10 00:18:05.966 [INFO][5009] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 10 00:18:05.985823 containerd[1448]: 2025-09-10 00:18:05.967 [INFO][5009] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 10 00:18:05.985823 containerd[1448]: 2025-09-10 00:18:05.977 [WARNING][5009] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c926cec19e0b2deb730a9bb6354c1568bbfabdb70a7d55cc462fe937d639cfcc" HandleID="k8s-pod-network.c926cec19e0b2deb730a9bb6354c1568bbfabdb70a7d55cc462fe937d639cfcc" Workload="localhost-k8s-goldmane--54d579b49d--njj6z-eth0"
Sep 10 00:18:05.985823 containerd[1448]: 2025-09-10 00:18:05.977 [INFO][5009] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c926cec19e0b2deb730a9bb6354c1568bbfabdb70a7d55cc462fe937d639cfcc" HandleID="k8s-pod-network.c926cec19e0b2deb730a9bb6354c1568bbfabdb70a7d55cc462fe937d639cfcc" Workload="localhost-k8s-goldmane--54d579b49d--njj6z-eth0"
Sep 10 00:18:05.985823 containerd[1448]: 2025-09-10 00:18:05.979 [INFO][5009] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 10 00:18:05.985823 containerd[1448]: 2025-09-10 00:18:05.982 [INFO][5000] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c926cec19e0b2deb730a9bb6354c1568bbfabdb70a7d55cc462fe937d639cfcc"
Sep 10 00:18:05.986792 containerd[1448]: time="2025-09-10T00:18:05.985934711Z" level=info msg="TearDown network for sandbox \"c926cec19e0b2deb730a9bb6354c1568bbfabdb70a7d55cc462fe937d639cfcc\" successfully"
Sep 10 00:18:05.986792 containerd[1448]: time="2025-09-10T00:18:05.985960431Z" level=info msg="StopPodSandbox for \"c926cec19e0b2deb730a9bb6354c1568bbfabdb70a7d55cc462fe937d639cfcc\" returns successfully"
Sep 10 00:18:05.986792 containerd[1448]: time="2025-09-10T00:18:05.986676191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-njj6z,Uid:48ece463-fbe3-4a32-8fd2-523723a890ae,Namespace:calico-system,Attempt:1,}"
Sep 10 00:18:06.095772 kubelet[2491]: E0910 00:18:06.095358 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:18:06.103546 kubelet[2491]: E0910 00:18:06.101106 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:18:06.109936 kubelet[2491]: I0910 00:18:06.109024 2491 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-wv9tb" podStartSLOduration=40.108740697 podStartE2EDuration="40.108740697s" podCreationTimestamp="2025-09-10 00:17:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:18:06.107816777 +0000 UTC m=+46.315571861" watchObservedRunningTime="2025-09-10 00:18:06.108740697 +0000 UTC m=+46.316495781"
Sep 10 00:18:06.120002 kubelet[2491]: I0910 00:18:06.119915 2491 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-qlcl5" podStartSLOduration=40.119901739 podStartE2EDuration="40.119901739s" podCreationTimestamp="2025-09-10 00:17:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:18:06.119613219 +0000 UTC m=+46.327368303" watchObservedRunningTime="2025-09-10 00:18:06.119901739 +0000 UTC m=+46.327656823"
Sep 10 00:18:06.148362 systemd-networkd[1379]: cali9949c91eb48: Link UP
Sep 10 00:18:06.150106 systemd-networkd[1379]: cali9949c91eb48: Gained carrier
Sep 10 00:18:06.151885 systemd-networkd[1379]: cali7f36fa0fda3: Gained IPv6LL
Sep 10 00:18:06.166280 containerd[1448]: 2025-09-10 00:18:06.035 [INFO][5020] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--njj6z-eth0 goldmane-54d579b49d- calico-system 48ece463-fbe3-4a32-8fd2-523723a890ae 1059 0 2025-09-10 00:17:43 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-njj6z eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali9949c91eb48 [] [] }} ContainerID="1701ed35cffea5e1aea19de87473fd52178657c9f71650207637ad87f62b4ad8" Namespace="calico-system" Pod="goldmane-54d579b49d-njj6z" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--njj6z-"
Sep 10 00:18:06.166280 containerd[1448]: 2025-09-10 00:18:06.038 [INFO][5020] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1701ed35cffea5e1aea19de87473fd52178657c9f71650207637ad87f62b4ad8" Namespace="calico-system" Pod="goldmane-54d579b49d-njj6z" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--njj6z-eth0"
Sep 10 00:18:06.166280 containerd[1448]: 2025-09-10 00:18:06.082 [INFO][5035] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1701ed35cffea5e1aea19de87473fd52178657c9f71650207637ad87f62b4ad8" HandleID="k8s-pod-network.1701ed35cffea5e1aea19de87473fd52178657c9f71650207637ad87f62b4ad8" Workload="localhost-k8s-goldmane--54d579b49d--njj6z-eth0"
Sep 10 00:18:06.166280 containerd[1448]: 2025-09-10 00:18:06.082 [INFO][5035] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1701ed35cffea5e1aea19de87473fd52178657c9f71650207637ad87f62b4ad8" HandleID="k8s-pod-network.1701ed35cffea5e1aea19de87473fd52178657c9f71650207637ad87f62b4ad8" Workload="localhost-k8s-goldmane--54d579b49d--njj6z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400011a4d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-njj6z", "timestamp":"2025-09-10 00:18:06.082155851 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 10 00:18:06.166280 containerd[1448]: 2025-09-10 00:18:06.082 [INFO][5035] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 10 00:18:06.166280 containerd[1448]: 2025-09-10 00:18:06.082 [INFO][5035] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 10 00:18:06.166280 containerd[1448]: 2025-09-10 00:18:06.082 [INFO][5035] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep 10 00:18:06.166280 containerd[1448]: 2025-09-10 00:18:06.091 [INFO][5035] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1701ed35cffea5e1aea19de87473fd52178657c9f71650207637ad87f62b4ad8" host="localhost"
Sep 10 00:18:06.166280 containerd[1448]: 2025-09-10 00:18:06.104 [INFO][5035] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Sep 10 00:18:06.166280 containerd[1448]: 2025-09-10 00:18:06.112 [INFO][5035] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Sep 10 00:18:06.166280 containerd[1448]: 2025-09-10 00:18:06.115 [INFO][5035] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep 10 00:18:06.166280 containerd[1448]: 2025-09-10 00:18:06.117 [INFO][5035] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep 10 00:18:06.166280 containerd[1448]: 2025-09-10 00:18:06.118 [INFO][5035] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1701ed35cffea5e1aea19de87473fd52178657c9f71650207637ad87f62b4ad8" host="localhost"
Sep 10 00:18:06.166280 containerd[1448]: 2025-09-10 00:18:06.121 [INFO][5035] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1701ed35cffea5e1aea19de87473fd52178657c9f71650207637ad87f62b4ad8
Sep 10 00:18:06.166280 containerd[1448]: 2025-09-10 00:18:06.130 [INFO][5035] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1701ed35cffea5e1aea19de87473fd52178657c9f71650207637ad87f62b4ad8" host="localhost"
Sep 10 00:18:06.166280 containerd[1448]: 2025-09-10 00:18:06.140 [INFO][5035] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.1701ed35cffea5e1aea19de87473fd52178657c9f71650207637ad87f62b4ad8" host="localhost"
Sep 10 00:18:06.166280 containerd[1448]: 2025-09-10 00:18:06.140 [INFO][5035] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.1701ed35cffea5e1aea19de87473fd52178657c9f71650207637ad87f62b4ad8" host="localhost"
Sep 10 00:18:06.166280 containerd[1448]: 2025-09-10 00:18:06.141 [INFO][5035] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 10 00:18:06.166280 containerd[1448]: 2025-09-10 00:18:06.141 [INFO][5035] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="1701ed35cffea5e1aea19de87473fd52178657c9f71650207637ad87f62b4ad8" HandleID="k8s-pod-network.1701ed35cffea5e1aea19de87473fd52178657c9f71650207637ad87f62b4ad8" Workload="localhost-k8s-goldmane--54d579b49d--njj6z-eth0"
Sep 10 00:18:06.166830 containerd[1448]: 2025-09-10 00:18:06.144 [INFO][5020] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1701ed35cffea5e1aea19de87473fd52178657c9f71650207637ad87f62b4ad8" Namespace="calico-system" Pod="goldmane-54d579b49d-njj6z" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--njj6z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--njj6z-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"48ece463-fbe3-4a32-8fd2-523723a890ae", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 17, 43, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-njj6z", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9949c91eb48", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 10 00:18:06.166830 containerd[1448]: 2025-09-10 00:18:06.144 [INFO][5020] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="1701ed35cffea5e1aea19de87473fd52178657c9f71650207637ad87f62b4ad8" Namespace="calico-system" Pod="goldmane-54d579b49d-njj6z" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--njj6z-eth0"
Sep 10 00:18:06.166830 containerd[1448]: 2025-09-10 00:18:06.145 [INFO][5020] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9949c91eb48 ContainerID="1701ed35cffea5e1aea19de87473fd52178657c9f71650207637ad87f62b4ad8" Namespace="calico-system" Pod="goldmane-54d579b49d-njj6z" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--njj6z-eth0"
Sep 10 00:18:06.166830 containerd[1448]: 2025-09-10 00:18:06.150 [INFO][5020] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1701ed35cffea5e1aea19de87473fd52178657c9f71650207637ad87f62b4ad8" Namespace="calico-system" Pod="goldmane-54d579b49d-njj6z" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--njj6z-eth0"
Sep 10 00:18:06.166830 containerd[1448]: 2025-09-10 00:18:06.151 [INFO][5020] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1701ed35cffea5e1aea19de87473fd52178657c9f71650207637ad87f62b4ad8" Namespace="calico-system" Pod="goldmane-54d579b49d-njj6z" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--njj6z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--njj6z-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"48ece463-fbe3-4a32-8fd2-523723a890ae", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 17, 43, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1701ed35cffea5e1aea19de87473fd52178657c9f71650207637ad87f62b4ad8", Pod:"goldmane-54d579b49d-njj6z", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9949c91eb48", MAC:"5a:77:55:ad:25:6b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 10 00:18:06.166830 containerd[1448]: 2025-09-10 00:18:06.163 [INFO][5020] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1701ed35cffea5e1aea19de87473fd52178657c9f71650207637ad87f62b4ad8" Namespace="calico-system" Pod="goldmane-54d579b49d-njj6z" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--njj6z-eth0"
Sep 10 00:18:06.185572 containerd[1448]: time="2025-09-10T00:18:06.185477073Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 10 00:18:06.185572 containerd[1448]: time="2025-09-10T00:18:06.185552313Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 10 00:18:06.187032 containerd[1448]: time="2025-09-10T00:18:06.186974113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:18:06.187149 containerd[1448]: time="2025-09-10T00:18:06.187081833Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:18:06.213976 systemd[1]: Started cri-containerd-1701ed35cffea5e1aea19de87473fd52178657c9f71650207637ad87f62b4ad8.scope - libcontainer container 1701ed35cffea5e1aea19de87473fd52178657c9f71650207637ad87f62b4ad8.
Sep 10 00:18:06.228029 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 10 00:18:06.263442 containerd[1448]: time="2025-09-10T00:18:06.263398809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-njj6z,Uid:48ece463-fbe3-4a32-8fd2-523723a890ae,Namespace:calico-system,Attempt:1,} returns sandbox id \"1701ed35cffea5e1aea19de87473fd52178657c9f71650207637ad87f62b4ad8\""
Sep 10 00:18:06.426996 containerd[1448]: time="2025-09-10T00:18:06.426954563Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:18:06.427785 containerd[1448]: time="2025-09-10T00:18:06.427429323Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807"
Sep 10 00:18:06.429172 containerd[1448]: time="2025-09-10T00:18:06.429138204Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:18:06.438816 containerd[1448]: time="2025-09-10T00:18:06.438732326Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:18:06.440791 containerd[1448]: time="2025-09-10T00:18:06.439299246Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 2.139296321s"
Sep 10 00:18:06.440791 containerd[1448]: time="2025-09-10T00:18:06.439333646Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\""
Sep 10 00:18:06.441218 containerd[1448]: time="2025-09-10T00:18:06.441182206Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\""
Sep 10 00:18:06.443256 systemd[1]: run-netns-cni\x2d4c1b06b0\x2d98e3\x2db2f7\x2d12c9\x2d80dc90550c5c.mount: Deactivated successfully.
Sep 10 00:18:06.469280 containerd[1448]: time="2025-09-10T00:18:06.469185052Z" level=info msg="CreateContainer within sandbox \"974de05c0e76159fb05af51e50fd66c73ef854b4c86f1fa9d87a81171df4e53b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 10 00:18:06.484007 containerd[1448]: time="2025-09-10T00:18:06.483960295Z" level=info msg="CreateContainer within sandbox \"974de05c0e76159fb05af51e50fd66c73ef854b4c86f1fa9d87a81171df4e53b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"49b93f2869644ef04edd0cadfbbc5e668ef14ac0bdedd8500103b48df8fdd708\""
Sep 10 00:18:06.485033 containerd[1448]: time="2025-09-10T00:18:06.485007815Z" level=info msg="StartContainer for \"49b93f2869644ef04edd0cadfbbc5e668ef14ac0bdedd8500103b48df8fdd708\""
Sep 10 00:18:06.516950 systemd[1]: Started cri-containerd-49b93f2869644ef04edd0cadfbbc5e668ef14ac0bdedd8500103b48df8fdd708.scope - libcontainer container 49b93f2869644ef04edd0cadfbbc5e668ef14ac0bdedd8500103b48df8fdd708.
Sep 10 00:18:06.550341 containerd[1448]: time="2025-09-10T00:18:06.550297989Z" level=info msg="StartContainer for \"49b93f2869644ef04edd0cadfbbc5e668ef14ac0bdedd8500103b48df8fdd708\" returns successfully"
Sep 10 00:18:06.600439 systemd-networkd[1379]: cali1599ac329ac: Gained IPv6LL
Sep 10 00:18:06.727917 systemd-networkd[1379]: cali8244a20a9f7: Gained IPv6LL
Sep 10 00:18:06.765443 containerd[1448]: time="2025-09-10T00:18:06.765393394Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:18:06.767847 containerd[1448]: time="2025-09-10T00:18:06.766993434Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77"
Sep 10 00:18:06.773060 containerd[1448]: time="2025-09-10T00:18:06.773026556Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 331.81347ms"
Sep 10 00:18:06.773193 containerd[1448]: time="2025-09-10T00:18:06.773175956Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\""
Sep 10 00:18:06.774843 containerd[1448]: time="2025-09-10T00:18:06.774810036Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\""
Sep 10 00:18:06.778775 containerd[1448]: time="2025-09-10T00:18:06.778723437Z" level=info msg="CreateContainer within sandbox \"12d8978fa7b0c324fbfc9e6c60ed404b8d09e82865e325fb376b87b454bebc9c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 10 00:18:06.799436 containerd[1448]: time="2025-09-10T00:18:06.799382321Z" level=info msg="CreateContainer within sandbox \"12d8978fa7b0c324fbfc9e6c60ed404b8d09e82865e325fb376b87b454bebc9c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6672987ad275a75b42dc8fc80249197627d767ae44367bd5ea1c5aababea2bd5\""
Sep 10 00:18:06.802384 containerd[1448]: time="2025-09-10T00:18:06.802166682Z" level=info msg="StartContainer for \"6672987ad275a75b42dc8fc80249197627d767ae44367bd5ea1c5aababea2bd5\""
Sep 10 00:18:06.833924 systemd[1]: Started cri-containerd-6672987ad275a75b42dc8fc80249197627d767ae44367bd5ea1c5aababea2bd5.scope - libcontainer container 6672987ad275a75b42dc8fc80249197627d767ae44367bd5ea1c5aababea2bd5.
Sep 10 00:18:06.881099 containerd[1448]: time="2025-09-10T00:18:06.880229618Z" level=info msg="StartContainer for \"6672987ad275a75b42dc8fc80249197627d767ae44367bd5ea1c5aababea2bd5\" returns successfully"
Sep 10 00:18:07.133995 kubelet[2491]: E0910 00:18:07.133619 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:18:07.140831 kubelet[2491]: I0910 00:18:07.140777 2491 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6cd6dcff69-qt9l6" podStartSLOduration=27.797110831 podStartE2EDuration="30.140762231s" podCreationTimestamp="2025-09-10 00:17:37 +0000 UTC" firstStartedPulling="2025-09-10 00:18:04.430372636 +0000 UTC m=+44.638127680" lastFinishedPulling="2025-09-10 00:18:06.774023996 +0000 UTC m=+46.981779080" observedRunningTime="2025-09-10 00:18:07.13962719 +0000 UTC m=+47.347382274" watchObservedRunningTime="2025-09-10 00:18:07.140762231 +0000 UTC m=+47.348517315"
Sep 10 00:18:07.141714 kubelet[2491]: E0910 00:18:07.141431 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:18:07.688011 systemd-networkd[1379]: cali9949c91eb48: Gained IPv6LL
Sep 10 00:18:08.081425 containerd[1448]: time="2025-09-10T00:18:08.081175614Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:18:08.086587 containerd[1448]: time="2025-09-10T00:18:08.086554535Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489"
Sep 10 00:18:08.087370 containerd[1448]: time="2025-09-10T00:18:08.087313735Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:18:08.092514 containerd[1448]: time="2025-09-10T00:18:08.092390416Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:18:08.093629 containerd[1448]: time="2025-09-10T00:18:08.093590136Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 1.31863982s"
Sep 10 00:18:08.093629 containerd[1448]: time="2025-09-10T00:18:08.093630056Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\""
Sep 10 00:18:08.095689 containerd[1448]: time="2025-09-10T00:18:08.095649777Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\""
Sep 10 00:18:08.098630 containerd[1448]: time="2025-09-10T00:18:08.098596737Z" level=info msg="CreateContainer within sandbox \"b9715d94334df2c0b0d778c267c41e3a8fe370119f17b59332cd6b79cc07e217\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Sep 10 00:18:08.116102 containerd[1448]: time="2025-09-10T00:18:08.115239780Z" level=info msg="CreateContainer within sandbox \"b9715d94334df2c0b0d778c267c41e3a8fe370119f17b59332cd6b79cc07e217\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"75df63ff91e2e9903eb8c8b5dfbac2471ca5ea81af98196ff089f6aa128a8e58\""
Sep 10 00:18:08.117177 containerd[1448]: time="2025-09-10T00:18:08.116806780Z" level=info msg="StartContainer for \"75df63ff91e2e9903eb8c8b5dfbac2471ca5ea81af98196ff089f6aa128a8e58\""
Sep 10 00:18:08.142768 kubelet[2491]: E0910 00:18:08.142594 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:18:08.162913 systemd[1]: Started cri-containerd-75df63ff91e2e9903eb8c8b5dfbac2471ca5ea81af98196ff089f6aa128a8e58.scope - libcontainer container 75df63ff91e2e9903eb8c8b5dfbac2471ca5ea81af98196ff089f6aa128a8e58.
Sep 10 00:18:08.198806 containerd[1448]: time="2025-09-10T00:18:08.198733676Z" level=info msg="StartContainer for \"75df63ff91e2e9903eb8c8b5dfbac2471ca5ea81af98196ff089f6aa128a8e58\" returns successfully"
Sep 10 00:18:08.438769 kubelet[2491]: I0910 00:18:08.438705 2491 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6cd6dcff69-pvd28" podStartSLOduration=29.298117919 podStartE2EDuration="31.43868628s" podCreationTimestamp="2025-09-10 00:17:37 +0000 UTC" firstStartedPulling="2025-09-10 00:18:04.299630765 +0000 UTC m=+44.507385849" lastFinishedPulling="2025-09-10 00:18:06.440199166 +0000 UTC m=+46.647954210" observedRunningTime="2025-09-10 00:18:07.151377353 +0000 UTC m=+47.359132397" watchObservedRunningTime="2025-09-10 00:18:08.43868628 +0000 UTC m=+48.646441364"
Sep 10 00:18:08.900674 kubelet[2491]: I0910 00:18:08.900558 2491 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 10 00:18:08.958975 systemd[1]: run-containerd-runc-k8s.io-f153fded1657f3daeaf17b9c425988969ba014861463a727f8d9c831bd77035f-runc.HukBAh.mount: Deactivated successfully.
Sep 10 00:18:09.051365 systemd[1]: run-containerd-runc-k8s.io-f153fded1657f3daeaf17b9c425988969ba014861463a727f8d9c831bd77035f-runc.nc9vK5.mount: Deactivated successfully.
Sep 10 00:18:10.037917 containerd[1448]: time="2025-09-10T00:18:10.037136761Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:18:10.038333 containerd[1448]: time="2025-09-10T00:18:10.037917601Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957"
Sep 10 00:18:10.038926 containerd[1448]: time="2025-09-10T00:18:10.038900801Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:18:10.041169 containerd[1448]: time="2025-09-10T00:18:10.041134602Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:18:10.042382 containerd[1448]: time="2025-09-10T00:18:10.042043722Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 1.946348705s"
Sep 10 00:18:10.042825 containerd[1448]: time="2025-09-10T00:18:10.042796962Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\""
Sep 10 00:18:10.043927 containerd[1448]: time="2025-09-10T00:18:10.043886362Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\""
Sep 10 00:18:10.057730 containerd[1448]: time="2025-09-10T00:18:10.057597844Z" level=info msg="CreateContainer within sandbox \"ed35986597b722440989965d3066e4f8e9750a0ad594bfb15ad32fa6fcb9853b\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Sep 10 00:18:10.082898 containerd[1448]: time="2025-09-10T00:18:10.082844928Z" level=info msg="CreateContainer within sandbox \"ed35986597b722440989965d3066e4f8e9750a0ad594bfb15ad32fa6fcb9853b\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"941282b1aecba0914c8fe14f99b063864e8dc2ac7f03ec81e11c958f2f05322c\""
Sep 10 00:18:10.083384 containerd[1448]: time="2025-09-10T00:18:10.083357648Z" level=info msg="StartContainer for \"941282b1aecba0914c8fe14f99b063864e8dc2ac7f03ec81e11c958f2f05322c\""
Sep 10 00:18:10.115257 systemd[1]: Started cri-containerd-941282b1aecba0914c8fe14f99b063864e8dc2ac7f03ec81e11c958f2f05322c.scope - libcontainer container 941282b1aecba0914c8fe14f99b063864e8dc2ac7f03ec81e11c958f2f05322c.
Sep 10 00:18:10.158789 containerd[1448]: time="2025-09-10T00:18:10.158698141Z" level=info msg="StartContainer for \"941282b1aecba0914c8fe14f99b063864e8dc2ac7f03ec81e11c958f2f05322c\" returns successfully"
Sep 10 00:18:10.963486 systemd[1]: Started sshd@9-10.0.0.124:22-10.0.0.1:50014.service - OpenSSH per-connection server daemon (10.0.0.1:50014).
Sep 10 00:18:11.010092 sshd[5337]: Accepted publickey for core from 10.0.0.1 port 50014 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o
Sep 10 00:18:11.011695 sshd[5337]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:18:11.015444 systemd-logind[1423]: New session 10 of user core.
Sep 10 00:18:11.031910 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 10 00:18:11.266071 kubelet[2491]: I0910 00:18:11.265914 2491 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-55c7f99b6f-f6wcn" podStartSLOduration=23.441609935 podStartE2EDuration="28.265895757s" podCreationTimestamp="2025-09-10 00:17:43 +0000 UTC" firstStartedPulling="2025-09-10 00:18:05.2192385 +0000 UTC m=+45.426993584" lastFinishedPulling="2025-09-10 00:18:10.043524322 +0000 UTC m=+50.251279406" observedRunningTime="2025-09-10 00:18:11.183622664 +0000 UTC m=+51.391377748" watchObservedRunningTime="2025-09-10 00:18:11.265895757 +0000 UTC m=+51.473650841"
Sep 10 00:18:11.335324 sshd[5337]: pam_unix(sshd:session): session closed for user core
Sep 10 00:18:11.344508 systemd[1]: sshd@9-10.0.0.124:22-10.0.0.1:50014.service: Deactivated successfully.
Sep 10 00:18:11.346730 systemd[1]: session-10.scope: Deactivated successfully.
Sep 10 00:18:11.348596 systemd-logind[1423]: Session 10 logged out. Waiting for processes to exit.
Sep 10 00:18:11.357195 systemd[1]: Started sshd@10-10.0.0.124:22-10.0.0.1:50024.service - OpenSSH per-connection server daemon (10.0.0.1:50024).
Sep 10 00:18:11.358193 systemd-logind[1423]: Removed session 10.
Sep 10 00:18:11.451625 sshd[5381]: Accepted publickey for core from 10.0.0.1 port 50024 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o
Sep 10 00:18:11.452943 sshd[5381]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:18:11.456984 systemd-logind[1423]: New session 11 of user core.
Sep 10 00:18:11.465901 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 10 00:18:11.747321 sshd[5381]: pam_unix(sshd:session): session closed for user core
Sep 10 00:18:11.762114 systemd[1]: sshd@10-10.0.0.124:22-10.0.0.1:50024.service: Deactivated successfully.
Sep 10 00:18:11.764858 systemd[1]: session-11.scope: Deactivated successfully.
Sep 10 00:18:11.766791 systemd-logind[1423]: Session 11 logged out. Waiting for processes to exit.
Sep 10 00:18:11.779087 systemd[1]: Started sshd@11-10.0.0.124:22-10.0.0.1:50034.service - OpenSSH per-connection server daemon (10.0.0.1:50034).
Sep 10 00:18:11.780200 systemd-logind[1423]: Removed session 11.
Sep 10 00:18:11.845221 sshd[5393]: Accepted publickey for core from 10.0.0.1 port 50034 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o
Sep 10 00:18:11.846552 sshd[5393]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:18:11.851259 systemd-logind[1423]: New session 12 of user core.
Sep 10 00:18:11.859940 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 10 00:18:12.216670 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount379403821.mount: Deactivated successfully.
Sep 10 00:18:12.364886 sshd[5393]: pam_unix(sshd:session): session closed for user core
Sep 10 00:18:12.374468 systemd[1]: sshd@11-10.0.0.124:22-10.0.0.1:50034.service: Deactivated successfully.
Sep 10 00:18:12.377213 systemd[1]: session-12.scope: Deactivated successfully.
Sep 10 00:18:12.379868 systemd-logind[1423]: Session 12 logged out. Waiting for processes to exit.
Sep 10 00:18:12.382599 systemd-logind[1423]: Removed session 12.
Sep 10 00:18:12.705894 containerd[1448]: time="2025-09-10T00:18:12.705834488Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:18:12.707246 containerd[1448]: time="2025-09-10T00:18:12.707144528Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332"
Sep 10 00:18:12.708231 containerd[1448]: time="2025-09-10T00:18:12.708199288Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:18:12.710759 containerd[1448]: time="2025-09-10T00:18:12.710728129Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:18:12.712143 containerd[1448]: time="2025-09-10T00:18:12.711466569Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 2.667544567s"
Sep 10 00:18:12.712143 containerd[1448]: time="2025-09-10T00:18:12.711503009Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\""
Sep 10 00:18:12.713024 containerd[1448]: time="2025-09-10T00:18:12.712996889Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\""
Sep 10 00:18:12.716657 containerd[1448]: time="2025-09-10T00:18:12.716625209Z" level=info msg="CreateContainer within sandbox \"1701ed35cffea5e1aea19de87473fd52178657c9f71650207637ad87f62b4ad8\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Sep 10 00:18:12.764849 containerd[1448]: time="2025-09-10T00:18:12.764774016Z" level=info msg="CreateContainer within sandbox \"1701ed35cffea5e1aea19de87473fd52178657c9f71650207637ad87f62b4ad8\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"25c571953d791421c0b6936fa523111256794c7899514cfd37478f3aa17a7193\""
Sep 10 00:18:12.765496 containerd[1448]: time="2025-09-10T00:18:12.765357776Z" level=info msg="StartContainer for \"25c571953d791421c0b6936fa523111256794c7899514cfd37478f3aa17a7193\""
Sep 10 00:18:12.796950 systemd[1]: Started cri-containerd-25c571953d791421c0b6936fa523111256794c7899514cfd37478f3aa17a7193.scope - libcontainer container 25c571953d791421c0b6936fa523111256794c7899514cfd37478f3aa17a7193.
Sep 10 00:18:12.830574 containerd[1448]: time="2025-09-10T00:18:12.830530586Z" level=info msg="StartContainer for \"25c571953d791421c0b6936fa523111256794c7899514cfd37478f3aa17a7193\" returns successfully"
Sep 10 00:18:14.477576 containerd[1448]: time="2025-09-10T00:18:14.476992402Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:18:14.478472 containerd[1448]: time="2025-09-10T00:18:14.477837402Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208"
Sep 10 00:18:14.479076 containerd[1448]: time="2025-09-10T00:18:14.479036882Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:18:14.481516 containerd[1448]: time="2025-09-10T00:18:14.481301043Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:18:14.482187 containerd[1448]: time="2025-09-10T00:18:14.482061603Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 1.769024674s"
Sep 10 00:18:14.482187 containerd[1448]: time="2025-09-10T00:18:14.482095283Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\""
Sep 10 00:18:14.486927 containerd[1448]: time="2025-09-10T00:18:14.486891603Z" level=info msg="CreateContainer within sandbox \"b9715d94334df2c0b0d778c267c41e3a8fe370119f17b59332cd6b79cc07e217\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 10 00:18:14.498860 containerd[1448]: time="2025-09-10T00:18:14.498824765Z" level=info msg="CreateContainer within sandbox \"b9715d94334df2c0b0d778c267c41e3a8fe370119f17b59332cd6b79cc07e217\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"c55a3f470c3b196579651436a432260b29b5393953f9bc2ad2e94553a2a01eff\""
Sep 10 00:18:14.499359 containerd[1448]: time="2025-09-10T00:18:14.499331845Z" level=info msg="StartContainer for \"c55a3f470c3b196579651436a432260b29b5393953f9bc2ad2e94553a2a01eff\""
Sep 10 00:18:14.530908 systemd[1]: Started cri-containerd-c55a3f470c3b196579651436a432260b29b5393953f9bc2ad2e94553a2a01eff.scope - libcontainer container c55a3f470c3b196579651436a432260b29b5393953f9bc2ad2e94553a2a01eff.
Sep 10 00:18:14.560367 containerd[1448]: time="2025-09-10T00:18:14.560286933Z" level=info msg="StartContainer for \"c55a3f470c3b196579651436a432260b29b5393953f9bc2ad2e94553a2a01eff\" returns successfully"
Sep 10 00:18:14.945644 kubelet[2491]: I0910 00:18:14.945603 2491 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 10 00:18:14.952973 kubelet[2491]: I0910 00:18:14.952944 2491 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 10 00:18:15.192213 kubelet[2491]: I0910 00:18:15.191274 2491 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-65xbg" podStartSLOduration=22.199705937 podStartE2EDuration="32.19125917s" podCreationTimestamp="2025-09-10 00:17:43 +0000 UTC" firstStartedPulling="2025-09-10 00:18:04.49125189 +0000 UTC m=+44.699006974" lastFinishedPulling="2025-09-10 00:18:14.482805123 +0000 UTC m=+54.690560207" observedRunningTime="2025-09-10 00:18:15.19069081 +0000 UTC m=+55.398445894" watchObservedRunningTime="2025-09-10 00:18:15.19125917 +0000 UTC m=+55.399014254"
Sep 10 00:18:15.192213 kubelet[2491]: I0910 00:18:15.191405 2491 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-njj6z" podStartSLOduration=25.74402525 podStartE2EDuration="32.19139953s" podCreationTimestamp="2025-09-10 00:17:43 +0000 UTC" firstStartedPulling="2025-09-10 00:18:06.264984609 +0000 UTC m=+46.472739693" lastFinishedPulling="2025-09-10 00:18:12.712358889 +0000 UTC m=+52.920113973" observedRunningTime="2025-09-10 00:18:13.183948834 +0000 UTC m=+53.391703918" watchObservedRunningTime="2025-09-10 00:18:15.19139953 +0000 UTC m=+55.399154614"
Sep 10 00:18:17.379284 systemd[1]: Started sshd@12-10.0.0.124:22-10.0.0.1:50048.service - OpenSSH per-connection server daemon (10.0.0.1:50048).
Sep 10 00:18:17.422186 sshd[5598]: Accepted publickey for core from 10.0.0.1 port 50048 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o
Sep 10 00:18:17.423627 sshd[5598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:18:17.427225 systemd-logind[1423]: New session 13 of user core.
Sep 10 00:18:17.436956 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 10 00:18:17.757273 sshd[5598]: pam_unix(sshd:session): session closed for user core
Sep 10 00:18:17.768341 systemd[1]: sshd@12-10.0.0.124:22-10.0.0.1:50048.service: Deactivated successfully.
Sep 10 00:18:17.770172 systemd[1]: session-13.scope: Deactivated successfully.
Sep 10 00:18:17.771561 systemd-logind[1423]: Session 13 logged out. Waiting for processes to exit.
Sep 10 00:18:17.772920 systemd[1]: Started sshd@13-10.0.0.124:22-10.0.0.1:50060.service - OpenSSH per-connection server daemon (10.0.0.1:50060).
Sep 10 00:18:17.773736 systemd-logind[1423]: Removed session 13.
Sep 10 00:18:17.805691 sshd[5613]: Accepted publickey for core from 10.0.0.1 port 50060 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o
Sep 10 00:18:17.806887 sshd[5613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:18:17.810771 systemd-logind[1423]: New session 14 of user core.
Sep 10 00:18:17.819891 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 10 00:18:18.018198 sshd[5613]: pam_unix(sshd:session): session closed for user core
Sep 10 00:18:18.027407 systemd[1]: sshd@13-10.0.0.124:22-10.0.0.1:50060.service: Deactivated successfully.
Sep 10 00:18:18.029022 systemd[1]: session-14.scope: Deactivated successfully.
Sep 10 00:18:18.030439 systemd-logind[1423]: Session 14 logged out. Waiting for processes to exit.
Sep 10 00:18:18.035105 systemd[1]: Started sshd@14-10.0.0.124:22-10.0.0.1:50074.service - OpenSSH per-connection server daemon (10.0.0.1:50074).
Sep 10 00:18:18.037407 systemd-logind[1423]: Removed session 14.
Sep 10 00:18:18.072062 sshd[5625]: Accepted publickey for core from 10.0.0.1 port 50074 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o
Sep 10 00:18:18.073439 sshd[5625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:18:18.077827 systemd-logind[1423]: New session 15 of user core.
Sep 10 00:18:18.082941 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 10 00:18:18.690424 sshd[5625]: pam_unix(sshd:session): session closed for user core
Sep 10 00:18:18.699544 systemd[1]: sshd@14-10.0.0.124:22-10.0.0.1:50074.service: Deactivated successfully.
Sep 10 00:18:18.702438 systemd[1]: session-15.scope: Deactivated successfully.
Sep 10 00:18:18.704087 systemd-logind[1423]: Session 15 logged out. Waiting for processes to exit.
Sep 10 00:18:18.711068 systemd[1]: Started sshd@15-10.0.0.124:22-10.0.0.1:50082.service - OpenSSH per-connection server daemon (10.0.0.1:50082).
Sep 10 00:18:18.713881 systemd-logind[1423]: Removed session 15.
Sep 10 00:18:18.750527 sshd[5648]: Accepted publickey for core from 10.0.0.1 port 50082 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o
Sep 10 00:18:18.751929 sshd[5648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:18:18.755613 systemd-logind[1423]: New session 16 of user core.
Sep 10 00:18:18.765895 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 10 00:18:19.277065 sshd[5648]: pam_unix(sshd:session): session closed for user core
Sep 10 00:18:19.291551 systemd[1]: sshd@15-10.0.0.124:22-10.0.0.1:50082.service: Deactivated successfully.
Sep 10 00:18:19.293435 systemd[1]: session-16.scope: Deactivated successfully.
Sep 10 00:18:19.295448 systemd-logind[1423]: Session 16 logged out. Waiting for processes to exit.
Sep 10 00:18:19.296946 systemd[1]: Started sshd@16-10.0.0.124:22-10.0.0.1:50092.service - OpenSSH per-connection server daemon (10.0.0.1:50092).
Sep 10 00:18:19.297617 systemd-logind[1423]: Removed session 16.
Sep 10 00:18:19.334417 sshd[5662]: Accepted publickey for core from 10.0.0.1 port 50092 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o
Sep 10 00:18:19.335826 sshd[5662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:18:19.339795 systemd-logind[1423]: New session 17 of user core.
Sep 10 00:18:19.345890 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 10 00:18:19.474223 sshd[5662]: pam_unix(sshd:session): session closed for user core
Sep 10 00:18:19.477599 systemd[1]: sshd@16-10.0.0.124:22-10.0.0.1:50092.service: Deactivated successfully.
Sep 10 00:18:19.479493 systemd[1]: session-17.scope: Deactivated successfully.
Sep 10 00:18:19.480067 systemd-logind[1423]: Session 17 logged out. Waiting for processes to exit.
Sep 10 00:18:19.480985 systemd-logind[1423]: Removed session 17.
Sep 10 00:18:19.858799 containerd[1448]: time="2025-09-10T00:18:19.858741891Z" level=info msg="StopPodSandbox for \"236b8b75ac710cdcccdabd902fb2ca19e81ab2c20906abf395cf84177e19ad07\""
Sep 10 00:18:19.966245 containerd[1448]: 2025-09-10 00:18:19.926 [WARNING][5688] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="236b8b75ac710cdcccdabd902fb2ca19e81ab2c20906abf395cf84177e19ad07" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6cd6dcff69--pvd28-eth0", GenerateName:"calico-apiserver-6cd6dcff69-", Namespace:"calico-apiserver", SelfLink:"", UID:"e6f75a12-f91b-4f7f-9f91-ba290e49ea84", ResourceVersion:"1112", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 17, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cd6dcff69", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"974de05c0e76159fb05af51e50fd66c73ef854b4c86f1fa9d87a81171df4e53b", Pod:"calico-apiserver-6cd6dcff69-pvd28", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia2b6e5a128c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 10 00:18:19.966245 containerd[1448]: 2025-09-10 00:18:19.926 [INFO][5688] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="236b8b75ac710cdcccdabd902fb2ca19e81ab2c20906abf395cf84177e19ad07"
Sep 10 00:18:19.966245 containerd[1448]: 2025-09-10 00:18:19.927 [INFO][5688] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="236b8b75ac710cdcccdabd902fb2ca19e81ab2c20906abf395cf84177e19ad07" iface="eth0" netns=""
Sep 10 00:18:19.966245 containerd[1448]: 2025-09-10 00:18:19.927 [INFO][5688] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="236b8b75ac710cdcccdabd902fb2ca19e81ab2c20906abf395cf84177e19ad07"
Sep 10 00:18:19.966245 containerd[1448]: 2025-09-10 00:18:19.927 [INFO][5688] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="236b8b75ac710cdcccdabd902fb2ca19e81ab2c20906abf395cf84177e19ad07"
Sep 10 00:18:19.966245 containerd[1448]: 2025-09-10 00:18:19.951 [INFO][5697] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="236b8b75ac710cdcccdabd902fb2ca19e81ab2c20906abf395cf84177e19ad07" HandleID="k8s-pod-network.236b8b75ac710cdcccdabd902fb2ca19e81ab2c20906abf395cf84177e19ad07" Workload="localhost-k8s-calico--apiserver--6cd6dcff69--pvd28-eth0"
Sep 10 00:18:19.966245 containerd[1448]: 2025-09-10 00:18:19.951 [INFO][5697] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 10 00:18:19.966245 containerd[1448]: 2025-09-10 00:18:19.951 [INFO][5697] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 10 00:18:19.966245 containerd[1448]: 2025-09-10 00:18:19.961 [WARNING][5697] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="236b8b75ac710cdcccdabd902fb2ca19e81ab2c20906abf395cf84177e19ad07" HandleID="k8s-pod-network.236b8b75ac710cdcccdabd902fb2ca19e81ab2c20906abf395cf84177e19ad07" Workload="localhost-k8s-calico--apiserver--6cd6dcff69--pvd28-eth0"
Sep 10 00:18:19.966245 containerd[1448]: 2025-09-10 00:18:19.961 [INFO][5697] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="236b8b75ac710cdcccdabd902fb2ca19e81ab2c20906abf395cf84177e19ad07" HandleID="k8s-pod-network.236b8b75ac710cdcccdabd902fb2ca19e81ab2c20906abf395cf84177e19ad07" Workload="localhost-k8s-calico--apiserver--6cd6dcff69--pvd28-eth0"
Sep 10 00:18:19.966245 containerd[1448]: 2025-09-10 00:18:19.962 [INFO][5697] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 10 00:18:19.966245 containerd[1448]: 2025-09-10 00:18:19.964 [INFO][5688] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="236b8b75ac710cdcccdabd902fb2ca19e81ab2c20906abf395cf84177e19ad07"
Sep 10 00:18:19.967230 containerd[1448]: time="2025-09-10T00:18:19.966286180Z" level=info msg="TearDown network for sandbox \"236b8b75ac710cdcccdabd902fb2ca19e81ab2c20906abf395cf84177e19ad07\" successfully"
Sep 10 00:18:19.967230 containerd[1448]: time="2025-09-10T00:18:19.966311180Z" level=info msg="StopPodSandbox for \"236b8b75ac710cdcccdabd902fb2ca19e81ab2c20906abf395cf84177e19ad07\" returns successfully"
Sep 10 00:18:19.970909 containerd[1448]: time="2025-09-10T00:18:19.970865581Z" level=info msg="RemovePodSandbox for \"236b8b75ac710cdcccdabd902fb2ca19e81ab2c20906abf395cf84177e19ad07\""
Sep 10 00:18:19.973043 containerd[1448]: time="2025-09-10T00:18:19.972999461Z" level=info msg="Forcibly stopping sandbox \"236b8b75ac710cdcccdabd902fb2ca19e81ab2c20906abf395cf84177e19ad07\""
Sep 10 00:18:20.046233 containerd[1448]: 2025-09-10 00:18:20.011 [WARNING][5715] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="236b8b75ac710cdcccdabd902fb2ca19e81ab2c20906abf395cf84177e19ad07" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6cd6dcff69--pvd28-eth0", GenerateName:"calico-apiserver-6cd6dcff69-", Namespace:"calico-apiserver", SelfLink:"", UID:"e6f75a12-f91b-4f7f-9f91-ba290e49ea84", ResourceVersion:"1112", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 17, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cd6dcff69", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"974de05c0e76159fb05af51e50fd66c73ef854b4c86f1fa9d87a81171df4e53b", Pod:"calico-apiserver-6cd6dcff69-pvd28", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia2b6e5a128c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 10 00:18:20.046233 containerd[1448]: 2025-09-10 00:18:20.012 [INFO][5715] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="236b8b75ac710cdcccdabd902fb2ca19e81ab2c20906abf395cf84177e19ad07"
Sep 10 00:18:20.046233 containerd[1448]: 2025-09-10 00:18:20.012 [INFO][5715] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="236b8b75ac710cdcccdabd902fb2ca19e81ab2c20906abf395cf84177e19ad07" iface="eth0" netns=""
Sep 10 00:18:20.046233 containerd[1448]: 2025-09-10 00:18:20.012 [INFO][5715] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="236b8b75ac710cdcccdabd902fb2ca19e81ab2c20906abf395cf84177e19ad07"
Sep 10 00:18:20.046233 containerd[1448]: 2025-09-10 00:18:20.012 [INFO][5715] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="236b8b75ac710cdcccdabd902fb2ca19e81ab2c20906abf395cf84177e19ad07"
Sep 10 00:18:20.046233 containerd[1448]: 2025-09-10 00:18:20.030 [INFO][5724] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="236b8b75ac710cdcccdabd902fb2ca19e81ab2c20906abf395cf84177e19ad07" HandleID="k8s-pod-network.236b8b75ac710cdcccdabd902fb2ca19e81ab2c20906abf395cf84177e19ad07" Workload="localhost-k8s-calico--apiserver--6cd6dcff69--pvd28-eth0"
Sep 10 00:18:20.046233 containerd[1448]: 2025-09-10 00:18:20.030 [INFO][5724] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 10 00:18:20.046233 containerd[1448]: 2025-09-10 00:18:20.030 [INFO][5724] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 10 00:18:20.046233 containerd[1448]: 2025-09-10 00:18:20.041 [WARNING][5724] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="236b8b75ac710cdcccdabd902fb2ca19e81ab2c20906abf395cf84177e19ad07" HandleID="k8s-pod-network.236b8b75ac710cdcccdabd902fb2ca19e81ab2c20906abf395cf84177e19ad07" Workload="localhost-k8s-calico--apiserver--6cd6dcff69--pvd28-eth0"
Sep 10 00:18:20.046233 containerd[1448]: 2025-09-10 00:18:20.041 [INFO][5724] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="236b8b75ac710cdcccdabd902fb2ca19e81ab2c20906abf395cf84177e19ad07" HandleID="k8s-pod-network.236b8b75ac710cdcccdabd902fb2ca19e81ab2c20906abf395cf84177e19ad07" Workload="localhost-k8s-calico--apiserver--6cd6dcff69--pvd28-eth0"
Sep 10 00:18:20.046233 containerd[1448]: 2025-09-10 00:18:20.042 [INFO][5724] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 10 00:18:20.046233 containerd[1448]: 2025-09-10 00:18:20.044 [INFO][5715] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="236b8b75ac710cdcccdabd902fb2ca19e81ab2c20906abf395cf84177e19ad07"
Sep 10 00:18:20.046846 containerd[1448]: time="2025-09-10T00:18:20.046266347Z" level=info msg="TearDown network for sandbox \"236b8b75ac710cdcccdabd902fb2ca19e81ab2c20906abf395cf84177e19ad07\" successfully"
Sep 10 00:18:20.069606 containerd[1448]: time="2025-09-10T00:18:20.069539869Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"236b8b75ac710cdcccdabd902fb2ca19e81ab2c20906abf395cf84177e19ad07\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 10 00:18:20.069694 containerd[1448]: time="2025-09-10T00:18:20.069649109Z" level=info msg="RemovePodSandbox \"236b8b75ac710cdcccdabd902fb2ca19e81ab2c20906abf395cf84177e19ad07\" returns successfully"
Sep 10 00:18:20.070296 containerd[1448]: time="2025-09-10T00:18:20.070256589Z" level=info msg="StopPodSandbox for \"237b2951e6d066a5ef349d91ec76cb766d252860227afe68b6554018af713707\""
Sep 10 00:18:20.144689 containerd[1448]: 2025-09-10 00:18:20.108 [WARNING][5741] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP.
ContainerID="237b2951e6d066a5ef349d91ec76cb766d252860227afe68b6554018af713707" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6cd6dcff69--qt9l6-eth0", GenerateName:"calico-apiserver-6cd6dcff69-", Namespace:"calico-apiserver", SelfLink:"", UID:"9eadb910-646d-43a4-b7b4-6b854d565ea6", ResourceVersion:"1121", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 17, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cd6dcff69", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"12d8978fa7b0c324fbfc9e6c60ed404b8d09e82865e325fb376b87b454bebc9c", Pod:"calico-apiserver-6cd6dcff69-qt9l6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7de37e33db7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:18:20.144689 containerd[1448]: 2025-09-10 00:18:20.108 [INFO][5741] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="237b2951e6d066a5ef349d91ec76cb766d252860227afe68b6554018af713707" Sep 10 00:18:20.144689 containerd[1448]: 2025-09-10 00:18:20.108 [INFO][5741] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="237b2951e6d066a5ef349d91ec76cb766d252860227afe68b6554018af713707" iface="eth0" netns="" Sep 10 00:18:20.144689 containerd[1448]: 2025-09-10 00:18:20.108 [INFO][5741] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="237b2951e6d066a5ef349d91ec76cb766d252860227afe68b6554018af713707" Sep 10 00:18:20.144689 containerd[1448]: 2025-09-10 00:18:20.108 [INFO][5741] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="237b2951e6d066a5ef349d91ec76cb766d252860227afe68b6554018af713707" Sep 10 00:18:20.144689 containerd[1448]: 2025-09-10 00:18:20.129 [INFO][5749] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="237b2951e6d066a5ef349d91ec76cb766d252860227afe68b6554018af713707" HandleID="k8s-pod-network.237b2951e6d066a5ef349d91ec76cb766d252860227afe68b6554018af713707" Workload="localhost-k8s-calico--apiserver--6cd6dcff69--qt9l6-eth0" Sep 10 00:18:20.144689 containerd[1448]: 2025-09-10 00:18:20.129 [INFO][5749] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:18:20.144689 containerd[1448]: 2025-09-10 00:18:20.129 [INFO][5749] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:18:20.144689 containerd[1448]: 2025-09-10 00:18:20.139 [WARNING][5749] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="237b2951e6d066a5ef349d91ec76cb766d252860227afe68b6554018af713707" HandleID="k8s-pod-network.237b2951e6d066a5ef349d91ec76cb766d252860227afe68b6554018af713707" Workload="localhost-k8s-calico--apiserver--6cd6dcff69--qt9l6-eth0" Sep 10 00:18:20.144689 containerd[1448]: 2025-09-10 00:18:20.139 [INFO][5749] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="237b2951e6d066a5ef349d91ec76cb766d252860227afe68b6554018af713707" HandleID="k8s-pod-network.237b2951e6d066a5ef349d91ec76cb766d252860227afe68b6554018af713707" Workload="localhost-k8s-calico--apiserver--6cd6dcff69--qt9l6-eth0" Sep 10 00:18:20.144689 containerd[1448]: 2025-09-10 00:18:20.141 [INFO][5749] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:18:20.144689 containerd[1448]: 2025-09-10 00:18:20.143 [INFO][5741] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="237b2951e6d066a5ef349d91ec76cb766d252860227afe68b6554018af713707" Sep 10 00:18:20.144689 containerd[1448]: time="2025-09-10T00:18:20.144671996Z" level=info msg="TearDown network for sandbox \"237b2951e6d066a5ef349d91ec76cb766d252860227afe68b6554018af713707\" successfully" Sep 10 00:18:20.145439 containerd[1448]: time="2025-09-10T00:18:20.144695436Z" level=info msg="StopPodSandbox for \"237b2951e6d066a5ef349d91ec76cb766d252860227afe68b6554018af713707\" returns successfully" Sep 10 00:18:20.145439 containerd[1448]: time="2025-09-10T00:18:20.145120796Z" level=info msg="RemovePodSandbox for \"237b2951e6d066a5ef349d91ec76cb766d252860227afe68b6554018af713707\"" Sep 10 00:18:20.145439 containerd[1448]: time="2025-09-10T00:18:20.145159516Z" level=info msg="Forcibly stopping sandbox \"237b2951e6d066a5ef349d91ec76cb766d252860227afe68b6554018af713707\"" Sep 10 00:18:20.210841 containerd[1448]: 2025-09-10 00:18:20.178 [WARNING][5765] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="237b2951e6d066a5ef349d91ec76cb766d252860227afe68b6554018af713707" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6cd6dcff69--qt9l6-eth0", GenerateName:"calico-apiserver-6cd6dcff69-", Namespace:"calico-apiserver", SelfLink:"", UID:"9eadb910-646d-43a4-b7b4-6b854d565ea6", ResourceVersion:"1121", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 17, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cd6dcff69", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"12d8978fa7b0c324fbfc9e6c60ed404b8d09e82865e325fb376b87b454bebc9c", Pod:"calico-apiserver-6cd6dcff69-qt9l6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7de37e33db7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:18:20.210841 containerd[1448]: 2025-09-10 00:18:20.178 [INFO][5765] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="237b2951e6d066a5ef349d91ec76cb766d252860227afe68b6554018af713707" Sep 10 00:18:20.210841 containerd[1448]: 2025-09-10 00:18:20.178 [INFO][5765] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="237b2951e6d066a5ef349d91ec76cb766d252860227afe68b6554018af713707" iface="eth0" netns="" Sep 10 00:18:20.210841 containerd[1448]: 2025-09-10 00:18:20.178 [INFO][5765] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="237b2951e6d066a5ef349d91ec76cb766d252860227afe68b6554018af713707" Sep 10 00:18:20.210841 containerd[1448]: 2025-09-10 00:18:20.178 [INFO][5765] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="237b2951e6d066a5ef349d91ec76cb766d252860227afe68b6554018af713707" Sep 10 00:18:20.210841 containerd[1448]: 2025-09-10 00:18:20.197 [INFO][5773] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="237b2951e6d066a5ef349d91ec76cb766d252860227afe68b6554018af713707" HandleID="k8s-pod-network.237b2951e6d066a5ef349d91ec76cb766d252860227afe68b6554018af713707" Workload="localhost-k8s-calico--apiserver--6cd6dcff69--qt9l6-eth0" Sep 10 00:18:20.210841 containerd[1448]: 2025-09-10 00:18:20.197 [INFO][5773] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:18:20.210841 containerd[1448]: 2025-09-10 00:18:20.197 [INFO][5773] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:18:20.210841 containerd[1448]: 2025-09-10 00:18:20.206 [WARNING][5773] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="237b2951e6d066a5ef349d91ec76cb766d252860227afe68b6554018af713707" HandleID="k8s-pod-network.237b2951e6d066a5ef349d91ec76cb766d252860227afe68b6554018af713707" Workload="localhost-k8s-calico--apiserver--6cd6dcff69--qt9l6-eth0" Sep 10 00:18:20.210841 containerd[1448]: 2025-09-10 00:18:20.206 [INFO][5773] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="237b2951e6d066a5ef349d91ec76cb766d252860227afe68b6554018af713707" HandleID="k8s-pod-network.237b2951e6d066a5ef349d91ec76cb766d252860227afe68b6554018af713707" Workload="localhost-k8s-calico--apiserver--6cd6dcff69--qt9l6-eth0" Sep 10 00:18:20.210841 containerd[1448]: 2025-09-10 00:18:20.207 [INFO][5773] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:18:20.210841 containerd[1448]: 2025-09-10 00:18:20.209 [INFO][5765] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="237b2951e6d066a5ef349d91ec76cb766d252860227afe68b6554018af713707" Sep 10 00:18:20.211249 containerd[1448]: time="2025-09-10T00:18:20.210875441Z" level=info msg="TearDown network for sandbox \"237b2951e6d066a5ef349d91ec76cb766d252860227afe68b6554018af713707\" successfully" Sep 10 00:18:20.228961 containerd[1448]: time="2025-09-10T00:18:20.228742243Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"237b2951e6d066a5ef349d91ec76cb766d252860227afe68b6554018af713707\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 10 00:18:20.228961 containerd[1448]: time="2025-09-10T00:18:20.228853323Z" level=info msg="RemovePodSandbox \"237b2951e6d066a5ef349d91ec76cb766d252860227afe68b6554018af713707\" returns successfully" Sep 10 00:18:20.229350 containerd[1448]: time="2025-09-10T00:18:20.229279563Z" level=info msg="StopPodSandbox for \"3021ec43b963f0388497de440343db09f02818aaab7fe38470f1090c4ce0c9b9\"" Sep 10 00:18:20.293101 containerd[1448]: 2025-09-10 00:18:20.259 [WARNING][5791] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="3021ec43b963f0388497de440343db09f02818aaab7fe38470f1090c4ce0c9b9" WorkloadEndpoint="localhost-k8s-whisker--79d7db7c55--h6dqb-eth0" Sep 10 00:18:20.293101 containerd[1448]: 2025-09-10 00:18:20.259 [INFO][5791] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3021ec43b963f0388497de440343db09f02818aaab7fe38470f1090c4ce0c9b9" Sep 10 00:18:20.293101 containerd[1448]: 2025-09-10 00:18:20.260 [INFO][5791] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="3021ec43b963f0388497de440343db09f02818aaab7fe38470f1090c4ce0c9b9" iface="eth0" netns="" Sep 10 00:18:20.293101 containerd[1448]: 2025-09-10 00:18:20.260 [INFO][5791] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3021ec43b963f0388497de440343db09f02818aaab7fe38470f1090c4ce0c9b9" Sep 10 00:18:20.293101 containerd[1448]: 2025-09-10 00:18:20.260 [INFO][5791] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3021ec43b963f0388497de440343db09f02818aaab7fe38470f1090c4ce0c9b9" Sep 10 00:18:20.293101 containerd[1448]: 2025-09-10 00:18:20.278 [INFO][5800] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3021ec43b963f0388497de440343db09f02818aaab7fe38470f1090c4ce0c9b9" HandleID="k8s-pod-network.3021ec43b963f0388497de440343db09f02818aaab7fe38470f1090c4ce0c9b9" Workload="localhost-k8s-whisker--79d7db7c55--h6dqb-eth0" Sep 10 00:18:20.293101 containerd[1448]: 2025-09-10 00:18:20.278 [INFO][5800] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:18:20.293101 containerd[1448]: 2025-09-10 00:18:20.278 [INFO][5800] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:18:20.293101 containerd[1448]: 2025-09-10 00:18:20.288 [WARNING][5800] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3021ec43b963f0388497de440343db09f02818aaab7fe38470f1090c4ce0c9b9" HandleID="k8s-pod-network.3021ec43b963f0388497de440343db09f02818aaab7fe38470f1090c4ce0c9b9" Workload="localhost-k8s-whisker--79d7db7c55--h6dqb-eth0" Sep 10 00:18:20.293101 containerd[1448]: 2025-09-10 00:18:20.288 [INFO][5800] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3021ec43b963f0388497de440343db09f02818aaab7fe38470f1090c4ce0c9b9" HandleID="k8s-pod-network.3021ec43b963f0388497de440343db09f02818aaab7fe38470f1090c4ce0c9b9" Workload="localhost-k8s-whisker--79d7db7c55--h6dqb-eth0" Sep 10 00:18:20.293101 containerd[1448]: 2025-09-10 00:18:20.289 [INFO][5800] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:18:20.293101 containerd[1448]: 2025-09-10 00:18:20.291 [INFO][5791] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="3021ec43b963f0388497de440343db09f02818aaab7fe38470f1090c4ce0c9b9" Sep 10 00:18:20.293444 containerd[1448]: time="2025-09-10T00:18:20.293140088Z" level=info msg="TearDown network for sandbox \"3021ec43b963f0388497de440343db09f02818aaab7fe38470f1090c4ce0c9b9\" successfully" Sep 10 00:18:20.293444 containerd[1448]: time="2025-09-10T00:18:20.293166128Z" level=info msg="StopPodSandbox for \"3021ec43b963f0388497de440343db09f02818aaab7fe38470f1090c4ce0c9b9\" returns successfully" Sep 10 00:18:20.293973 containerd[1448]: time="2025-09-10T00:18:20.293943088Z" level=info msg="RemovePodSandbox for \"3021ec43b963f0388497de440343db09f02818aaab7fe38470f1090c4ce0c9b9\"" Sep 10 00:18:20.294035 containerd[1448]: time="2025-09-10T00:18:20.293991688Z" level=info msg="Forcibly stopping sandbox \"3021ec43b963f0388497de440343db09f02818aaab7fe38470f1090c4ce0c9b9\"" Sep 10 00:18:20.357038 containerd[1448]: 2025-09-10 00:18:20.325 [WARNING][5817] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="3021ec43b963f0388497de440343db09f02818aaab7fe38470f1090c4ce0c9b9" WorkloadEndpoint="localhost-k8s-whisker--79d7db7c55--h6dqb-eth0" Sep 10 00:18:20.357038 containerd[1448]: 2025-09-10 00:18:20.325 [INFO][5817] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3021ec43b963f0388497de440343db09f02818aaab7fe38470f1090c4ce0c9b9" Sep 10 00:18:20.357038 containerd[1448]: 2025-09-10 00:18:20.325 [INFO][5817] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3021ec43b963f0388497de440343db09f02818aaab7fe38470f1090c4ce0c9b9" iface="eth0" netns="" Sep 10 00:18:20.357038 containerd[1448]: 2025-09-10 00:18:20.325 [INFO][5817] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3021ec43b963f0388497de440343db09f02818aaab7fe38470f1090c4ce0c9b9" Sep 10 00:18:20.357038 containerd[1448]: 2025-09-10 00:18:20.325 [INFO][5817] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3021ec43b963f0388497de440343db09f02818aaab7fe38470f1090c4ce0c9b9" Sep 10 00:18:20.357038 containerd[1448]: 2025-09-10 00:18:20.342 [INFO][5826] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3021ec43b963f0388497de440343db09f02818aaab7fe38470f1090c4ce0c9b9" HandleID="k8s-pod-network.3021ec43b963f0388497de440343db09f02818aaab7fe38470f1090c4ce0c9b9" Workload="localhost-k8s-whisker--79d7db7c55--h6dqb-eth0" Sep 10 00:18:20.357038 containerd[1448]: 2025-09-10 00:18:20.342 [INFO][5826] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:18:20.357038 containerd[1448]: 2025-09-10 00:18:20.342 [INFO][5826] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:18:20.357038 containerd[1448]: 2025-09-10 00:18:20.352 [WARNING][5826] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3021ec43b963f0388497de440343db09f02818aaab7fe38470f1090c4ce0c9b9" HandleID="k8s-pod-network.3021ec43b963f0388497de440343db09f02818aaab7fe38470f1090c4ce0c9b9" Workload="localhost-k8s-whisker--79d7db7c55--h6dqb-eth0" Sep 10 00:18:20.357038 containerd[1448]: 2025-09-10 00:18:20.352 [INFO][5826] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3021ec43b963f0388497de440343db09f02818aaab7fe38470f1090c4ce0c9b9" HandleID="k8s-pod-network.3021ec43b963f0388497de440343db09f02818aaab7fe38470f1090c4ce0c9b9" Workload="localhost-k8s-whisker--79d7db7c55--h6dqb-eth0" Sep 10 00:18:20.357038 containerd[1448]: 2025-09-10 00:18:20.353 [INFO][5826] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:18:20.357038 containerd[1448]: 2025-09-10 00:18:20.355 [INFO][5817] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3021ec43b963f0388497de440343db09f02818aaab7fe38470f1090c4ce0c9b9" Sep 10 00:18:20.357380 containerd[1448]: time="2025-09-10T00:18:20.357065574Z" level=info msg="TearDown network for sandbox \"3021ec43b963f0388497de440343db09f02818aaab7fe38470f1090c4ce0c9b9\" successfully" Sep 10 00:18:20.366664 containerd[1448]: time="2025-09-10T00:18:20.366619615Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3021ec43b963f0388497de440343db09f02818aaab7fe38470f1090c4ce0c9b9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 10 00:18:20.366734 containerd[1448]: time="2025-09-10T00:18:20.366690535Z" level=info msg="RemovePodSandbox \"3021ec43b963f0388497de440343db09f02818aaab7fe38470f1090c4ce0c9b9\" returns successfully" Sep 10 00:18:20.367261 containerd[1448]: time="2025-09-10T00:18:20.367228055Z" level=info msg="StopPodSandbox for \"df3fdd8eb494be2e0bbfda368575310fac5cd793ff776dffd99966e4ab4c9a5d\"" Sep 10 00:18:20.430669 containerd[1448]: 2025-09-10 00:18:20.397 [WARNING][5844] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="df3fdd8eb494be2e0bbfda368575310fac5cd793ff776dffd99966e4ab4c9a5d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--qlcl5-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"2f22f2e0-6fae-4277-8ce4-e71e5b1601af", ResourceVersion:"1065", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 17, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cc5fa47ddc892a40fc29558c473dd0d8b53a26a0f1b414b37c2f3d97b6dccffc", Pod:"coredns-674b8bbfcf-qlcl5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1599ac329ac", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:18:20.430669 containerd[1448]: 2025-09-10 00:18:20.398 [INFO][5844] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="df3fdd8eb494be2e0bbfda368575310fac5cd793ff776dffd99966e4ab4c9a5d" Sep 10 00:18:20.430669 containerd[1448]: 2025-09-10 00:18:20.398 [INFO][5844] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="df3fdd8eb494be2e0bbfda368575310fac5cd793ff776dffd99966e4ab4c9a5d" iface="eth0" netns="" Sep 10 00:18:20.430669 containerd[1448]: 2025-09-10 00:18:20.398 [INFO][5844] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="df3fdd8eb494be2e0bbfda368575310fac5cd793ff776dffd99966e4ab4c9a5d" Sep 10 00:18:20.430669 containerd[1448]: 2025-09-10 00:18:20.398 [INFO][5844] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="df3fdd8eb494be2e0bbfda368575310fac5cd793ff776dffd99966e4ab4c9a5d" Sep 10 00:18:20.430669 containerd[1448]: 2025-09-10 00:18:20.417 [INFO][5853] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="df3fdd8eb494be2e0bbfda368575310fac5cd793ff776dffd99966e4ab4c9a5d" HandleID="k8s-pod-network.df3fdd8eb494be2e0bbfda368575310fac5cd793ff776dffd99966e4ab4c9a5d" Workload="localhost-k8s-coredns--674b8bbfcf--qlcl5-eth0" Sep 10 00:18:20.430669 containerd[1448]: 2025-09-10 00:18:20.417 [INFO][5853] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:18:20.430669 containerd[1448]: 2025-09-10 00:18:20.417 [INFO][5853] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 00:18:20.430669 containerd[1448]: 2025-09-10 00:18:20.426 [WARNING][5853] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="df3fdd8eb494be2e0bbfda368575310fac5cd793ff776dffd99966e4ab4c9a5d" HandleID="k8s-pod-network.df3fdd8eb494be2e0bbfda368575310fac5cd793ff776dffd99966e4ab4c9a5d" Workload="localhost-k8s-coredns--674b8bbfcf--qlcl5-eth0" Sep 10 00:18:20.430669 containerd[1448]: 2025-09-10 00:18:20.426 [INFO][5853] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="df3fdd8eb494be2e0bbfda368575310fac5cd793ff776dffd99966e4ab4c9a5d" HandleID="k8s-pod-network.df3fdd8eb494be2e0bbfda368575310fac5cd793ff776dffd99966e4ab4c9a5d" Workload="localhost-k8s-coredns--674b8bbfcf--qlcl5-eth0" Sep 10 00:18:20.430669 containerd[1448]: 2025-09-10 00:18:20.427 [INFO][5853] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:18:20.430669 containerd[1448]: 2025-09-10 00:18:20.428 [INFO][5844] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="df3fdd8eb494be2e0bbfda368575310fac5cd793ff776dffd99966e4ab4c9a5d" Sep 10 00:18:20.430669 containerd[1448]: time="2025-09-10T00:18:20.430642660Z" level=info msg="TearDown network for sandbox \"df3fdd8eb494be2e0bbfda368575310fac5cd793ff776dffd99966e4ab4c9a5d\" successfully" Sep 10 00:18:20.430669 containerd[1448]: time="2025-09-10T00:18:20.430669460Z" level=info msg="StopPodSandbox for \"df3fdd8eb494be2e0bbfda368575310fac5cd793ff776dffd99966e4ab4c9a5d\" returns successfully" Sep 10 00:18:20.431251 containerd[1448]: time="2025-09-10T00:18:20.431170100Z" level=info msg="RemovePodSandbox for \"df3fdd8eb494be2e0bbfda368575310fac5cd793ff776dffd99966e4ab4c9a5d\"" Sep 10 00:18:20.431251 containerd[1448]: time="2025-09-10T00:18:20.431203380Z" level=info msg="Forcibly stopping sandbox \"df3fdd8eb494be2e0bbfda368575310fac5cd793ff776dffd99966e4ab4c9a5d\"" Sep 10 00:18:20.497953 containerd[1448]: 2025-09-10 00:18:20.465 [WARNING][5871] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="df3fdd8eb494be2e0bbfda368575310fac5cd793ff776dffd99966e4ab4c9a5d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--qlcl5-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"2f22f2e0-6fae-4277-8ce4-e71e5b1601af", ResourceVersion:"1065", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 17, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cc5fa47ddc892a40fc29558c473dd0d8b53a26a0f1b414b37c2f3d97b6dccffc", Pod:"coredns-674b8bbfcf-qlcl5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1599ac329ac", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:18:20.497953 containerd[1448]: 2025-09-10 00:18:20.465 [INFO][5871] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="df3fdd8eb494be2e0bbfda368575310fac5cd793ff776dffd99966e4ab4c9a5d" Sep 10 00:18:20.497953 containerd[1448]: 2025-09-10 00:18:20.465 [INFO][5871] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="df3fdd8eb494be2e0bbfda368575310fac5cd793ff776dffd99966e4ab4c9a5d" iface="eth0" netns="" Sep 10 00:18:20.497953 containerd[1448]: 2025-09-10 00:18:20.465 [INFO][5871] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="df3fdd8eb494be2e0bbfda368575310fac5cd793ff776dffd99966e4ab4c9a5d" Sep 10 00:18:20.497953 containerd[1448]: 2025-09-10 00:18:20.465 [INFO][5871] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="df3fdd8eb494be2e0bbfda368575310fac5cd793ff776dffd99966e4ab4c9a5d" Sep 10 00:18:20.497953 containerd[1448]: 2025-09-10 00:18:20.483 [INFO][5880] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="df3fdd8eb494be2e0bbfda368575310fac5cd793ff776dffd99966e4ab4c9a5d" HandleID="k8s-pod-network.df3fdd8eb494be2e0bbfda368575310fac5cd793ff776dffd99966e4ab4c9a5d" Workload="localhost-k8s-coredns--674b8bbfcf--qlcl5-eth0" Sep 10 00:18:20.497953 containerd[1448]: 2025-09-10 00:18:20.483 [INFO][5880] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:18:20.497953 containerd[1448]: 2025-09-10 00:18:20.483 [INFO][5880] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 00:18:20.497953 containerd[1448]: 2025-09-10 00:18:20.493 [WARNING][5880] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="df3fdd8eb494be2e0bbfda368575310fac5cd793ff776dffd99966e4ab4c9a5d" HandleID="k8s-pod-network.df3fdd8eb494be2e0bbfda368575310fac5cd793ff776dffd99966e4ab4c9a5d" Workload="localhost-k8s-coredns--674b8bbfcf--qlcl5-eth0" Sep 10 00:18:20.497953 containerd[1448]: 2025-09-10 00:18:20.493 [INFO][5880] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="df3fdd8eb494be2e0bbfda368575310fac5cd793ff776dffd99966e4ab4c9a5d" HandleID="k8s-pod-network.df3fdd8eb494be2e0bbfda368575310fac5cd793ff776dffd99966e4ab4c9a5d" Workload="localhost-k8s-coredns--674b8bbfcf--qlcl5-eth0" Sep 10 00:18:20.497953 containerd[1448]: 2025-09-10 00:18:20.494 [INFO][5880] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:18:20.497953 containerd[1448]: 2025-09-10 00:18:20.496 [INFO][5871] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="df3fdd8eb494be2e0bbfda368575310fac5cd793ff776dffd99966e4ab4c9a5d" Sep 10 00:18:20.498456 containerd[1448]: time="2025-09-10T00:18:20.497989826Z" level=info msg="TearDown network for sandbox \"df3fdd8eb494be2e0bbfda368575310fac5cd793ff776dffd99966e4ab4c9a5d\" successfully" Sep 10 00:18:20.507149 containerd[1448]: time="2025-09-10T00:18:20.507093866Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"df3fdd8eb494be2e0bbfda368575310fac5cd793ff776dffd99966e4ab4c9a5d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 10 00:18:20.507234 containerd[1448]: time="2025-09-10T00:18:20.507185226Z" level=info msg="RemovePodSandbox \"df3fdd8eb494be2e0bbfda368575310fac5cd793ff776dffd99966e4ab4c9a5d\" returns successfully" Sep 10 00:18:20.507696 containerd[1448]: time="2025-09-10T00:18:20.507669306Z" level=info msg="StopPodSandbox for \"f5b08eb23cffa606682e4fc37107e987818118f64a539991a6160e1ac69b86e3\"" Sep 10 00:18:20.572938 containerd[1448]: 2025-09-10 00:18:20.540 [WARNING][5898] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f5b08eb23cffa606682e4fc37107e987818118f64a539991a6160e1ac69b86e3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--55c7f99b6f--f6wcn-eth0", GenerateName:"calico-kube-controllers-55c7f99b6f-", Namespace:"calico-system", SelfLink:"", UID:"22f16cc2-19f3-4586-9c67-213437e8718f", ResourceVersion:"1162", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 17, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55c7f99b6f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ed35986597b722440989965d3066e4f8e9750a0ad594bfb15ad32fa6fcb9853b", Pod:"calico-kube-controllers-55c7f99b6f-f6wcn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7f36fa0fda3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:18:20.572938 containerd[1448]: 2025-09-10 00:18:20.540 [INFO][5898] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f5b08eb23cffa606682e4fc37107e987818118f64a539991a6160e1ac69b86e3" Sep 10 00:18:20.572938 containerd[1448]: 2025-09-10 00:18:20.540 [INFO][5898] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f5b08eb23cffa606682e4fc37107e987818118f64a539991a6160e1ac69b86e3" iface="eth0" netns="" Sep 10 00:18:20.572938 containerd[1448]: 2025-09-10 00:18:20.540 [INFO][5898] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f5b08eb23cffa606682e4fc37107e987818118f64a539991a6160e1ac69b86e3" Sep 10 00:18:20.572938 containerd[1448]: 2025-09-10 00:18:20.540 [INFO][5898] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f5b08eb23cffa606682e4fc37107e987818118f64a539991a6160e1ac69b86e3" Sep 10 00:18:20.572938 containerd[1448]: 2025-09-10 00:18:20.558 [INFO][5906] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f5b08eb23cffa606682e4fc37107e987818118f64a539991a6160e1ac69b86e3" HandleID="k8s-pod-network.f5b08eb23cffa606682e4fc37107e987818118f64a539991a6160e1ac69b86e3" Workload="localhost-k8s-calico--kube--controllers--55c7f99b6f--f6wcn-eth0" Sep 10 00:18:20.572938 containerd[1448]: 2025-09-10 00:18:20.558 [INFO][5906] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:18:20.572938 containerd[1448]: 2025-09-10 00:18:20.558 [INFO][5906] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:18:20.572938 containerd[1448]: 2025-09-10 00:18:20.568 [WARNING][5906] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f5b08eb23cffa606682e4fc37107e987818118f64a539991a6160e1ac69b86e3" HandleID="k8s-pod-network.f5b08eb23cffa606682e4fc37107e987818118f64a539991a6160e1ac69b86e3" Workload="localhost-k8s-calico--kube--controllers--55c7f99b6f--f6wcn-eth0" Sep 10 00:18:20.572938 containerd[1448]: 2025-09-10 00:18:20.568 [INFO][5906] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f5b08eb23cffa606682e4fc37107e987818118f64a539991a6160e1ac69b86e3" HandleID="k8s-pod-network.f5b08eb23cffa606682e4fc37107e987818118f64a539991a6160e1ac69b86e3" Workload="localhost-k8s-calico--kube--controllers--55c7f99b6f--f6wcn-eth0" Sep 10 00:18:20.572938 containerd[1448]: 2025-09-10 00:18:20.569 [INFO][5906] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:18:20.572938 containerd[1448]: 2025-09-10 00:18:20.571 [INFO][5898] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f5b08eb23cffa606682e4fc37107e987818118f64a539991a6160e1ac69b86e3" Sep 10 00:18:20.572938 containerd[1448]: time="2025-09-10T00:18:20.572933152Z" level=info msg="TearDown network for sandbox \"f5b08eb23cffa606682e4fc37107e987818118f64a539991a6160e1ac69b86e3\" successfully" Sep 10 00:18:20.572938 containerd[1448]: time="2025-09-10T00:18:20.572958312Z" level=info msg="StopPodSandbox for \"f5b08eb23cffa606682e4fc37107e987818118f64a539991a6160e1ac69b86e3\" returns successfully" Sep 10 00:18:20.573517 containerd[1448]: time="2025-09-10T00:18:20.573377832Z" level=info msg="RemovePodSandbox for \"f5b08eb23cffa606682e4fc37107e987818118f64a539991a6160e1ac69b86e3\"" Sep 10 00:18:20.573517 containerd[1448]: time="2025-09-10T00:18:20.573409032Z" level=info msg="Forcibly stopping sandbox \"f5b08eb23cffa606682e4fc37107e987818118f64a539991a6160e1ac69b86e3\"" Sep 10 00:18:20.652834 containerd[1448]: 2025-09-10 00:18:20.610 [WARNING][5923] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f5b08eb23cffa606682e4fc37107e987818118f64a539991a6160e1ac69b86e3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--55c7f99b6f--f6wcn-eth0", GenerateName:"calico-kube-controllers-55c7f99b6f-", Namespace:"calico-system", SelfLink:"", UID:"22f16cc2-19f3-4586-9c67-213437e8718f", ResourceVersion:"1162", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 17, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55c7f99b6f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ed35986597b722440989965d3066e4f8e9750a0ad594bfb15ad32fa6fcb9853b", Pod:"calico-kube-controllers-55c7f99b6f-f6wcn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7f36fa0fda3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:18:20.652834 containerd[1448]: 2025-09-10 00:18:20.611 [INFO][5923] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f5b08eb23cffa606682e4fc37107e987818118f64a539991a6160e1ac69b86e3" Sep 10 00:18:20.652834 containerd[1448]: 2025-09-10 00:18:20.611 [INFO][5923] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f5b08eb23cffa606682e4fc37107e987818118f64a539991a6160e1ac69b86e3" iface="eth0" netns="" Sep 10 00:18:20.652834 containerd[1448]: 2025-09-10 00:18:20.611 [INFO][5923] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f5b08eb23cffa606682e4fc37107e987818118f64a539991a6160e1ac69b86e3" Sep 10 00:18:20.652834 containerd[1448]: 2025-09-10 00:18:20.611 [INFO][5923] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f5b08eb23cffa606682e4fc37107e987818118f64a539991a6160e1ac69b86e3" Sep 10 00:18:20.652834 containerd[1448]: 2025-09-10 00:18:20.640 [INFO][5931] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f5b08eb23cffa606682e4fc37107e987818118f64a539991a6160e1ac69b86e3" HandleID="k8s-pod-network.f5b08eb23cffa606682e4fc37107e987818118f64a539991a6160e1ac69b86e3" Workload="localhost-k8s-calico--kube--controllers--55c7f99b6f--f6wcn-eth0" Sep 10 00:18:20.652834 containerd[1448]: 2025-09-10 00:18:20.640 [INFO][5931] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:18:20.652834 containerd[1448]: 2025-09-10 00:18:20.640 [INFO][5931] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:18:20.652834 containerd[1448]: 2025-09-10 00:18:20.648 [WARNING][5931] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f5b08eb23cffa606682e4fc37107e987818118f64a539991a6160e1ac69b86e3" HandleID="k8s-pod-network.f5b08eb23cffa606682e4fc37107e987818118f64a539991a6160e1ac69b86e3" Workload="localhost-k8s-calico--kube--controllers--55c7f99b6f--f6wcn-eth0" Sep 10 00:18:20.652834 containerd[1448]: 2025-09-10 00:18:20.648 [INFO][5931] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f5b08eb23cffa606682e4fc37107e987818118f64a539991a6160e1ac69b86e3" HandleID="k8s-pod-network.f5b08eb23cffa606682e4fc37107e987818118f64a539991a6160e1ac69b86e3" Workload="localhost-k8s-calico--kube--controllers--55c7f99b6f--f6wcn-eth0" Sep 10 00:18:20.652834 containerd[1448]: 2025-09-10 00:18:20.649 [INFO][5931] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:18:20.652834 containerd[1448]: 2025-09-10 00:18:20.651 [INFO][5923] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f5b08eb23cffa606682e4fc37107e987818118f64a539991a6160e1ac69b86e3" Sep 10 00:18:20.654159 containerd[1448]: time="2025-09-10T00:18:20.652909479Z" level=info msg="TearDown network for sandbox \"f5b08eb23cffa606682e4fc37107e987818118f64a539991a6160e1ac69b86e3\" successfully" Sep 10 00:18:20.656742 containerd[1448]: time="2025-09-10T00:18:20.656704959Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f5b08eb23cffa606682e4fc37107e987818118f64a539991a6160e1ac69b86e3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 10 00:18:20.656908 containerd[1448]: time="2025-09-10T00:18:20.656890599Z" level=info msg="RemovePodSandbox \"f5b08eb23cffa606682e4fc37107e987818118f64a539991a6160e1ac69b86e3\" returns successfully" Sep 10 00:18:20.657505 containerd[1448]: time="2025-09-10T00:18:20.657478359Z" level=info msg="StopPodSandbox for \"f545795cd7c9a84cc5868a255fc2edd09a29cadcb3d54017ee515da41c43e964\"" Sep 10 00:18:20.738435 containerd[1448]: 2025-09-10 00:18:20.697 [WARNING][5948] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f545795cd7c9a84cc5868a255fc2edd09a29cadcb3d54017ee515da41c43e964" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--wv9tb-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c5fb1d1a-aa75-4564-aca9-9712fec491bb", ResourceVersion:"1070", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 17, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e35c674a53025fdb8af8c42726328d1b34b30a2405e072046beb04403c5b1d18", Pod:"coredns-674b8bbfcf-wv9tb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8244a20a9f7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:18:20.738435 containerd[1448]: 2025-09-10 00:18:20.698 [INFO][5948] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f545795cd7c9a84cc5868a255fc2edd09a29cadcb3d54017ee515da41c43e964" Sep 10 00:18:20.738435 containerd[1448]: 2025-09-10 00:18:20.698 [INFO][5948] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f545795cd7c9a84cc5868a255fc2edd09a29cadcb3d54017ee515da41c43e964" iface="eth0" netns="" Sep 10 00:18:20.738435 containerd[1448]: 2025-09-10 00:18:20.698 [INFO][5948] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f545795cd7c9a84cc5868a255fc2edd09a29cadcb3d54017ee515da41c43e964" Sep 10 00:18:20.738435 containerd[1448]: 2025-09-10 00:18:20.698 [INFO][5948] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f545795cd7c9a84cc5868a255fc2edd09a29cadcb3d54017ee515da41c43e964" Sep 10 00:18:20.738435 containerd[1448]: 2025-09-10 00:18:20.718 [INFO][5957] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f545795cd7c9a84cc5868a255fc2edd09a29cadcb3d54017ee515da41c43e964" HandleID="k8s-pod-network.f545795cd7c9a84cc5868a255fc2edd09a29cadcb3d54017ee515da41c43e964" Workload="localhost-k8s-coredns--674b8bbfcf--wv9tb-eth0" Sep 10 00:18:20.738435 containerd[1448]: 2025-09-10 00:18:20.718 [INFO][5957] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:18:20.738435 containerd[1448]: 2025-09-10 00:18:20.718 [INFO][5957] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 00:18:20.738435 containerd[1448]: 2025-09-10 00:18:20.728 [WARNING][5957] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f545795cd7c9a84cc5868a255fc2edd09a29cadcb3d54017ee515da41c43e964" HandleID="k8s-pod-network.f545795cd7c9a84cc5868a255fc2edd09a29cadcb3d54017ee515da41c43e964" Workload="localhost-k8s-coredns--674b8bbfcf--wv9tb-eth0" Sep 10 00:18:20.738435 containerd[1448]: 2025-09-10 00:18:20.728 [INFO][5957] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f545795cd7c9a84cc5868a255fc2edd09a29cadcb3d54017ee515da41c43e964" HandleID="k8s-pod-network.f545795cd7c9a84cc5868a255fc2edd09a29cadcb3d54017ee515da41c43e964" Workload="localhost-k8s-coredns--674b8bbfcf--wv9tb-eth0" Sep 10 00:18:20.738435 containerd[1448]: 2025-09-10 00:18:20.734 [INFO][5957] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:18:20.738435 containerd[1448]: 2025-09-10 00:18:20.736 [INFO][5948] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f545795cd7c9a84cc5868a255fc2edd09a29cadcb3d54017ee515da41c43e964" Sep 10 00:18:20.738904 containerd[1448]: time="2025-09-10T00:18:20.738475686Z" level=info msg="TearDown network for sandbox \"f545795cd7c9a84cc5868a255fc2edd09a29cadcb3d54017ee515da41c43e964\" successfully" Sep 10 00:18:20.738904 containerd[1448]: time="2025-09-10T00:18:20.738499926Z" level=info msg="StopPodSandbox for \"f545795cd7c9a84cc5868a255fc2edd09a29cadcb3d54017ee515da41c43e964\" returns successfully" Sep 10 00:18:20.747690 containerd[1448]: time="2025-09-10T00:18:20.747652847Z" level=info msg="RemovePodSandbox for \"f545795cd7c9a84cc5868a255fc2edd09a29cadcb3d54017ee515da41c43e964\"" Sep 10 00:18:20.747690 containerd[1448]: time="2025-09-10T00:18:20.747694807Z" level=info msg="Forcibly stopping sandbox \"f545795cd7c9a84cc5868a255fc2edd09a29cadcb3d54017ee515da41c43e964\"" Sep 10 00:18:20.843317 containerd[1448]: 2025-09-10 00:18:20.785 [WARNING][5975] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f545795cd7c9a84cc5868a255fc2edd09a29cadcb3d54017ee515da41c43e964" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--wv9tb-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c5fb1d1a-aa75-4564-aca9-9712fec491bb", ResourceVersion:"1070", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 17, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e35c674a53025fdb8af8c42726328d1b34b30a2405e072046beb04403c5b1d18", Pod:"coredns-674b8bbfcf-wv9tb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8244a20a9f7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:18:20.843317 containerd[1448]: 2025-09-10 00:18:20.785 [INFO][5975] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f545795cd7c9a84cc5868a255fc2edd09a29cadcb3d54017ee515da41c43e964" Sep 10 00:18:20.843317 containerd[1448]: 2025-09-10 00:18:20.785 [INFO][5975] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f545795cd7c9a84cc5868a255fc2edd09a29cadcb3d54017ee515da41c43e964" iface="eth0" netns="" Sep 10 00:18:20.843317 containerd[1448]: 2025-09-10 00:18:20.785 [INFO][5975] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f545795cd7c9a84cc5868a255fc2edd09a29cadcb3d54017ee515da41c43e964" Sep 10 00:18:20.843317 containerd[1448]: 2025-09-10 00:18:20.785 [INFO][5975] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f545795cd7c9a84cc5868a255fc2edd09a29cadcb3d54017ee515da41c43e964" Sep 10 00:18:20.843317 containerd[1448]: 2025-09-10 00:18:20.814 [INFO][5984] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f545795cd7c9a84cc5868a255fc2edd09a29cadcb3d54017ee515da41c43e964" HandleID="k8s-pod-network.f545795cd7c9a84cc5868a255fc2edd09a29cadcb3d54017ee515da41c43e964" Workload="localhost-k8s-coredns--674b8bbfcf--wv9tb-eth0" Sep 10 00:18:20.843317 containerd[1448]: 2025-09-10 00:18:20.814 [INFO][5984] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:18:20.843317 containerd[1448]: 2025-09-10 00:18:20.815 [INFO][5984] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 00:18:20.843317 containerd[1448]: 2025-09-10 00:18:20.830 [WARNING][5984] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f545795cd7c9a84cc5868a255fc2edd09a29cadcb3d54017ee515da41c43e964" HandleID="k8s-pod-network.f545795cd7c9a84cc5868a255fc2edd09a29cadcb3d54017ee515da41c43e964" Workload="localhost-k8s-coredns--674b8bbfcf--wv9tb-eth0"
Sep 10 00:18:20.843317 containerd[1448]: 2025-09-10 00:18:20.830 [INFO][5984] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f545795cd7c9a84cc5868a255fc2edd09a29cadcb3d54017ee515da41c43e964" HandleID="k8s-pod-network.f545795cd7c9a84cc5868a255fc2edd09a29cadcb3d54017ee515da41c43e964" Workload="localhost-k8s-coredns--674b8bbfcf--wv9tb-eth0"
Sep 10 00:18:20.843317 containerd[1448]: 2025-09-10 00:18:20.836 [INFO][5984] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 10 00:18:20.843317 containerd[1448]: 2025-09-10 00:18:20.840 [INFO][5975] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f545795cd7c9a84cc5868a255fc2edd09a29cadcb3d54017ee515da41c43e964"
Sep 10 00:18:20.843317 containerd[1448]: time="2025-09-10T00:18:20.843345655Z" level=info msg="TearDown network for sandbox \"f545795cd7c9a84cc5868a255fc2edd09a29cadcb3d54017ee515da41c43e964\" successfully"
Sep 10 00:18:20.853445 containerd[1448]: time="2025-09-10T00:18:20.850852256Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f545795cd7c9a84cc5868a255fc2edd09a29cadcb3d54017ee515da41c43e964\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 10 00:18:20.853445 containerd[1448]: time="2025-09-10T00:18:20.850948216Z" level=info msg="RemovePodSandbox \"f545795cd7c9a84cc5868a255fc2edd09a29cadcb3d54017ee515da41c43e964\" returns successfully"
Sep 10 00:18:20.853445 containerd[1448]: time="2025-09-10T00:18:20.853164576Z" level=info msg="StopPodSandbox for \"718b009935c17339c8db7c4995f78c36640d04e24a57d704d800e7bfb8296b52\""
Sep 10 00:18:20.923419 containerd[1448]: 2025-09-10 00:18:20.885 [WARNING][6002] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="718b009935c17339c8db7c4995f78c36640d04e24a57d704d800e7bfb8296b52" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--65xbg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cc428a56-c099-4362-852b-dab9e5d9f7b7", ResourceVersion:"1206", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 17, 43, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b9715d94334df2c0b0d778c267c41e3a8fe370119f17b59332cd6b79cc07e217", Pod:"csi-node-driver-65xbg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6867527523b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 10 00:18:20.923419 containerd[1448]: 2025-09-10 00:18:20.885 [INFO][6002] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="718b009935c17339c8db7c4995f78c36640d04e24a57d704d800e7bfb8296b52"
Sep 10 00:18:20.923419 containerd[1448]: 2025-09-10 00:18:20.885 [INFO][6002] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="718b009935c17339c8db7c4995f78c36640d04e24a57d704d800e7bfb8296b52" iface="eth0" netns=""
Sep 10 00:18:20.923419 containerd[1448]: 2025-09-10 00:18:20.885 [INFO][6002] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="718b009935c17339c8db7c4995f78c36640d04e24a57d704d800e7bfb8296b52"
Sep 10 00:18:20.923419 containerd[1448]: 2025-09-10 00:18:20.885 [INFO][6002] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="718b009935c17339c8db7c4995f78c36640d04e24a57d704d800e7bfb8296b52"
Sep 10 00:18:20.923419 containerd[1448]: 2025-09-10 00:18:20.905 [INFO][6010] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="718b009935c17339c8db7c4995f78c36640d04e24a57d704d800e7bfb8296b52" HandleID="k8s-pod-network.718b009935c17339c8db7c4995f78c36640d04e24a57d704d800e7bfb8296b52" Workload="localhost-k8s-csi--node--driver--65xbg-eth0"
Sep 10 00:18:20.923419 containerd[1448]: 2025-09-10 00:18:20.905 [INFO][6010] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 10 00:18:20.923419 containerd[1448]: 2025-09-10 00:18:20.905 [INFO][6010] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 10 00:18:20.923419 containerd[1448]: 2025-09-10 00:18:20.916 [WARNING][6010] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="718b009935c17339c8db7c4995f78c36640d04e24a57d704d800e7bfb8296b52" HandleID="k8s-pod-network.718b009935c17339c8db7c4995f78c36640d04e24a57d704d800e7bfb8296b52" Workload="localhost-k8s-csi--node--driver--65xbg-eth0"
Sep 10 00:18:20.923419 containerd[1448]: 2025-09-10 00:18:20.916 [INFO][6010] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="718b009935c17339c8db7c4995f78c36640d04e24a57d704d800e7bfb8296b52" HandleID="k8s-pod-network.718b009935c17339c8db7c4995f78c36640d04e24a57d704d800e7bfb8296b52" Workload="localhost-k8s-csi--node--driver--65xbg-eth0"
Sep 10 00:18:20.923419 containerd[1448]: 2025-09-10 00:18:20.918 [INFO][6010] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 10 00:18:20.923419 containerd[1448]: 2025-09-10 00:18:20.921 [INFO][6002] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="718b009935c17339c8db7c4995f78c36640d04e24a57d704d800e7bfb8296b52"
Sep 10 00:18:20.924171 containerd[1448]: time="2025-09-10T00:18:20.923463062Z" level=info msg="TearDown network for sandbox \"718b009935c17339c8db7c4995f78c36640d04e24a57d704d800e7bfb8296b52\" successfully"
Sep 10 00:18:20.924171 containerd[1448]: time="2025-09-10T00:18:20.923487822Z" level=info msg="StopPodSandbox for \"718b009935c17339c8db7c4995f78c36640d04e24a57d704d800e7bfb8296b52\" returns successfully"
Sep 10 00:18:20.924946 containerd[1448]: time="2025-09-10T00:18:20.924478622Z" level=info msg="RemovePodSandbox for \"718b009935c17339c8db7c4995f78c36640d04e24a57d704d800e7bfb8296b52\""
Sep 10 00:18:20.924946 containerd[1448]: time="2025-09-10T00:18:20.924520662Z" level=info msg="Forcibly stopping sandbox \"718b009935c17339c8db7c4995f78c36640d04e24a57d704d800e7bfb8296b52\""
Sep 10 00:18:20.997059 containerd[1448]: 2025-09-10 00:18:20.962 [WARNING][6029] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="718b009935c17339c8db7c4995f78c36640d04e24a57d704d800e7bfb8296b52" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--65xbg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cc428a56-c099-4362-852b-dab9e5d9f7b7", ResourceVersion:"1206", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 17, 43, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b9715d94334df2c0b0d778c267c41e3a8fe370119f17b59332cd6b79cc07e217", Pod:"csi-node-driver-65xbg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6867527523b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 10 00:18:20.997059 containerd[1448]: 2025-09-10 00:18:20.962 [INFO][6029] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="718b009935c17339c8db7c4995f78c36640d04e24a57d704d800e7bfb8296b52"
Sep 10 00:18:20.997059 containerd[1448]: 2025-09-10 00:18:20.962 [INFO][6029] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="718b009935c17339c8db7c4995f78c36640d04e24a57d704d800e7bfb8296b52" iface="eth0" netns=""
Sep 10 00:18:20.997059 containerd[1448]: 2025-09-10 00:18:20.962 [INFO][6029] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="718b009935c17339c8db7c4995f78c36640d04e24a57d704d800e7bfb8296b52"
Sep 10 00:18:20.997059 containerd[1448]: 2025-09-10 00:18:20.962 [INFO][6029] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="718b009935c17339c8db7c4995f78c36640d04e24a57d704d800e7bfb8296b52"
Sep 10 00:18:20.997059 containerd[1448]: 2025-09-10 00:18:20.983 [INFO][6038] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="718b009935c17339c8db7c4995f78c36640d04e24a57d704d800e7bfb8296b52" HandleID="k8s-pod-network.718b009935c17339c8db7c4995f78c36640d04e24a57d704d800e7bfb8296b52" Workload="localhost-k8s-csi--node--driver--65xbg-eth0"
Sep 10 00:18:20.997059 containerd[1448]: 2025-09-10 00:18:20.983 [INFO][6038] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 10 00:18:20.997059 containerd[1448]: 2025-09-10 00:18:20.983 [INFO][6038] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 10 00:18:20.997059 containerd[1448]: 2025-09-10 00:18:20.992 [WARNING][6038] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="718b009935c17339c8db7c4995f78c36640d04e24a57d704d800e7bfb8296b52" HandleID="k8s-pod-network.718b009935c17339c8db7c4995f78c36640d04e24a57d704d800e7bfb8296b52" Workload="localhost-k8s-csi--node--driver--65xbg-eth0"
Sep 10 00:18:20.997059 containerd[1448]: 2025-09-10 00:18:20.992 [INFO][6038] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="718b009935c17339c8db7c4995f78c36640d04e24a57d704d800e7bfb8296b52" HandleID="k8s-pod-network.718b009935c17339c8db7c4995f78c36640d04e24a57d704d800e7bfb8296b52" Workload="localhost-k8s-csi--node--driver--65xbg-eth0"
Sep 10 00:18:20.997059 containerd[1448]: 2025-09-10 00:18:20.993 [INFO][6038] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 10 00:18:20.997059 containerd[1448]: 2025-09-10 00:18:20.995 [INFO][6029] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="718b009935c17339c8db7c4995f78c36640d04e24a57d704d800e7bfb8296b52"
Sep 10 00:18:20.997059 containerd[1448]: time="2025-09-10T00:18:20.996985948Z" level=info msg="TearDown network for sandbox \"718b009935c17339c8db7c4995f78c36640d04e24a57d704d800e7bfb8296b52\" successfully"
Sep 10 00:18:21.000859 containerd[1448]: time="2025-09-10T00:18:21.000728508Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"718b009935c17339c8db7c4995f78c36640d04e24a57d704d800e7bfb8296b52\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 10 00:18:21.000988 containerd[1448]: time="2025-09-10T00:18:21.000890868Z" level=info msg="RemovePodSandbox \"718b009935c17339c8db7c4995f78c36640d04e24a57d704d800e7bfb8296b52\" returns successfully"
Sep 10 00:18:21.001799 containerd[1448]: time="2025-09-10T00:18:21.001563988Z" level=info msg="StopPodSandbox for \"c926cec19e0b2deb730a9bb6354c1568bbfabdb70a7d55cc462fe937d639cfcc\""
Sep 10 00:18:21.067879 containerd[1448]: 2025-09-10 00:18:21.035 [WARNING][6057] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c926cec19e0b2deb730a9bb6354c1568bbfabdb70a7d55cc462fe937d639cfcc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--njj6z-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"48ece463-fbe3-4a32-8fd2-523723a890ae", ResourceVersion:"1193", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 17, 43, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1701ed35cffea5e1aea19de87473fd52178657c9f71650207637ad87f62b4ad8", Pod:"goldmane-54d579b49d-njj6z", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9949c91eb48", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 10 00:18:21.067879 containerd[1448]: 2025-09-10 00:18:21.035 [INFO][6057] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c926cec19e0b2deb730a9bb6354c1568bbfabdb70a7d55cc462fe937d639cfcc"
Sep 10 00:18:21.067879 containerd[1448]: 2025-09-10 00:18:21.035 [INFO][6057] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c926cec19e0b2deb730a9bb6354c1568bbfabdb70a7d55cc462fe937d639cfcc" iface="eth0" netns=""
Sep 10 00:18:21.067879 containerd[1448]: 2025-09-10 00:18:21.035 [INFO][6057] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c926cec19e0b2deb730a9bb6354c1568bbfabdb70a7d55cc462fe937d639cfcc"
Sep 10 00:18:21.067879 containerd[1448]: 2025-09-10 00:18:21.035 [INFO][6057] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c926cec19e0b2deb730a9bb6354c1568bbfabdb70a7d55cc462fe937d639cfcc"
Sep 10 00:18:21.067879 containerd[1448]: 2025-09-10 00:18:21.054 [INFO][6065] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c926cec19e0b2deb730a9bb6354c1568bbfabdb70a7d55cc462fe937d639cfcc" HandleID="k8s-pod-network.c926cec19e0b2deb730a9bb6354c1568bbfabdb70a7d55cc462fe937d639cfcc" Workload="localhost-k8s-goldmane--54d579b49d--njj6z-eth0"
Sep 10 00:18:21.067879 containerd[1448]: 2025-09-10 00:18:21.054 [INFO][6065] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 10 00:18:21.067879 containerd[1448]: 2025-09-10 00:18:21.054 [INFO][6065] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 10 00:18:21.067879 containerd[1448]: 2025-09-10 00:18:21.062 [WARNING][6065] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c926cec19e0b2deb730a9bb6354c1568bbfabdb70a7d55cc462fe937d639cfcc" HandleID="k8s-pod-network.c926cec19e0b2deb730a9bb6354c1568bbfabdb70a7d55cc462fe937d639cfcc" Workload="localhost-k8s-goldmane--54d579b49d--njj6z-eth0"
Sep 10 00:18:21.067879 containerd[1448]: 2025-09-10 00:18:21.063 [INFO][6065] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c926cec19e0b2deb730a9bb6354c1568bbfabdb70a7d55cc462fe937d639cfcc" HandleID="k8s-pod-network.c926cec19e0b2deb730a9bb6354c1568bbfabdb70a7d55cc462fe937d639cfcc" Workload="localhost-k8s-goldmane--54d579b49d--njj6z-eth0"
Sep 10 00:18:21.067879 containerd[1448]: 2025-09-10 00:18:21.064 [INFO][6065] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 10 00:18:21.067879 containerd[1448]: 2025-09-10 00:18:21.066 [INFO][6057] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c926cec19e0b2deb730a9bb6354c1568bbfabdb70a7d55cc462fe937d639cfcc"
Sep 10 00:18:21.068274 containerd[1448]: time="2025-09-10T00:18:21.067918634Z" level=info msg="TearDown network for sandbox \"c926cec19e0b2deb730a9bb6354c1568bbfabdb70a7d55cc462fe937d639cfcc\" successfully"
Sep 10 00:18:21.068274 containerd[1448]: time="2025-09-10T00:18:21.067942594Z" level=info msg="StopPodSandbox for \"c926cec19e0b2deb730a9bb6354c1568bbfabdb70a7d55cc462fe937d639cfcc\" returns successfully"
Sep 10 00:18:21.068424 containerd[1448]: time="2025-09-10T00:18:21.068399834Z" level=info msg="RemovePodSandbox for \"c926cec19e0b2deb730a9bb6354c1568bbfabdb70a7d55cc462fe937d639cfcc\""
Sep 10 00:18:21.068456 containerd[1448]: time="2025-09-10T00:18:21.068434514Z" level=info msg="Forcibly stopping sandbox \"c926cec19e0b2deb730a9bb6354c1568bbfabdb70a7d55cc462fe937d639cfcc\""
Sep 10 00:18:21.132526 containerd[1448]: 2025-09-10 00:18:21.099 [WARNING][6083] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c926cec19e0b2deb730a9bb6354c1568bbfabdb70a7d55cc462fe937d639cfcc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--njj6z-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"48ece463-fbe3-4a32-8fd2-523723a890ae", ResourceVersion:"1193", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 17, 43, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1701ed35cffea5e1aea19de87473fd52178657c9f71650207637ad87f62b4ad8", Pod:"goldmane-54d579b49d-njj6z", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9949c91eb48", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 10 00:18:21.132526 containerd[1448]: 2025-09-10 00:18:21.099 [INFO][6083] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c926cec19e0b2deb730a9bb6354c1568bbfabdb70a7d55cc462fe937d639cfcc"
Sep 10 00:18:21.132526 containerd[1448]: 2025-09-10 00:18:21.099 [INFO][6083] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c926cec19e0b2deb730a9bb6354c1568bbfabdb70a7d55cc462fe937d639cfcc" iface="eth0" netns=""
Sep 10 00:18:21.132526 containerd[1448]: 2025-09-10 00:18:21.099 [INFO][6083] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c926cec19e0b2deb730a9bb6354c1568bbfabdb70a7d55cc462fe937d639cfcc"
Sep 10 00:18:21.132526 containerd[1448]: 2025-09-10 00:18:21.099 [INFO][6083] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c926cec19e0b2deb730a9bb6354c1568bbfabdb70a7d55cc462fe937d639cfcc"
Sep 10 00:18:21.132526 containerd[1448]: 2025-09-10 00:18:21.116 [INFO][6092] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c926cec19e0b2deb730a9bb6354c1568bbfabdb70a7d55cc462fe937d639cfcc" HandleID="k8s-pod-network.c926cec19e0b2deb730a9bb6354c1568bbfabdb70a7d55cc462fe937d639cfcc" Workload="localhost-k8s-goldmane--54d579b49d--njj6z-eth0"
Sep 10 00:18:21.132526 containerd[1448]: 2025-09-10 00:18:21.117 [INFO][6092] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 10 00:18:21.132526 containerd[1448]: 2025-09-10 00:18:21.117 [INFO][6092] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 10 00:18:21.132526 containerd[1448]: 2025-09-10 00:18:21.125 [WARNING][6092] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c926cec19e0b2deb730a9bb6354c1568bbfabdb70a7d55cc462fe937d639cfcc" HandleID="k8s-pod-network.c926cec19e0b2deb730a9bb6354c1568bbfabdb70a7d55cc462fe937d639cfcc" Workload="localhost-k8s-goldmane--54d579b49d--njj6z-eth0"
Sep 10 00:18:21.132526 containerd[1448]: 2025-09-10 00:18:21.125 [INFO][6092] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c926cec19e0b2deb730a9bb6354c1568bbfabdb70a7d55cc462fe937d639cfcc" HandleID="k8s-pod-network.c926cec19e0b2deb730a9bb6354c1568bbfabdb70a7d55cc462fe937d639cfcc" Workload="localhost-k8s-goldmane--54d579b49d--njj6z-eth0"
Sep 10 00:18:21.132526 containerd[1448]: 2025-09-10 00:18:21.127 [INFO][6092] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 10 00:18:21.132526 containerd[1448]: 2025-09-10 00:18:21.129 [INFO][6083] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c926cec19e0b2deb730a9bb6354c1568bbfabdb70a7d55cc462fe937d639cfcc"
Sep 10 00:18:21.132526 containerd[1448]: time="2025-09-10T00:18:21.131273199Z" level=info msg="TearDown network for sandbox \"c926cec19e0b2deb730a9bb6354c1568bbfabdb70a7d55cc462fe937d639cfcc\" successfully"
Sep 10 00:18:21.134376 containerd[1448]: time="2025-09-10T00:18:21.134346439Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c926cec19e0b2deb730a9bb6354c1568bbfabdb70a7d55cc462fe937d639cfcc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 10 00:18:21.134513 containerd[1448]: time="2025-09-10T00:18:21.134496879Z" level=info msg="RemovePodSandbox \"c926cec19e0b2deb730a9bb6354c1568bbfabdb70a7d55cc462fe937d639cfcc\" returns successfully"
Sep 10 00:18:24.488589 systemd[1]: Started sshd@17-10.0.0.124:22-10.0.0.1:59214.service - OpenSSH per-connection server daemon (10.0.0.1:59214).
Sep 10 00:18:24.545670 sshd[6103]: Accepted publickey for core from 10.0.0.1 port 59214 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o
Sep 10 00:18:24.547232 sshd[6103]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:18:24.552303 systemd-logind[1423]: New session 18 of user core.
Sep 10 00:18:24.564956 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 10 00:18:24.886432 sshd[6103]: pam_unix(sshd:session): session closed for user core
Sep 10 00:18:24.890337 systemd[1]: sshd@17-10.0.0.124:22-10.0.0.1:59214.service: Deactivated successfully.
Sep 10 00:18:24.892458 systemd[1]: session-18.scope: Deactivated successfully.
Sep 10 00:18:24.893613 systemd-logind[1423]: Session 18 logged out. Waiting for processes to exit.
Sep 10 00:18:24.894718 systemd-logind[1423]: Removed session 18.
Sep 10 00:18:29.899963 systemd[1]: Started sshd@18-10.0.0.124:22-10.0.0.1:59216.service - OpenSSH per-connection server daemon (10.0.0.1:59216).
Sep 10 00:18:29.937797 sshd[6122]: Accepted publickey for core from 10.0.0.1 port 59216 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o
Sep 10 00:18:29.939018 sshd[6122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:18:29.943657 systemd-logind[1423]: New session 19 of user core.
Sep 10 00:18:29.957972 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 10 00:18:30.137135 sshd[6122]: pam_unix(sshd:session): session closed for user core
Sep 10 00:18:30.143852 systemd[1]: sshd@18-10.0.0.124:22-10.0.0.1:59216.service: Deactivated successfully.
Sep 10 00:18:30.147736 systemd[1]: session-19.scope: Deactivated successfully.
Sep 10 00:18:30.148912 systemd-logind[1423]: Session 19 logged out. Waiting for processes to exit.
Sep 10 00:18:30.150252 systemd-logind[1423]: Removed session 19.
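
The containerd/Calico records above repeat one teardown pattern per sandbox: acquire the host-wide IPAM lock, try to release the pod's address by its handle ID, log a WARNING and fall back to releasing by workload ID when the address is already gone, then drop the lock and finish teardown. The following is a minimal Go sketch of that control flow only; the types, fields, and function names are illustrative stand-ins, not Calico's actual IPAM API.

package main

import (
	"fmt"
	"sync"
)

// toyIPAM stands in for an IPAM store; hostLock models the
// "host-wide IPAM lock" seen in the log (assumed toy structure).
type toyIPAM struct {
	hostLock   sync.Mutex
	byHandle   map[string]string // handle ID -> allocated address
	byWorkload map[string]string // workload endpoint name -> handle ID
}

// releaseForSandbox mirrors the logged sequence: lock, try the handle
// ID, warn and fall back to the workload ID, then unlock.
func (i *toyIPAM) releaseForSandbox(handleID, workload string) {
	fmt.Println("About to acquire host-wide IPAM lock.")
	i.hostLock.Lock()
	fmt.Println("Acquired host-wide IPAM lock.")
	defer func() {
		i.hostLock.Unlock()
		fmt.Println("Released host-wide IPAM lock.")
	}()

	// Fast path: the address is still recorded under its handle ID.
	if addr, ok := i.byHandle[handleID]; ok {
		delete(i.byHandle, handleID)
		fmt.Printf("Released %s using handleID %q\n", addr, handleID)
		return
	}

	// This branch corresponds to the WARNING at ipam_plugin.go 429:
	// the address was already released, so the request is ignored and
	// the fallback release by workload ID runs instead.
	fmt.Printf("WARNING: asked to release address but it doesn't exist. Ignoring HandleID=%q\n", handleID)
	fmt.Printf("Releasing address using workloadID Workload=%q\n", workload)
	if h, ok := i.byWorkload[workload]; ok {
		delete(i.byHandle, h)
		delete(i.byWorkload, workload)
	}
}

func main() {
	ipam := &toyIPAM{
		byHandle:   map[string]string{},
		byWorkload: map[string]string{},
	}
	// The sandbox's address was released by an earlier teardown, so the
	// handle lookup fails and the fallback path runs, as in the log.
	ipam.releaseForSandbox(
		"k8s-pod-network.718b009935c17339c8db7c4995f78c36640d04e24a57d704d800e7bfb8296b52",
		"localhost-k8s-csi--node--driver--65xbg-eth0",
	)
}

The point of the handle-then-workload fallback is idempotency: a RemovePodSandbox can be retried, or forced as in the "Forcibly stopping sandbox" records, without failing just because a previous attempt already released the address.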