Sep 5 00:31:04.856937 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 5 00:31:04.856958 kernel: Linux version 6.6.103-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Sep 4 22:50:35 -00 2025
Sep 5 00:31:04.856969 kernel: KASLR enabled
Sep 5 00:31:04.856975 kernel: efi: EFI v2.7 by EDK II
Sep 5 00:31:04.856981 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Sep 5 00:31:04.856986 kernel: random: crng init done
Sep 5 00:31:04.856993 kernel: ACPI: Early table checksum verification disabled
Sep 5 00:31:04.856999 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Sep 5 00:31:04.857005 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 5 00:31:04.857013 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:31:04.857019 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:31:04.857025 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:31:04.857030 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:31:04.857036 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:31:04.857044 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:31:04.857052 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:31:04.857058 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:31:04.857065 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:31:04.857071 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 5 00:31:04.857077 kernel: NUMA: Failed to initialise from firmware
Sep 5 00:31:04.857084 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 5 00:31:04.857090 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Sep 5 00:31:04.857096 kernel: Zone ranges:
Sep 5 00:31:04.857102 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Sep 5 00:31:04.857108 kernel: DMA32 empty
Sep 5 00:31:04.857115 kernel: Normal empty
Sep 5 00:31:04.857122 kernel: Movable zone start for each node
Sep 5 00:31:04.857128 kernel: Early memory node ranges
Sep 5 00:31:04.857134 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Sep 5 00:31:04.857140 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Sep 5 00:31:04.857146 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Sep 5 00:31:04.857153 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Sep 5 00:31:04.857159 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Sep 5 00:31:04.857165 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Sep 5 00:31:04.857171 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Sep 5 00:31:04.857177 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 5 00:31:04.857184 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 5 00:31:04.857191 kernel: psci: probing for conduit method from ACPI.
Sep 5 00:31:04.857198 kernel: psci: PSCIv1.1 detected in firmware.
Sep 5 00:31:04.857204 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 5 00:31:04.857213 kernel: psci: Trusted OS migration not required
Sep 5 00:31:04.857219 kernel: psci: SMC Calling Convention v1.1
Sep 5 00:31:04.857226 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 5 00:31:04.857234 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Sep 5 00:31:04.857241 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Sep 5 00:31:04.857247 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 5 00:31:04.857254 kernel: Detected PIPT I-cache on CPU0
Sep 5 00:31:04.857261 kernel: CPU features: detected: GIC system register CPU interface
Sep 5 00:31:04.857267 kernel: CPU features: detected: Hardware dirty bit management
Sep 5 00:31:04.857274 kernel: CPU features: detected: Spectre-v4
Sep 5 00:31:04.857280 kernel: CPU features: detected: Spectre-BHB
Sep 5 00:31:04.857287 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 5 00:31:04.857294 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 5 00:31:04.857301 kernel: CPU features: detected: ARM erratum 1418040
Sep 5 00:31:04.857308 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 5 00:31:04.857315 kernel: alternatives: applying boot alternatives
Sep 5 00:31:04.857323 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=74b18a518d158648275add16e3ab4f37e237ff7b3b2938818abfe7ffe97d585a
Sep 5 00:31:04.857330 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 5 00:31:04.857337 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 5 00:31:04.857343 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 5 00:31:04.857350 kernel: Fallback order for Node 0: 0
Sep 5 00:31:04.857357 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Sep 5 00:31:04.857363 kernel: Policy zone: DMA
Sep 5 00:31:04.857370 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 5 00:31:04.857377 kernel: software IO TLB: area num 4.
Sep 5 00:31:04.857384 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Sep 5 00:31:04.857391 kernel: Memory: 2386404K/2572288K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 185884K reserved, 0K cma-reserved)
Sep 5 00:31:04.857398 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 5 00:31:04.857405 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 5 00:31:04.857412 kernel: rcu: RCU event tracing is enabled.
Sep 5 00:31:04.857419 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 5 00:31:04.857426 kernel: Trampoline variant of Tasks RCU enabled.
Sep 5 00:31:04.857432 kernel: Tracing variant of Tasks RCU enabled.
Sep 5 00:31:04.857439 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 5 00:31:04.857446 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 5 00:31:04.857454 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 5 00:31:04.857460 kernel: GICv3: 256 SPIs implemented
Sep 5 00:31:04.857467 kernel: GICv3: 0 Extended SPIs implemented
Sep 5 00:31:04.857474 kernel: Root IRQ handler: gic_handle_irq
Sep 5 00:31:04.857481 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 5 00:31:04.857487 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 5 00:31:04.857494 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 5 00:31:04.857501 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Sep 5 00:31:04.857508 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Sep 5 00:31:04.857515 kernel: GICv3: using LPI property table @0x00000000400f0000
Sep 5 00:31:04.857521 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Sep 5 00:31:04.857528 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 5 00:31:04.857536 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 5 00:31:04.857543 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 5 00:31:04.857550 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 5 00:31:04.857556 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 5 00:31:04.857563 kernel: arm-pv: using stolen time PV
Sep 5 00:31:04.857570 kernel: Console: colour dummy device 80x25
Sep 5 00:31:04.857577 kernel: ACPI: Core revision 20230628
Sep 5 00:31:04.857584 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 5 00:31:04.857591 kernel: pid_max: default: 32768 minimum: 301
Sep 5 00:31:04.857598 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 5 00:31:04.857606 kernel: landlock: Up and running.
Sep 5 00:31:04.857618 kernel: SELinux: Initializing.
Sep 5 00:31:04.857625 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 5 00:31:04.857631 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 5 00:31:04.857638 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 5 00:31:04.857646 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 5 00:31:04.857652 kernel: rcu: Hierarchical SRCU implementation.
Sep 5 00:31:04.857659 kernel: rcu: Max phase no-delay instances is 400.
Sep 5 00:31:04.857666 kernel: Platform MSI: ITS@0x8080000 domain created
Sep 5 00:31:04.857674 kernel: PCI/MSI: ITS@0x8080000 domain created
Sep 5 00:31:04.857681 kernel: Remapping and enabling EFI services.
Sep 5 00:31:04.857688 kernel: smp: Bringing up secondary CPUs ...
Sep 5 00:31:04.857695 kernel: Detected PIPT I-cache on CPU1
Sep 5 00:31:04.857701 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 5 00:31:04.857708 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Sep 5 00:31:04.857715 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 5 00:31:04.857731 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 5 00:31:04.857738 kernel: Detected PIPT I-cache on CPU2
Sep 5 00:31:04.857745 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 5 00:31:04.857755 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Sep 5 00:31:04.857762 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 5 00:31:04.857773 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 5 00:31:04.857781 kernel: Detected PIPT I-cache on CPU3
Sep 5 00:31:04.857788 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 5 00:31:04.857796 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Sep 5 00:31:04.857803 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 5 00:31:04.857810 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 5 00:31:04.857817 kernel: smp: Brought up 1 node, 4 CPUs
Sep 5 00:31:04.857826 kernel: SMP: Total of 4 processors activated.
Sep 5 00:31:04.857833 kernel: CPU features: detected: 32-bit EL0 Support
Sep 5 00:31:04.857840 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 5 00:31:04.857847 kernel: CPU features: detected: Common not Private translations
Sep 5 00:31:04.857862 kernel: CPU features: detected: CRC32 instructions
Sep 5 00:31:04.857870 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 5 00:31:04.857878 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 5 00:31:04.857885 kernel: CPU features: detected: LSE atomic instructions
Sep 5 00:31:04.857894 kernel: CPU features: detected: Privileged Access Never
Sep 5 00:31:04.857901 kernel: CPU features: detected: RAS Extension Support
Sep 5 00:31:04.857908 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 5 00:31:04.857915 kernel: CPU: All CPU(s) started at EL1
Sep 5 00:31:04.857923 kernel: alternatives: applying system-wide alternatives
Sep 5 00:31:04.857930 kernel: devtmpfs: initialized
Sep 5 00:31:04.857937 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 5 00:31:04.857944 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 5 00:31:04.857951 kernel: pinctrl core: initialized pinctrl subsystem
Sep 5 00:31:04.857960 kernel: SMBIOS 3.0.0 present.
Sep 5 00:31:04.857967 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Sep 5 00:31:04.857974 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 5 00:31:04.857981 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 5 00:31:04.857988 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 5 00:31:04.857996 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 5 00:31:04.858003 kernel: audit: initializing netlink subsys (disabled)
Sep 5 00:31:04.858010 kernel: audit: type=2000 audit(0.023:1): state=initialized audit_enabled=0 res=1
Sep 5 00:31:04.858017 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 5 00:31:04.858026 kernel: cpuidle: using governor menu
Sep 5 00:31:04.858033 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 5 00:31:04.858040 kernel: ASID allocator initialised with 32768 entries
Sep 5 00:31:04.858047 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 5 00:31:04.858054 kernel: Serial: AMBA PL011 UART driver
Sep 5 00:31:04.858062 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 5 00:31:04.858069 kernel: Modules: 0 pages in range for non-PLT usage
Sep 5 00:31:04.858076 kernel: Modules: 509008 pages in range for PLT usage
Sep 5 00:31:04.858084 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 5 00:31:04.858092 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 5 00:31:04.858100 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 5 00:31:04.858107 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 5 00:31:04.858114 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 5 00:31:04.858121 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 5 00:31:04.858128 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 5 00:31:04.858136 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 5 00:31:04.858143 kernel: ACPI: Added _OSI(Module Device)
Sep 5 00:31:04.858150 kernel: ACPI: Added _OSI(Processor Device)
Sep 5 00:31:04.858158 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 5 00:31:04.858165 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 5 00:31:04.858172 kernel: ACPI: Interpreter enabled
Sep 5 00:31:04.858179 kernel: ACPI: Using GIC for interrupt routing
Sep 5 00:31:04.858187 kernel: ACPI: MCFG table detected, 1 entries
Sep 5 00:31:04.858194 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 5 00:31:04.858201 kernel: printk: console [ttyAMA0] enabled
Sep 5 00:31:04.858208 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 5 00:31:04.858339 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 5 00:31:04.858415 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 5 00:31:04.858480 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 5 00:31:04.858544 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 5 00:31:04.858607 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 5 00:31:04.858617 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 5 00:31:04.858624 kernel: PCI host bridge to bus 0000:00
Sep 5 00:31:04.858692 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 5 00:31:04.858766 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 5 00:31:04.858825 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 5 00:31:04.858922 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 5 00:31:04.859006 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Sep 5 00:31:04.859079 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Sep 5 00:31:04.859144 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Sep 5 00:31:04.859213 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Sep 5 00:31:04.859277 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 5 00:31:04.859342 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 5 00:31:04.859405 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Sep 5 00:31:04.859469 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Sep 5 00:31:04.859528 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 5 00:31:04.859584 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 5 00:31:04.859643 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 5 00:31:04.859652 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 5 00:31:04.859660 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 5 00:31:04.859667 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 5 00:31:04.859675 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 5 00:31:04.859682 kernel: iommu: Default domain type: Translated
Sep 5 00:31:04.859689 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 5 00:31:04.859696 kernel: efivars: Registered efivars operations
Sep 5 00:31:04.859704 kernel: vgaarb: loaded
Sep 5 00:31:04.859713 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 5 00:31:04.859727 kernel: VFS: Disk quotas dquot_6.6.0
Sep 5 00:31:04.859735 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 5 00:31:04.859742 kernel: pnp: PnP ACPI init
Sep 5 00:31:04.859819 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 5 00:31:04.859830 kernel: pnp: PnP ACPI: found 1 devices
Sep 5 00:31:04.859837 kernel: NET: Registered PF_INET protocol family
Sep 5 00:31:04.859844 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 5 00:31:04.859863 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 5 00:31:04.859872 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 5 00:31:04.859879 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 5 00:31:04.859886 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 5 00:31:04.859894 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 5 00:31:04.859901 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 5 00:31:04.859908 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 5 00:31:04.859916 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 5 00:31:04.859923 kernel: PCI: CLS 0 bytes, default 64
Sep 5 00:31:04.859933 kernel: kvm [1]: HYP mode not available
Sep 5 00:31:04.859940 kernel: Initialise system trusted keyrings
Sep 5 00:31:04.859947 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 5 00:31:04.859954 kernel: Key type asymmetric registered
Sep 5 00:31:04.859962 kernel: Asymmetric key parser 'x509' registered
Sep 5 00:31:04.859969 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 5 00:31:04.859976 kernel: io scheduler mq-deadline registered
Sep 5 00:31:04.859983 kernel: io scheduler kyber registered
Sep 5 00:31:04.859990 kernel: io scheduler bfq registered
Sep 5 00:31:04.859999 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 5 00:31:04.860006 kernel: ACPI: button: Power Button [PWRB]
Sep 5 00:31:04.860014 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 5 00:31:04.860086 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 5 00:31:04.860096 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 5 00:31:04.860103 kernel: thunder_xcv, ver 1.0
Sep 5 00:31:04.860111 kernel: thunder_bgx, ver 1.0
Sep 5 00:31:04.860118 kernel: nicpf, ver 1.0
Sep 5 00:31:04.860125 kernel: nicvf, ver 1.0
Sep 5 00:31:04.860200 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 5 00:31:04.860262 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-05T00:31:04 UTC (1757032264)
Sep 5 00:31:04.860271 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 5 00:31:04.860279 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Sep 5 00:31:04.860286 kernel: watchdog: Delayed init of the lockup detector failed: -19
Sep 5 00:31:04.860293 kernel: watchdog: Hard watchdog permanently disabled
Sep 5 00:31:04.860301 kernel: NET: Registered PF_INET6 protocol family
Sep 5 00:31:04.860308 kernel: Segment Routing with IPv6
Sep 5 00:31:04.860318 kernel: In-situ OAM (IOAM) with IPv6
Sep 5 00:31:04.860325 kernel: NET: Registered PF_PACKET protocol family
Sep 5 00:31:04.860332 kernel: Key type dns_resolver registered
Sep 5 00:31:04.860339 kernel: registered taskstats version 1
Sep 5 00:31:04.860346 kernel: Loading compiled-in X.509 certificates
Sep 5 00:31:04.860354 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.103-flatcar: ff0f0c0ea2d5fe320cfcc368cee8225e09a20239'
Sep 5 00:31:04.860361 kernel: Key type .fscrypt registered
Sep 5 00:31:04.860369 kernel: Key type fscrypt-provisioning registered
Sep 5 00:31:04.860376 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 5 00:31:04.860386 kernel: ima: Allocated hash algorithm: sha1
Sep 5 00:31:04.860394 kernel: ima: No architecture policies found
Sep 5 00:31:04.860401 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 5 00:31:04.860423 kernel: clk: Disabling unused clocks
Sep 5 00:31:04.860431 kernel: Freeing unused kernel memory: 39424K
Sep 5 00:31:04.860438 kernel: Run /init as init process
Sep 5 00:31:04.860447 kernel: with arguments:
Sep 5 00:31:04.860454 kernel: /init
Sep 5 00:31:04.860461 kernel: with environment:
Sep 5 00:31:04.860469 kernel: HOME=/
Sep 5 00:31:04.860476 kernel: TERM=linux
Sep 5 00:31:04.860484 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 5 00:31:04.860494 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 5 00:31:04.860503 systemd[1]: Detected virtualization kvm.
Sep 5 00:31:04.860512 systemd[1]: Detected architecture arm64.
Sep 5 00:31:04.860519 systemd[1]: Running in initrd.
Sep 5 00:31:04.860528 systemd[1]: No hostname configured, using default hostname.
Sep 5 00:31:04.860536 systemd[1]: Hostname set to <localhost>.
Sep 5 00:31:04.860544 systemd[1]: Initializing machine ID from VM UUID.
Sep 5 00:31:04.860552 systemd[1]: Queued start job for default target initrd.target.
Sep 5 00:31:04.860560 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 5 00:31:04.860568 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 5 00:31:04.860576 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 5 00:31:04.860584 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 5 00:31:04.860594 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 5 00:31:04.860602 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 5 00:31:04.860612 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 5 00:31:04.860620 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 5 00:31:04.860628 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 5 00:31:04.860635 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 5 00:31:04.860643 systemd[1]: Reached target paths.target - Path Units.
Sep 5 00:31:04.860652 systemd[1]: Reached target slices.target - Slice Units.
Sep 5 00:31:04.860660 systemd[1]: Reached target swap.target - Swaps.
Sep 5 00:31:04.860668 systemd[1]: Reached target timers.target - Timer Units.
Sep 5 00:31:04.860676 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 5 00:31:04.860684 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 5 00:31:04.860692 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 5 00:31:04.860700 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 5 00:31:04.860708 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 5 00:31:04.860716 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 5 00:31:04.860733 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 5 00:31:04.860741 systemd[1]: Reached target sockets.target - Socket Units.
Sep 5 00:31:04.860749 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 5 00:31:04.860757 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 5 00:31:04.860765 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 5 00:31:04.860773 systemd[1]: Starting systemd-fsck-usr.service...
Sep 5 00:31:04.860781 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 5 00:31:04.860789 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 5 00:31:04.860798 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 5 00:31:04.860806 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 5 00:31:04.860814 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 5 00:31:04.860822 systemd[1]: Finished systemd-fsck-usr.service.
Sep 5 00:31:04.860831 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 5 00:31:04.860873 systemd-journald[237]: Collecting audit messages is disabled.
Sep 5 00:31:04.860893 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 00:31:04.860901 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 5 00:31:04.860909 systemd-journald[237]: Journal started
Sep 5 00:31:04.860930 systemd-journald[237]: Runtime Journal (/run/log/journal/3cad097962c74c22bb4d6f05d7bcd113) is 5.9M, max 47.3M, 41.4M free.
Sep 5 00:31:04.860964 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 5 00:31:04.852141 systemd-modules-load[238]: Inserted module 'overlay'
Sep 5 00:31:04.864895 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 5 00:31:04.864923 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 5 00:31:04.867538 systemd-modules-load[238]: Inserted module 'br_netfilter'
Sep 5 00:31:04.868300 kernel: Bridge firewalling registered
Sep 5 00:31:04.870180 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 5 00:31:04.877998 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 5 00:31:04.880193 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 5 00:31:04.884003 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 5 00:31:04.885197 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 5 00:31:04.887805 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 5 00:31:04.889324 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 5 00:31:04.892176 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 5 00:31:04.896570 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 5 00:31:04.899381 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 5 00:31:04.903109 dracut-cmdline[271]: dracut-dracut-053
Sep 5 00:31:04.905559 dracut-cmdline[271]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=74b18a518d158648275add16e3ab4f37e237ff7b3b2938818abfe7ffe97d585a
Sep 5 00:31:04.927319 systemd-resolved[282]: Positive Trust Anchors:
Sep 5 00:31:04.927338 systemd-resolved[282]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 5 00:31:04.927371 systemd-resolved[282]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 5 00:31:04.932142 systemd-resolved[282]: Defaulting to hostname 'linux'.
Sep 5 00:31:04.933082 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 5 00:31:04.935198 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 5 00:31:04.973882 kernel: SCSI subsystem initialized
Sep 5 00:31:04.978871 kernel: Loading iSCSI transport class v2.0-870.
Sep 5 00:31:04.985903 kernel: iscsi: registered transport (tcp)
Sep 5 00:31:04.998874 kernel: iscsi: registered transport (qla4xxx)
Sep 5 00:31:04.998900 kernel: QLogic iSCSI HBA Driver
Sep 5 00:31:05.039514 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 5 00:31:05.048018 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 5 00:31:05.062965 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 5 00:31:05.063013 kernel: device-mapper: uevent: version 1.0.3
Sep 5 00:31:05.063880 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 5 00:31:05.109886 kernel: raid6: neonx8 gen() 15760 MB/s
Sep 5 00:31:05.126871 kernel: raid6: neonx4 gen() 15673 MB/s
Sep 5 00:31:05.143868 kernel: raid6: neonx2 gen() 13230 MB/s
Sep 5 00:31:05.160880 kernel: raid6: neonx1 gen() 10517 MB/s
Sep 5 00:31:05.177871 kernel: raid6: int64x8 gen() 6960 MB/s
Sep 5 00:31:05.194881 kernel: raid6: int64x4 gen() 7338 MB/s
Sep 5 00:31:05.211878 kernel: raid6: int64x2 gen() 6130 MB/s
Sep 5 00:31:05.228883 kernel: raid6: int64x1 gen() 5056 MB/s
Sep 5 00:31:05.228911 kernel: raid6: using algorithm neonx8 gen() 15760 MB/s
Sep 5 00:31:05.245884 kernel: raid6: .... xor() 12053 MB/s, rmw enabled
Sep 5 00:31:05.245913 kernel: raid6: using neon recovery algorithm
Sep 5 00:31:05.250967 kernel: xor: measuring software checksum speed
Sep 5 00:31:05.251006 kernel: 8regs : 19783 MB/sec
Sep 5 00:31:05.252027 kernel: 32regs : 19650 MB/sec
Sep 5 00:31:05.252039 kernel: arm64_neon : 26998 MB/sec
Sep 5 00:31:05.252048 kernel: xor: using function: arm64_neon (26998 MB/sec)
Sep 5 00:31:05.300901 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 5 00:31:05.312087 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 5 00:31:05.321002 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 5 00:31:05.332014 systemd-udevd[461]: Using default interface naming scheme 'v255'.
Sep 5 00:31:05.335141 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 5 00:31:05.337503 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 5 00:31:05.352783 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation
Sep 5 00:31:05.380077 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 5 00:31:05.389040 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 5 00:31:05.430027 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 5 00:31:05.440012 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 5 00:31:05.457128 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 5 00:31:05.458436 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 5 00:31:05.460015 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 5 00:31:05.463217 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 5 00:31:05.472019 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 5 00:31:05.475560 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Sep 5 00:31:05.482943 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 5 00:31:05.479996 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 5 00:31:05.488156 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 5 00:31:05.488176 kernel: GPT:9289727 != 19775487
Sep 5 00:31:05.488185 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 5 00:31:05.488194 kernel: GPT:9289727 != 19775487
Sep 5 00:31:05.488203 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 5 00:31:05.488212 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 5 00:31:05.499910 kernel: BTRFS: device fsid 5d680510-9485-4285-abb3-c1615b7945ba devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (509)
Sep 5 00:31:05.503885 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (510)
Sep 5 00:31:05.509213 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 5 00:31:05.514424 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 5 00:31:05.515645 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 5 00:31:05.521313 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 5 00:31:05.526711 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 5 00:31:05.535161 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 5 00:31:05.536019 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 5 00:31:05.536085 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 5 00:31:05.540737 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 5 00:31:05.542934 disk-uuid[547]: Primary Header is updated.
Sep 5 00:31:05.542934 disk-uuid[547]: Secondary Entries is updated.
Sep 5 00:31:05.542934 disk-uuid[547]: Secondary Header is updated.
Sep 5 00:31:05.549329 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 5 00:31:05.545725 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 5 00:31:05.545807 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 00:31:05.547202 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 5 00:31:05.549541 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 5 00:31:05.554724 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 5 00:31:05.557877 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 5 00:31:05.564930 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 00:31:05.573008 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 5 00:31:05.598970 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 5 00:31:06.558924 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 5 00:31:06.559503 disk-uuid[548]: The operation has completed successfully.
Sep 5 00:31:06.582332 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 5 00:31:06.582436 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 5 00:31:06.602010 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 5 00:31:06.604863 sh[576]: Success
Sep 5 00:31:06.615885 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Sep 5 00:31:06.642230 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 5 00:31:06.649319 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 5 00:31:06.652424 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 5 00:31:06.660875 kernel: BTRFS info (device dm-0): first mount of filesystem 5d680510-9485-4285-abb3-c1615b7945ba
Sep 5 00:31:06.660918 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 5 00:31:06.660929 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 5 00:31:06.662293 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 5 00:31:06.662306 kernel: BTRFS info (device dm-0): using free space tree
Sep 5 00:31:06.666198 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 5 00:31:06.667331 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 5 00:31:06.677050 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 5 00:31:06.678501 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 5 00:31:06.684987 kernel: BTRFS info (device vda6): first mount of filesystem 0f8ab5b2-aa34-4ffc-b0b3-dae253182924
Sep 5 00:31:06.685029 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 5 00:31:06.685039 kernel: BTRFS info (device vda6): using free space tree
Sep 5 00:31:06.687879 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 5 00:31:06.694217 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 5 00:31:06.695874 kernel: BTRFS info (device vda6): last unmount of filesystem 0f8ab5b2-aa34-4ffc-b0b3-dae253182924
Sep 5 00:31:06.701330 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 5 00:31:06.715044 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 5 00:31:06.772758 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 5 00:31:06.781284 ignition[661]: Ignition 2.19.0
Sep 5 00:31:06.781295 ignition[661]: Stage: fetch-offline
Sep 5 00:31:06.783109 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 5 00:31:06.781329 ignition[661]: no configs at "/usr/lib/ignition/base.d"
Sep 5 00:31:06.781338 ignition[661]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 00:31:06.781496 ignition[661]: parsed url from cmdline: ""
Sep 5 00:31:06.781499 ignition[661]: no config URL provided
Sep 5 00:31:06.781503 ignition[661]: reading system config file "/usr/lib/ignition/user.ign"
Sep 5 00:31:06.781510 ignition[661]: no config at "/usr/lib/ignition/user.ign"
Sep 5 00:31:06.781533 ignition[661]: op(1): [started] loading QEMU firmware config module
Sep 5 00:31:06.781538 ignition[661]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 5 00:31:06.786596 ignition[661]: op(1): [finished] loading QEMU firmware config module
Sep 5 00:31:06.786616 ignition[661]: QEMU firmware config was not found. Ignoring...
Sep 5 00:31:06.806244 systemd-networkd[770]: lo: Link UP
Sep 5 00:31:06.806258 systemd-networkd[770]: lo: Gained carrier
Sep 5 00:31:06.807264 systemd-networkd[770]: Enumeration completed
Sep 5 00:31:06.807353 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 5 00:31:06.808806 systemd[1]: Reached target network.target - Network.
Sep 5 00:31:06.810507 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 5 00:31:06.810510 systemd-networkd[770]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 5 00:31:06.811217 systemd-networkd[770]: eth0: Link UP
Sep 5 00:31:06.811220 systemd-networkd[770]: eth0: Gained carrier
Sep 5 00:31:06.811226 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 5 00:31:06.825900 systemd-networkd[770]: eth0: DHCPv4 address 10.0.0.114/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 5 00:31:06.842515 ignition[661]: parsing config with SHA512: a8a1ce4a0af8724550ead8effc02afcb58b7521425804ae94b4222a8d8d5fcc621620c3563a7966b86b0f11bbf742faa3e4b62ce2f0241a363bf22eb15d01f2e
Sep 5 00:31:06.846402 unknown[661]: fetched base config from "system"
Sep 5 00:31:06.846412 unknown[661]: fetched user config from "qemu"
Sep 5 00:31:06.846812 ignition[661]: fetch-offline: fetch-offline passed
Sep 5 00:31:06.848402 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 5 00:31:06.846892 ignition[661]: Ignition finished successfully
Sep 5 00:31:06.849952 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 5 00:31:06.862063 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 5 00:31:06.872910 ignition[775]: Ignition 2.19.0
Sep 5 00:31:06.872919 ignition[775]: Stage: kargs
Sep 5 00:31:06.873084 ignition[775]: no configs at "/usr/lib/ignition/base.d"
Sep 5 00:31:06.873094 ignition[775]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 00:31:06.873961 ignition[775]: kargs: kargs passed
Sep 5 00:31:06.874009 ignition[775]: Ignition finished successfully
Sep 5 00:31:06.876205 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 5 00:31:06.890039 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 5 00:31:06.899587 ignition[785]: Ignition 2.19.0
Sep 5 00:31:06.899597 ignition[785]: Stage: disks
Sep 5 00:31:06.899785 ignition[785]: no configs at "/usr/lib/ignition/base.d"
Sep 5 00:31:06.899794 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 00:31:06.900653 ignition[785]: disks: disks passed
Sep 5 00:31:06.900715 ignition[785]: Ignition finished successfully
Sep 5 00:31:06.903132 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 5 00:31:06.904695 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 5 00:31:06.905901 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 5 00:31:06.907596 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 5 00:31:06.909143 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 5 00:31:06.910629 systemd[1]: Reached target basic.target - Basic System.
Sep 5 00:31:06.920054 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 5 00:31:06.929844 systemd-fsck[795]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 5 00:31:06.933898 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 5 00:31:06.938978 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 5 00:31:06.981884 kernel: EXT4-fs (vda9): mounted filesystem a958ad86-437c-4ed7-b041-6695bea80f66 r/w with ordered data mode. Quota mode: none.
Sep 5 00:31:06.982142 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 5 00:31:06.983232 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 5 00:31:06.995956 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 5 00:31:06.998085 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 5 00:31:06.998950 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 5 00:31:06.998991 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 5 00:31:06.999014 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 5 00:31:07.004876 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 5 00:31:07.006573 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 5 00:31:07.010378 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (803)
Sep 5 00:31:07.010414 kernel: BTRFS info (device vda6): first mount of filesystem 0f8ab5b2-aa34-4ffc-b0b3-dae253182924
Sep 5 00:31:07.010425 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 5 00:31:07.010441 kernel: BTRFS info (device vda6): using free space tree
Sep 5 00:31:07.014889 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 5 00:31:07.016175 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 5 00:31:07.043518 initrd-setup-root[827]: cut: /sysroot/etc/passwd: No such file or directory
Sep 5 00:31:07.047641 initrd-setup-root[834]: cut: /sysroot/etc/group: No such file or directory
Sep 5 00:31:07.051833 initrd-setup-root[841]: cut: /sysroot/etc/shadow: No such file or directory
Sep 5 00:31:07.054676 initrd-setup-root[848]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 5 00:31:07.124583 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 5 00:31:07.135964 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 5 00:31:07.137392 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 5 00:31:07.142870 kernel: BTRFS info (device vda6): last unmount of filesystem 0f8ab5b2-aa34-4ffc-b0b3-dae253182924
Sep 5 00:31:07.156917 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 5 00:31:07.160749 ignition[915]: INFO : Ignition 2.19.0
Sep 5 00:31:07.160749 ignition[915]: INFO : Stage: mount
Sep 5 00:31:07.162133 ignition[915]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 5 00:31:07.162133 ignition[915]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 00:31:07.162133 ignition[915]: INFO : mount: mount passed
Sep 5 00:31:07.162133 ignition[915]: INFO : Ignition finished successfully
Sep 5 00:31:07.164891 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 5 00:31:07.174976 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 5 00:31:07.660256 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 5 00:31:07.669053 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 5 00:31:07.675234 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (929)
Sep 5 00:31:07.675268 kernel: BTRFS info (device vda6): first mount of filesystem 0f8ab5b2-aa34-4ffc-b0b3-dae253182924
Sep 5 00:31:07.675988 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 5 00:31:07.676004 kernel: BTRFS info (device vda6): using free space tree
Sep 5 00:31:07.678882 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 5 00:31:07.679601 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 5 00:31:07.695389 ignition[946]: INFO : Ignition 2.19.0
Sep 5 00:31:07.695389 ignition[946]: INFO : Stage: files
Sep 5 00:31:07.696773 ignition[946]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 5 00:31:07.696773 ignition[946]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 00:31:07.696773 ignition[946]: DEBUG : files: compiled without relabeling support, skipping
Sep 5 00:31:07.699791 ignition[946]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 5 00:31:07.699791 ignition[946]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 5 00:31:07.699791 ignition[946]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 5 00:31:07.699791 ignition[946]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 5 00:31:07.704101 ignition[946]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 5 00:31:07.704101 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 5 00:31:07.704101 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Sep 5 00:31:07.699832 unknown[946]: wrote ssh authorized keys file for user: core
Sep 5 00:31:08.099025 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 5 00:31:08.208039 systemd-networkd[770]: eth0: Gained IPv6LL
Sep 5 00:31:08.600354 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 5 00:31:08.602127 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 5 00:31:08.602127 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 5 00:31:08.602127 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 5 00:31:08.602127 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 5 00:31:08.602127 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 5 00:31:08.602127 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 5 00:31:08.602127 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 5 00:31:08.602127 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 5 00:31:08.602127 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 5 00:31:08.602127 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 5 00:31:08.602127 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 5 00:31:08.602127 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 5 00:31:08.602127 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 5 00:31:08.602127 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Sep 5 00:31:08.979956 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 5 00:31:09.396870 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 5 00:31:09.396870 ignition[946]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 5 00:31:09.399756 ignition[946]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 5 00:31:09.399756 ignition[946]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 5 00:31:09.399756 ignition[946]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 5 00:31:09.399756 ignition[946]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Sep 5 00:31:09.399756 ignition[946]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 5 00:31:09.399756 ignition[946]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 5 00:31:09.399756 ignition[946]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Sep 5 00:31:09.399756 ignition[946]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Sep 5 00:31:09.415591 ignition[946]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 5 00:31:09.419254 ignition[946]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 5 00:31:09.421498 ignition[946]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 5 00:31:09.421498 ignition[946]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Sep 5 00:31:09.421498 ignition[946]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Sep 5 00:31:09.421498 ignition[946]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 5 00:31:09.421498 ignition[946]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 5 00:31:09.421498 ignition[946]: INFO : files: files passed
Sep 5 00:31:09.421498 ignition[946]: INFO : Ignition finished successfully
Sep 5 00:31:09.423072 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 5 00:31:09.436081 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 5 00:31:09.439036 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 5 00:31:09.441040 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 5 00:31:09.441884 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 5 00:31:09.446238 initrd-setup-root-after-ignition[974]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 5 00:31:09.448497 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 5 00:31:09.450021 initrd-setup-root-after-ignition[980]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 5 00:31:09.451164 initrd-setup-root-after-ignition[976]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 5 00:31:09.451955 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 5 00:31:09.453332 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 5 00:31:09.464007 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 5 00:31:09.482686 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 5 00:31:09.482799 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 5 00:31:09.485189 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 5 00:31:09.486534 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 5 00:31:09.487947 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 5 00:31:09.488671 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 5 00:31:09.503852 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 5 00:31:09.505935 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 5 00:31:09.518297 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 5 00:31:09.519292 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 5 00:31:09.520910 systemd[1]: Stopped target timers.target - Timer Units. Sep 5 00:31:09.522340 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 5 00:31:09.522457 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 5 00:31:09.524538 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 5 00:31:09.526129 systemd[1]: Stopped target basic.target - Basic System. Sep 5 00:31:09.527365 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 5 00:31:09.528721 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 5 00:31:09.530250 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 5 00:31:09.531753 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 5 00:31:09.533291 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 5 00:31:09.534887 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 5 00:31:09.536535 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 5 00:31:09.537884 systemd[1]: Stopped target swap.target - Swaps. Sep 5 00:31:09.539088 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 5 00:31:09.539203 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 5 00:31:09.541007 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 5 00:31:09.542593 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 5 00:31:09.544047 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 5 00:31:09.544142 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 5 00:31:09.545729 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 5 00:31:09.545838 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 5 00:31:09.548032 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 5 00:31:09.548145 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 5 00:31:09.549609 systemd[1]: Stopped target paths.target - Path Units. Sep 5 00:31:09.550774 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 5 00:31:09.554971 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 5 00:31:09.555952 systemd[1]: Stopped target slices.target - Slice Units. Sep 5 00:31:09.557609 systemd[1]: Stopped target sockets.target - Socket Units. Sep 5 00:31:09.558798 systemd[1]: iscsid.socket: Deactivated successfully. Sep 5 00:31:09.558896 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 5 00:31:09.560102 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 5 00:31:09.560184 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 5 00:31:09.561337 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 5 00:31:09.561442 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 5 00:31:09.562830 systemd[1]: ignition-files.service: Deactivated successfully. Sep 5 00:31:09.562947 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 5 00:31:09.571056 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Sep 5 00:31:09.571738 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 5 00:31:09.571880 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 5 00:31:09.577080 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 5 00:31:09.577760 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 5 00:31:09.577896 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 5 00:31:09.579405 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 5 00:31:09.579507 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 5 00:31:09.583562 ignition[1001]: INFO : Ignition 2.19.0 Sep 5 00:31:09.583562 ignition[1001]: INFO : Stage: umount Sep 5 00:31:09.583562 ignition[1001]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 5 00:31:09.583562 ignition[1001]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 5 00:31:09.585177 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 5 00:31:09.587356 ignition[1001]: INFO : umount: umount passed Sep 5 00:31:09.587356 ignition[1001]: INFO : Ignition finished successfully Sep 5 00:31:09.587650 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 5 00:31:09.589329 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 5 00:31:09.589403 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 5 00:31:09.591325 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 5 00:31:09.592397 systemd[1]: Stopped target network.target - Network. Sep 5 00:31:09.593721 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 5 00:31:09.593784 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 5 00:31:09.595790 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 5 00:31:09.595846 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 5 00:31:09.597271 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 5 00:31:09.597314 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 5 00:31:09.599048 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 5 00:31:09.599093 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 5 00:31:09.600752 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 5 00:31:09.602021 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 5 00:31:09.604257 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 5 00:31:09.604351 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 5 00:31:09.606617 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 5 00:31:09.606710 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 5 00:31:09.607918 systemd-networkd[770]: eth0: DHCPv6 lease lost Sep 5 00:31:09.609546 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 5 00:31:09.609645 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 5 00:31:09.611031 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 5 00:31:09.611066 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 5 00:31:09.618982 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 5 00:31:09.619646 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
Sep 5 00:31:09.619717 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 5 00:31:09.621284 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 5 00:31:09.621328 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 5 00:31:09.622780 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 5 00:31:09.622820 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 5 00:31:09.624575 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 5 00:31:09.634306 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 5 00:31:09.634418 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 5 00:31:09.639465 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 5 00:31:09.639610 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 5 00:31:09.641472 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 5 00:31:09.641514 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 5 00:31:09.643601 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 5 00:31:09.643640 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 5 00:31:09.645124 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 5 00:31:09.645174 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 5 00:31:09.647391 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 5 00:31:09.647439 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 5 00:31:09.649579 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 5 00:31:09.649625 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 5 00:31:09.662115 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 5 00:31:09.662972 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 5 00:31:09.663048 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 5 00:31:09.665171 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 5 00:31:09.665221 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 00:31:09.667034 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 5 00:31:09.667933 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 5 00:31:09.669391 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 5 00:31:09.669473 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 5 00:31:09.671531 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 5 00:31:09.672394 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 5 00:31:09.672460 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 5 00:31:09.674479 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 5 00:31:09.683601 systemd[1]: Switching root. Sep 5 00:31:09.708944 systemd-journald[237]: Journal stopped Sep 5 00:31:10.614640 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). 
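"Switching root" followed by journald's SIGTERM is the initrd handing control to the real root filesystem: PID 1 moves /sysroot onto /, re-executes itself there, and the initrd journal is flushed into /var/log/journal later by systemd-journal-flush (visible further down). The core of the unit driving this is tiny — a sketch from memory of upstream initrd-switch-root.service, so details may differ by systemd version:

  [Unit]
  Description=Switch Root
  DefaultDependencies=no
  AllowIsolate=yes

  [Service]
  Type=oneshot
  # hand over to the systemd installed on the real root at /sysroot
  ExecStart=systemctl --no-block switch-root /sysroot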
Sep 5 00:31:10.614707 kernel: SELinux: policy capability network_peer_controls=1 Sep 5 00:31:10.614721 kernel: SELinux: policy capability open_perms=1 Sep 5 00:31:10.614734 kernel: SELinux: policy capability extended_socket_class=1 Sep 5 00:31:10.614744 kernel: SELinux: policy capability always_check_network=0 Sep 5 00:31:10.614753 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 5 00:31:10.614763 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 5 00:31:10.614773 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 5 00:31:10.614782 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 5 00:31:10.614792 kernel: audit: type=1403 audit(1757032270.051:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 5 00:31:10.614803 systemd[1]: Successfully loaded SELinux policy in 31.018ms. Sep 5 00:31:10.614825 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.332ms. Sep 5 00:31:10.614837 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 5 00:31:10.614849 systemd[1]: Detected virtualization kvm. Sep 5 00:31:10.614880 systemd[1]: Detected architecture arm64. Sep 5 00:31:10.614892 systemd[1]: Detected first boot. Sep 5 00:31:10.614902 systemd[1]: Initializing machine ID from VM UUID. Sep 5 00:31:10.614913 zram_generator::config[1046]: No configuration found. Sep 5 00:31:10.614931 systemd[1]: Populated /etc with preset unit settings. Sep 5 00:31:10.614941 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 5 00:31:10.614954 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 5 00:31:10.614965 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 5 00:31:10.614976 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 5 00:31:10.614987 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 5 00:31:10.614998 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 5 00:31:10.615008 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 5 00:31:10.615018 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 5 00:31:10.615029 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 5 00:31:10.615042 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 5 00:31:10.615053 systemd[1]: Created slice user.slice - User and Session Slice. Sep 5 00:31:10.615064 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 5 00:31:10.615075 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 5 00:31:10.615085 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 5 00:31:10.615096 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 5 00:31:10.615107 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 5 00:31:10.615117 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
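The zram_generator line simply reports that no /etc/systemd/zram-generator.conf exists, so no zram swap devices are created on this image. Had one been wanted, the generator's config is a short INI file — a sketch, with sizing and algorithm as assumed example values:

  # /etc/systemd/zram-generator.conf (sketch)
  [zram0]
  zram-size = min(ram / 2, 4096)   # device size in MiB; expression evaluated at boot
  compression-algorithm = zstd     # assumed; must be supported by the running kernel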
Sep 5 00:31:10.615128 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Sep 5 00:31:10.615140 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 5 00:31:10.615150 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 5 00:31:10.615161 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 5 00:31:10.615173 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 5 00:31:10.615183 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 5 00:31:10.615194 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 5 00:31:10.615204 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 5 00:31:10.615215 systemd[1]: Reached target slices.target - Slice Units. Sep 5 00:31:10.615228 systemd[1]: Reached target swap.target - Swaps. Sep 5 00:31:10.615239 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 5 00:31:10.615250 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 5 00:31:10.615274 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 5 00:31:10.615285 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 5 00:31:10.615296 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 5 00:31:10.615307 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 5 00:31:10.615318 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 5 00:31:10.615329 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 5 00:31:10.615341 systemd[1]: Mounting media.mount - External Media Directory... Sep 5 00:31:10.615354 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 5 00:31:10.615365 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 5 00:31:10.615375 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 5 00:31:10.615387 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 5 00:31:10.615401 systemd[1]: Reached target machines.target - Containers. Sep 5 00:31:10.615413 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 5 00:31:10.615423 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 00:31:10.615435 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 5 00:31:10.615446 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 5 00:31:10.615457 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 00:31:10.615467 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 5 00:31:10.615478 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 00:31:10.615489 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 5 00:31:10.615499 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 00:31:10.615510 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
Sep 5 00:31:10.615520 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 5 00:31:10.615532 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 5 00:31:10.615542 kernel: fuse: init (API version 7.39) Sep 5 00:31:10.615553 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 5 00:31:10.615563 systemd[1]: Stopped systemd-fsck-usr.service. Sep 5 00:31:10.615649 kernel: loop: module loaded Sep 5 00:31:10.615664 kernel: ACPI: bus type drm_connector registered Sep 5 00:31:10.615681 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 5 00:31:10.615693 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 5 00:31:10.615704 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 5 00:31:10.615718 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 5 00:31:10.615753 systemd-journald[1117]: Collecting audit messages is disabled. Sep 5 00:31:10.615776 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 5 00:31:10.615787 systemd-journald[1117]: Journal started Sep 5 00:31:10.615808 systemd-journald[1117]: Runtime Journal (/run/log/journal/3cad097962c74c22bb4d6f05d7bcd113) is 5.9M, max 47.3M, 41.4M free. Sep 5 00:31:10.615848 systemd[1]: verity-setup.service: Deactivated successfully. Sep 5 00:31:10.448332 systemd[1]: Queued start job for default target multi-user.target. Sep 5 00:31:10.464780 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 5 00:31:10.465143 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 5 00:31:10.616998 systemd[1]: Stopped verity-setup.service. Sep 5 00:31:10.620388 systemd[1]: Started systemd-journald.service - Journal Service. Sep 5 00:31:10.621036 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 5 00:31:10.621949 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 5 00:31:10.622957 systemd[1]: Mounted media.mount - External Media Directory. Sep 5 00:31:10.623784 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 5 00:31:10.624927 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 5 00:31:10.625847 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 5 00:31:10.627907 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 5 00:31:10.629019 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 5 00:31:10.631166 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 5 00:31:10.631330 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 5 00:31:10.632627 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 00:31:10.632785 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 00:31:10.634000 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 5 00:31:10.634129 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 5 00:31:10.635193 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 00:31:10.635330 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 00:31:10.636649 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 5 00:31:10.636799 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. 
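The modprobe@configfs/dm_mod/drm/efi_pstore/fuse/loop services above are all instances of one template unit, modprobe@.service, which substitutes the instance name into a modprobe invocation — which is why each "Load Kernel Module X" line looks identical apart from the module. Roughly, paraphrasing the upstream template from memory (exact lines vary by systemd version):

  # modprobe@.service (sketch of the template)
  [Unit]
  Description=Load Kernel Module %i
  DefaultDependencies=no
  Before=sysinit.target
  ConditionCapability=CAP_SYS_MODULE

  [Service]
  Type=oneshot
  RemainAfterExit=yes
  ExecStart=-/sbin/modprobe -abq %I   # leading "-": a missing module is not a failure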
Sep 5 00:31:10.638020 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 00:31:10.638153 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 00:31:10.639315 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 5 00:31:10.642084 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 5 00:31:10.643271 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 5 00:31:10.655369 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 5 00:31:10.662969 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 5 00:31:10.664716 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 5 00:31:10.665641 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 5 00:31:10.665678 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 5 00:31:10.667425 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 5 00:31:10.669438 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 5 00:31:10.671296 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 5 00:31:10.672223 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 00:31:10.673605 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 5 00:31:10.675357 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 5 00:31:10.676326 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 5 00:31:10.679044 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 5 00:31:10.679939 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 5 00:31:10.680998 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 5 00:31:10.684363 systemd-journald[1117]: Time spent on flushing to /var/log/journal/3cad097962c74c22bb4d6f05d7bcd113 is 32.099ms for 854 entries. Sep 5 00:31:10.684363 systemd-journald[1117]: System Journal (/var/log/journal/3cad097962c74c22bb4d6f05d7bcd113) is 8.0M, max 195.6M, 187.6M free. Sep 5 00:31:10.728089 systemd-journald[1117]: Received client request to flush runtime journal. Sep 5 00:31:10.728512 kernel: loop0: detected capacity change from 0 to 114328 Sep 5 00:31:10.729064 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 5 00:31:10.685019 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 5 00:31:10.688153 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 5 00:31:10.690296 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 5 00:31:10.691582 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 5 00:31:10.692941 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 5 00:31:10.696010 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 5 00:31:10.697804 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. 
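systemd-sysext starting here is what turns the kubernetes.raw image Ignition staged under /etc/extensions into content merged over /usr (the "loopN: detected capacity change" and squashfs lines around it are the extension images being attached). An image is only accepted if it carries an extension-release file matching the host's os-release; a sketch of that file, with values assumed for a Flatcar host:

  # inside the image: usr/lib/extension-release.d/extension-release.kubernetes
  ID=flatcar        # must equal ID= in the host os-release, or be "_any"
  SYSEXT_LEVEL=1.0  # matched against the host's sysext level; VERSION_ID is used when absent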
Sep 5 00:31:10.703795 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 5 00:31:10.705997 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 5 00:31:10.716016 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 5 00:31:10.727890 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 5 00:31:10.729513 udevadm[1166]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 5 00:31:10.731264 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 5 00:31:10.739103 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 5 00:31:10.746028 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 5 00:31:10.747752 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 5 00:31:10.749433 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 5 00:31:10.753015 kernel: loop1: detected capacity change from 0 to 203944 Sep 5 00:31:10.767299 systemd-tmpfiles[1175]: ACLs are not supported, ignoring. Sep 5 00:31:10.767317 systemd-tmpfiles[1175]: ACLs are not supported, ignoring. Sep 5 00:31:10.771196 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 5 00:31:10.786889 kernel: loop2: detected capacity change from 0 to 114432 Sep 5 00:31:10.815928 kernel: loop3: detected capacity change from 0 to 114328 Sep 5 00:31:10.820925 kernel: loop4: detected capacity change from 0 to 203944 Sep 5 00:31:10.826873 kernel: loop5: detected capacity change from 0 to 114432 Sep 5 00:31:10.829211 (sd-merge)[1182]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 5 00:31:10.829573 (sd-merge)[1182]: Merged extensions into '/usr'. Sep 5 00:31:10.834203 systemd[1]: Reloading requested from client PID 1157 ('systemd-sysext') (unit systemd-sysext.service)... Sep 5 00:31:10.834841 systemd[1]: Reloading... Sep 5 00:31:10.889915 zram_generator::config[1206]: No configuration found. Sep 5 00:31:10.945845 ldconfig[1152]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 5 00:31:10.992241 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 00:31:11.028104 systemd[1]: Reloading finished in 192 ms. Sep 5 00:31:11.056971 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 5 00:31:11.058185 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 5 00:31:11.068049 systemd[1]: Starting ensure-sysext.service... Sep 5 00:31:11.069586 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 5 00:31:11.075521 systemd[1]: Reloading requested from client PID 1242 ('systemctl') (unit ensure-sysext.service)... Sep 5 00:31:11.075536 systemd[1]: Reloading... Sep 5 00:31:11.086910 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 5 00:31:11.087160 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. 
Sep 5 00:31:11.087786 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 5 00:31:11.088489 systemd-tmpfiles[1243]: ACLs are not supported, ignoring. Sep 5 00:31:11.088605 systemd-tmpfiles[1243]: ACLs are not supported, ignoring. Sep 5 00:31:11.092965 systemd-tmpfiles[1243]: Detected autofs mount point /boot during canonicalization of boot. Sep 5 00:31:11.093089 systemd-tmpfiles[1243]: Skipping /boot Sep 5 00:31:11.102052 systemd-tmpfiles[1243]: Detected autofs mount point /boot during canonicalization of boot. Sep 5 00:31:11.102161 systemd-tmpfiles[1243]: Skipping /boot Sep 5 00:31:11.130886 zram_generator::config[1276]: No configuration found. Sep 5 00:31:11.211828 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 00:31:11.247468 systemd[1]: Reloading finished in 171 ms. Sep 5 00:31:11.262748 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 5 00:31:11.273235 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 5 00:31:11.280835 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 5 00:31:11.283005 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 5 00:31:11.285014 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 5 00:31:11.288236 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 5 00:31:11.292768 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 5 00:31:11.297524 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 5 00:31:11.301006 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 00:31:11.304614 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 00:31:11.308177 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 00:31:11.314059 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 00:31:11.315109 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 00:31:11.316957 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 5 00:31:11.319559 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 5 00:31:11.323297 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 00:31:11.323779 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 00:31:11.325636 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 00:31:11.325788 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 00:31:11.327543 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 00:31:11.327587 systemd-udevd[1315]: Using default interface naming scheme 'v255'. Sep 5 00:31:11.327712 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 00:31:11.336402 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
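The "Duplicate line for path ..., ignoring" warnings mean two tmpfiles.d fragments declared the same path; systemd-tmpfiles keeps the first line it parses and skips later ones, so they are cosmetic here. For reference, the tmpfiles.d(5) line format these files use (example entries are illustrative, not the actual fragment contents):

  # Type Path              Mode User Group Age Argument
  d      /root             0700 root root  -
  d      /var/lib/systemd  0755 root root  -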
Sep 5 00:31:11.345224 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 00:31:11.347428 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 00:31:11.350313 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 00:31:11.352427 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 00:31:11.352956 augenrules[1341]: No rules Sep 5 00:31:11.353575 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 5 00:31:11.355035 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 5 00:31:11.357466 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 5 00:31:11.359147 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 5 00:31:11.362936 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 5 00:31:11.364413 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 5 00:31:11.368074 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 00:31:11.368970 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 00:31:11.370224 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 00:31:11.370339 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 00:31:11.382284 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 5 00:31:11.390008 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 00:31:11.390167 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 00:31:11.394200 systemd[1]: Finished ensure-sysext.service. Sep 5 00:31:11.400136 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Sep 5 00:31:11.400368 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 00:31:11.409938 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1345) Sep 5 00:31:11.410113 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 00:31:11.413023 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 5 00:31:11.417129 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 00:31:11.418181 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 00:31:11.420152 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 5 00:31:11.426309 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 5 00:31:11.429889 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 5 00:31:11.430293 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 00:31:11.430426 systemd-resolved[1311]: Positive Trust Anchors: Sep 5 00:31:11.430431 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 00:31:11.430437 systemd-resolved[1311]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 5 00:31:11.430470 systemd-resolved[1311]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 5 00:31:11.431731 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 5 00:31:11.431900 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 5 00:31:11.433470 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 00:31:11.433612 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 00:31:11.436414 systemd-resolved[1311]: Defaulting to hostname 'linux'. Sep 5 00:31:11.443040 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 5 00:31:11.447674 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 5 00:31:11.448748 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 5 00:31:11.448803 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 5 00:31:11.456949 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 5 00:31:11.465044 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 5 00:31:11.476030 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 5 00:31:11.486843 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 5 00:31:11.488265 systemd[1]: Reached target time-set.target - System Time Set. Sep 5 00:31:11.494469 systemd-networkd[1384]: lo: Link UP Sep 5 00:31:11.494476 systemd-networkd[1384]: lo: Gained carrier Sep 5 00:31:11.496244 systemd-networkd[1384]: Enumeration completed Sep 5 00:31:11.496355 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 5 00:31:11.496979 systemd-networkd[1384]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 00:31:11.496989 systemd-networkd[1384]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 5 00:31:11.497410 systemd[1]: Reached target network.target - Network. Sep 5 00:31:11.498359 systemd-networkd[1384]: eth0: Link UP Sep 5 00:31:11.498370 systemd-networkd[1384]: eth0: Gained carrier Sep 5 00:31:11.498385 systemd-networkd[1384]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 00:31:11.507048 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 5 00:31:11.513968 systemd-networkd[1384]: eth0: DHCPv4 address 10.0.0.114/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 5 00:31:11.514569 systemd-timesyncd[1385]: Network configuration changed, trying to establish connection. 
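The "found matching network" lines show eth0 being claimed by Flatcar's lowest-priority catch-all networkd unit — hence the zz- prefix and the note about a "potentially unpredictable interface name". In essence it is a match-everything DHCP stanza; a sketch, not the verbatim shipped file:

  # /usr/lib/systemd/network/zz-default.network (sketch)
  [Match]
  Name=*        # catch-all; sorts last so more specific units win

  [Network]
  DHCP=yes      # produces the DHCPv4 10.0.0.114/16 lease logged above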
Sep 5 00:31:11.517066 systemd-timesyncd[1385]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 5 00:31:11.517121 systemd-timesyncd[1385]: Initial clock synchronization to Fri 2025-09-05 00:31:11.370703 UTC. Sep 5 00:31:11.531105 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 00:31:11.539060 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 5 00:31:11.541311 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 5 00:31:11.552956 lvm[1402]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 5 00:31:11.564902 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 00:31:11.585284 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 5 00:31:11.586476 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 5 00:31:11.588056 systemd[1]: Reached target sysinit.target - System Initialization. Sep 5 00:31:11.588972 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 5 00:31:11.589909 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 5 00:31:11.591037 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 5 00:31:11.591957 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 5 00:31:11.592911 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 5 00:31:11.593812 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 5 00:31:11.593849 systemd[1]: Reached target paths.target - Path Units. Sep 5 00:31:11.594521 systemd[1]: Reached target timers.target - Timer Units. Sep 5 00:31:11.596126 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 5 00:31:11.598255 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 5 00:31:11.605812 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 5 00:31:11.607909 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 5 00:31:11.609192 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 5 00:31:11.610124 systemd[1]: Reached target sockets.target - Socket Units. Sep 5 00:31:11.610840 systemd[1]: Reached target basic.target - Basic System. Sep 5 00:31:11.611557 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 5 00:31:11.611586 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 5 00:31:11.612583 systemd[1]: Starting containerd.service - containerd container runtime... Sep 5 00:31:11.614521 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 5 00:31:11.615587 lvm[1409]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 5 00:31:11.619002 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 5 00:31:11.620819 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 5 00:31:11.622110 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). 
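docker.socket here is ordinary systemd socket activation: systemd owns the listening socket and starts the Docker service on the first client connection, which is why only the socket is set up at boot. The earlier "/var/run/docker.sock → /run/docker.sock" warning means the shipped unit still named the legacy path; a sketch of such a unit (paraphrased, not the exact file):

  # docker.socket (sketch)
  [Unit]
  Description=Docker Socket for the API

  [Socket]
  ListenStream=/run/docker.sock   # the logged warning: the unit said /var/run/docker.sock
  SocketMode=0660
  SocketUser=root
  SocketGroup=docker

  [Install]
  WantedBy=sockets.target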
Sep 5 00:31:11.624066 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 5 00:31:11.628115 jq[1412]: false Sep 5 00:31:11.628323 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 5 00:31:11.630360 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 5 00:31:11.636057 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 5 00:31:11.636749 extend-filesystems[1413]: Found loop3 Sep 5 00:31:11.636749 extend-filesystems[1413]: Found loop4 Sep 5 00:31:11.636749 extend-filesystems[1413]: Found loop5 Sep 5 00:31:11.636749 extend-filesystems[1413]: Found vda Sep 5 00:31:11.636749 extend-filesystems[1413]: Found vda1 Sep 5 00:31:11.636749 extend-filesystems[1413]: Found vda2 Sep 5 00:31:11.636749 extend-filesystems[1413]: Found vda3 Sep 5 00:31:11.640126 dbus-daemon[1411]: [system] SELinux support is enabled Sep 5 00:31:11.642399 extend-filesystems[1413]: Found usr Sep 5 00:31:11.642399 extend-filesystems[1413]: Found vda4 Sep 5 00:31:11.642399 extend-filesystems[1413]: Found vda6 Sep 5 00:31:11.642399 extend-filesystems[1413]: Found vda7 Sep 5 00:31:11.642399 extend-filesystems[1413]: Found vda9 Sep 5 00:31:11.642399 extend-filesystems[1413]: Checking size of /dev/vda9 Sep 5 00:31:11.642252 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 5 00:31:11.644394 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 5 00:31:11.644782 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 5 00:31:11.645797 systemd[1]: Starting update-engine.service - Update Engine... Sep 5 00:31:11.648017 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 5 00:31:11.649329 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 5 00:31:11.652101 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 5 00:31:11.657357 jq[1427]: true Sep 5 00:31:11.660485 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 5 00:31:11.661900 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 5 00:31:11.662164 systemd[1]: motdgen.service: Deactivated successfully. Sep 5 00:31:11.662306 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 5 00:31:11.669883 update_engine[1425]: I20250905 00:31:11.668813 1425 main.cc:92] Flatcar Update Engine starting Sep 5 00:31:11.679283 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 5 00:31:11.679488 extend-filesystems[1413]: Resized partition /dev/vda9 Sep 5 00:31:11.680265 update_engine[1425]: I20250905 00:31:11.670678 1425 update_check_scheduler.cc:74] Next update check in 9m36s Sep 5 00:31:11.676616 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 5 00:31:11.681423 extend-filesystems[1436]: resize2fs 1.47.1 (20-May-2024) Sep 5 00:31:11.676783 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 5 00:31:11.687927 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1360) Sep 5 00:31:11.694534 systemd[1]: Started update-engine.service - Update Engine. 
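update-engine starting here reads /etc/flatcar/update.conf — the file Ignition wrote back in op(8) — and locksmithd (started just below with strategy="reboot") coordinates the reboots it requests. The file is a small key=value list; a sketch with assumed values, since the log does not show its contents:

  # /etc/flatcar/update.conf (sketch)
  GROUP=stable            # release channel; assumed
  REBOOT_STRATEGY=reboot  # locksmithd policy: reboot, etcd-lock, or off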
Sep 5 00:31:11.695125 systemd-logind[1422]: Watching system buttons on /dev/input/event0 (Power Button) Sep 5 00:31:11.695301 systemd-logind[1422]: New seat seat0. Sep 5 00:31:11.696067 systemd[1]: Started systemd-logind.service - User Login Management. Sep 5 00:31:11.700426 jq[1437]: true Sep 5 00:31:11.700872 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 5 00:31:11.704357 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 5 00:31:11.704555 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 5 00:31:11.705784 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 5 00:31:11.706067 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 5 00:31:11.708699 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 5 00:31:11.714190 tar[1433]: linux-arm64/helm Sep 5 00:31:11.715194 (ntainerd)[1446]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 5 00:31:11.717272 extend-filesystems[1436]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 5 00:31:11.717272 extend-filesystems[1436]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 5 00:31:11.717272 extend-filesystems[1436]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 5 00:31:11.720228 extend-filesystems[1413]: Resized filesystem in /dev/vda9 Sep 5 00:31:11.722039 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 5 00:31:11.722245 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 5 00:31:11.754647 bash[1467]: Updated "/home/core/.ssh/authorized_keys" Sep 5 00:31:11.756424 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 5 00:31:11.760141 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 5 00:31:11.783756 locksmithd[1450]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 5 00:31:11.859986 containerd[1446]: time="2025-09-05T00:31:11.859833520Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 5 00:31:11.888952 containerd[1446]: time="2025-09-05T00:31:11.888869280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 5 00:31:11.891005 containerd[1446]: time="2025-09-05T00:31:11.890309320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.103-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 5 00:31:11.891005 containerd[1446]: time="2025-09-05T00:31:11.890343320Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 5 00:31:11.891005 containerd[1446]: time="2025-09-05T00:31:11.890359360Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Sep 5 00:31:11.891005 containerd[1446]: time="2025-09-05T00:31:11.890517880Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 5 00:31:11.891005 containerd[1446]: time="2025-09-05T00:31:11.890534080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 5 00:31:11.891005 containerd[1446]: time="2025-09-05T00:31:11.890586640Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 5 00:31:11.891005 containerd[1446]: time="2025-09-05T00:31:11.890598920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 5 00:31:11.891005 containerd[1446]: time="2025-09-05T00:31:11.890762280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 5 00:31:11.891005 containerd[1446]: time="2025-09-05T00:31:11.890778760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 5 00:31:11.891005 containerd[1446]: time="2025-09-05T00:31:11.890791680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 5 00:31:11.891005 containerd[1446]: time="2025-09-05T00:31:11.890800840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 5 00:31:11.891257 containerd[1446]: time="2025-09-05T00:31:11.890886600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 5 00:31:11.891257 containerd[1446]: time="2025-09-05T00:31:11.891070280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 5 00:31:11.891257 containerd[1446]: time="2025-09-05T00:31:11.891162760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 5 00:31:11.891257 containerd[1446]: time="2025-09-05T00:31:11.891177320Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 5 00:31:11.891257 containerd[1446]: time="2025-09-05T00:31:11.891253920Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 5 00:31:11.891347 containerd[1446]: time="2025-09-05T00:31:11.891299640Z" level=info msg="metadata content store policy set" policy=shared Sep 5 00:31:11.896499 containerd[1446]: time="2025-09-05T00:31:11.896471160Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 5 00:31:11.896565 containerd[1446]: time="2025-09-05T00:31:11.896518040Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 5 00:31:11.896565 containerd[1446]: time="2025-09-05T00:31:11.896533840Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 Sep 5 00:31:11.896565 containerd[1446]: time="2025-09-05T00:31:11.896548920Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 5 00:31:11.896565 containerd[1446]: time="2025-09-05T00:31:11.896562120Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 5 00:31:11.896796 containerd[1446]: time="2025-09-05T00:31:11.896697400Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 5 00:31:11.896961 containerd[1446]: time="2025-09-05T00:31:11.896936040Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 5 00:31:11.897053 containerd[1446]: time="2025-09-05T00:31:11.897035280Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 5 00:31:11.897081 containerd[1446]: time="2025-09-05T00:31:11.897055560Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 5 00:31:11.897081 containerd[1446]: time="2025-09-05T00:31:11.897068080Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 5 00:31:11.897135 containerd[1446]: time="2025-09-05T00:31:11.897081800Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 5 00:31:11.897135 containerd[1446]: time="2025-09-05T00:31:11.897095160Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 5 00:31:11.897135 containerd[1446]: time="2025-09-05T00:31:11.897108280Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 5 00:31:11.897135 containerd[1446]: time="2025-09-05T00:31:11.897122000Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 5 00:31:11.897206 containerd[1446]: time="2025-09-05T00:31:11.897136120Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 5 00:31:11.897206 containerd[1446]: time="2025-09-05T00:31:11.897148720Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 5 00:31:11.897206 containerd[1446]: time="2025-09-05T00:31:11.897160840Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 5 00:31:11.897206 containerd[1446]: time="2025-09-05T00:31:11.897172840Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 5 00:31:11.897206 containerd[1446]: time="2025-09-05T00:31:11.897193520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 5 00:31:11.897288 containerd[1446]: time="2025-09-05T00:31:11.897207880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 5 00:31:11.897288 containerd[1446]: time="2025-09-05T00:31:11.897220800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 5 00:31:11.897288 containerd[1446]: time="2025-09-05T00:31:11.897232800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Sep 5 00:31:11.897288 containerd[1446]: time="2025-09-05T00:31:11.897245400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 5 00:31:11.897288 containerd[1446]: time="2025-09-05T00:31:11.897265920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 5 00:31:11.897288 containerd[1446]: time="2025-09-05T00:31:11.897278040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 5 00:31:11.897393 containerd[1446]: time="2025-09-05T00:31:11.897291400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 5 00:31:11.897393 containerd[1446]: time="2025-09-05T00:31:11.897304520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 5 00:31:11.897393 containerd[1446]: time="2025-09-05T00:31:11.897318040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 5 00:31:11.897393 containerd[1446]: time="2025-09-05T00:31:11.897330760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 5 00:31:11.897393 containerd[1446]: time="2025-09-05T00:31:11.897343880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 5 00:31:11.897393 containerd[1446]: time="2025-09-05T00:31:11.897356320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 5 00:31:11.897393 containerd[1446]: time="2025-09-05T00:31:11.897376720Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 5 00:31:11.897509 containerd[1446]: time="2025-09-05T00:31:11.897396520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 5 00:31:11.897509 containerd[1446]: time="2025-09-05T00:31:11.897408560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 5 00:31:11.897509 containerd[1446]: time="2025-09-05T00:31:11.897419120Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 5 00:31:11.898999 containerd[1446]: time="2025-09-05T00:31:11.898193600Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 5 00:31:11.898999 containerd[1446]: time="2025-09-05T00:31:11.898231760Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 5 00:31:11.898999 containerd[1446]: time="2025-09-05T00:31:11.898245000Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 5 00:31:11.898999 containerd[1446]: time="2025-09-05T00:31:11.898257440Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 5 00:31:11.898999 containerd[1446]: time="2025-09-05T00:31:11.898268000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 5 00:31:11.898999 containerd[1446]: time="2025-09-05T00:31:11.898330520Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Sep 5 00:31:11.898999 containerd[1446]: time="2025-09-05T00:31:11.898345160Z" level=info msg="NRI interface is disabled by configuration." Sep 5 00:31:11.898999 containerd[1446]: time="2025-09-05T00:31:11.898355880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 5 00:31:11.899188 containerd[1446]: time="2025-09-05T00:31:11.898640560Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 5 00:31:11.899188 containerd[1446]: time="2025-09-05T00:31:11.898710640Z" level=info msg="Connect containerd service" Sep 5 00:31:11.899188 containerd[1446]: time="2025-09-05T00:31:11.898741880Z" level=info msg="using legacy CRI server" Sep 5 00:31:11.899188 containerd[1446]: time="2025-09-05T00:31:11.898749360Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 5 00:31:11.899188 containerd[1446]: time="2025-09-05T00:31:11.898830240Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 5 00:31:11.901379 containerd[1446]: 
time="2025-09-05T00:31:11.901351600Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 5 00:31:11.901695 containerd[1446]: time="2025-09-05T00:31:11.901641320Z" level=info msg="Start subscribing containerd event" Sep 5 00:31:11.902332 containerd[1446]: time="2025-09-05T00:31:11.902309920Z" level=info msg="Start recovering state" Sep 5 00:31:11.902531 containerd[1446]: time="2025-09-05T00:31:11.902514840Z" level=info msg="Start event monitor" Sep 5 00:31:11.903915 containerd[1446]: time="2025-09-05T00:31:11.902627520Z" level=info msg="Start snapshots syncer" Sep 5 00:31:11.903915 containerd[1446]: time="2025-09-05T00:31:11.902642640Z" level=info msg="Start cni network conf syncer for default" Sep 5 00:31:11.903915 containerd[1446]: time="2025-09-05T00:31:11.902650000Z" level=info msg="Start streaming server" Sep 5 00:31:11.903915 containerd[1446]: time="2025-09-05T00:31:11.902149680Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 5 00:31:11.903915 containerd[1446]: time="2025-09-05T00:31:11.902885600Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 5 00:31:11.903915 containerd[1446]: time="2025-09-05T00:31:11.902951440Z" level=info msg="containerd successfully booted in 0.043959s" Sep 5 00:31:11.903028 systemd[1]: Started containerd.service - containerd container runtime. Sep 5 00:31:11.907771 sshd_keygen[1432]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 5 00:31:11.926890 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 5 00:31:11.943171 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 5 00:31:11.949740 systemd[1]: issuegen.service: Deactivated successfully. Sep 5 00:31:11.949952 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 5 00:31:11.952409 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 5 00:31:11.963961 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 5 00:31:11.973166 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 5 00:31:11.977159 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 5 00:31:11.978612 systemd[1]: Reached target getty.target - Login Prompts. Sep 5 00:31:12.060267 tar[1433]: linux-arm64/LICENSE Sep 5 00:31:12.060348 tar[1433]: linux-arm64/README.md Sep 5 00:31:12.078100 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 5 00:31:12.623989 systemd-networkd[1384]: eth0: Gained IPv6LL Sep 5 00:31:12.626948 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 5 00:31:12.628464 systemd[1]: Reached target network-online.target - Network is Online. Sep 5 00:31:12.638085 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 5 00:31:12.640223 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:31:12.642067 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 5 00:31:12.655698 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 5 00:31:12.656473 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 5 00:31:12.657892 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Sep 5 00:31:12.660707 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 5 00:31:13.181478 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:31:13.182790 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 5 00:31:13.185587 (kubelet)[1524]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 00:31:13.186950 systemd[1]: Startup finished in 509ms (kernel) + 5.366s (initrd) + 3.166s (userspace) = 9.042s. Sep 5 00:31:13.551169 kubelet[1524]: E0905 00:31:13.551052 1524 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 00:31:13.553412 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 00:31:13.553555 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 00:31:17.464803 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 5 00:31:17.466053 systemd[1]: Started sshd@0-10.0.0.114:22-10.0.0.1:55178.service - OpenSSH per-connection server daemon (10.0.0.1:55178). Sep 5 00:31:17.511629 sshd[1538]: Accepted publickey for core from 10.0.0.1 port 55178 ssh2: RSA SHA256:AGG14PALIK3Z6LO0MtFBMGyAQes3xIvrcXGuLmkN81k Sep 5 00:31:17.513242 sshd[1538]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:31:17.521450 systemd-logind[1422]: New session 1 of user core. Sep 5 00:31:17.522482 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 5 00:31:17.539088 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 5 00:31:17.548191 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 5 00:31:17.552425 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 5 00:31:17.558806 (systemd)[1542]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 5 00:31:17.629928 systemd[1542]: Queued start job for default target default.target. Sep 5 00:31:17.646803 systemd[1542]: Created slice app.slice - User Application Slice. Sep 5 00:31:17.646833 systemd[1542]: Reached target paths.target - Paths. Sep 5 00:31:17.646845 systemd[1542]: Reached target timers.target - Timers. Sep 5 00:31:17.648114 systemd[1542]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 5 00:31:17.658016 systemd[1542]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 5 00:31:17.658080 systemd[1542]: Reached target sockets.target - Sockets. Sep 5 00:31:17.658092 systemd[1542]: Reached target basic.target - Basic System. Sep 5 00:31:17.658128 systemd[1542]: Reached target default.target - Main User Target. Sep 5 00:31:17.658153 systemd[1542]: Startup finished in 94ms. Sep 5 00:31:17.658423 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 5 00:31:17.659671 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 5 00:31:17.720585 systemd[1]: Started sshd@1-10.0.0.114:22-10.0.0.1:55194.service - OpenSSH per-connection server daemon (10.0.0.1:55194). 
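The kubelet crash above ("open /var/lib/kubelet/config.yaml: no such file or directory") is likewise expected at this point: that file is generated by kubeadm init/join, which has not run yet, so the unit fails and systemd retries it later. A minimal hand-written KubeletConfiguration of the shape the loader expects, with illustrative values chosen to be consistent with the containerd config dumped earlier:

    # /var/lib/kubelet/config.yaml (illustrative sketch)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd    # matches SystemdCgroup:true in the runc options above
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    staticPodPath: /etc/kubernetes/manifests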
Sep 5 00:31:17.759059 sshd[1553]: Accepted publickey for core from 10.0.0.1 port 55194 ssh2: RSA SHA256:AGG14PALIK3Z6LO0MtFBMGyAQes3xIvrcXGuLmkN81k Sep 5 00:31:17.760293 sshd[1553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:31:17.765081 systemd-logind[1422]: New session 2 of user core. Sep 5 00:31:17.779035 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 5 00:31:17.830084 sshd[1553]: pam_unix(sshd:session): session closed for user core Sep 5 00:31:17.849185 systemd[1]: sshd@1-10.0.0.114:22-10.0.0.1:55194.service: Deactivated successfully. Sep 5 00:31:17.852331 systemd[1]: session-2.scope: Deactivated successfully. Sep 5 00:31:17.853927 systemd-logind[1422]: Session 2 logged out. Waiting for processes to exit. Sep 5 00:31:17.866185 systemd[1]: Started sshd@2-10.0.0.114:22-10.0.0.1:55210.service - OpenSSH per-connection server daemon (10.0.0.1:55210). Sep 5 00:31:17.867540 systemd-logind[1422]: Removed session 2. Sep 5 00:31:17.902176 sshd[1560]: Accepted publickey for core from 10.0.0.1 port 55210 ssh2: RSA SHA256:AGG14PALIK3Z6LO0MtFBMGyAQes3xIvrcXGuLmkN81k Sep 5 00:31:17.903408 sshd[1560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:31:17.907305 systemd-logind[1422]: New session 3 of user core. Sep 5 00:31:17.917992 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 5 00:31:17.966529 sshd[1560]: pam_unix(sshd:session): session closed for user core Sep 5 00:31:17.980213 systemd[1]: sshd@2-10.0.0.114:22-10.0.0.1:55210.service: Deactivated successfully. Sep 5 00:31:17.981655 systemd[1]: session-3.scope: Deactivated successfully. Sep 5 00:31:17.983816 systemd-logind[1422]: Session 3 logged out. Waiting for processes to exit. Sep 5 00:31:17.997221 systemd[1]: Started sshd@3-10.0.0.114:22-10.0.0.1:55212.service - OpenSSH per-connection server daemon (10.0.0.1:55212). Sep 5 00:31:17.998181 systemd-logind[1422]: Removed session 3. Sep 5 00:31:18.032627 sshd[1567]: Accepted publickey for core from 10.0.0.1 port 55212 ssh2: RSA SHA256:AGG14PALIK3Z6LO0MtFBMGyAQes3xIvrcXGuLmkN81k Sep 5 00:31:18.033846 sshd[1567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:31:18.037320 systemd-logind[1422]: New session 4 of user core. Sep 5 00:31:18.044997 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 5 00:31:18.095825 sshd[1567]: pam_unix(sshd:session): session closed for user core Sep 5 00:31:18.108143 systemd[1]: sshd@3-10.0.0.114:22-10.0.0.1:55212.service: Deactivated successfully. Sep 5 00:31:18.109517 systemd[1]: session-4.scope: Deactivated successfully. Sep 5 00:31:18.111903 systemd-logind[1422]: Session 4 logged out. Waiting for processes to exit. Sep 5 00:31:18.113023 systemd[1]: Started sshd@4-10.0.0.114:22-10.0.0.1:55226.service - OpenSSH per-connection server daemon (10.0.0.1:55226). Sep 5 00:31:18.113607 systemd-logind[1422]: Removed session 4. Sep 5 00:31:18.151601 sshd[1574]: Accepted publickey for core from 10.0.0.1 port 55226 ssh2: RSA SHA256:AGG14PALIK3Z6LO0MtFBMGyAQes3xIvrcXGuLmkN81k Sep 5 00:31:18.152777 sshd[1574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:31:18.156093 systemd-logind[1422]: New session 5 of user core. Sep 5 00:31:18.167991 systemd[1]: Started session-5.scope - Session 5 of User core. 
Sep 5 00:31:18.222671 sudo[1577]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 5 00:31:18.222980 sudo[1577]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 00:31:18.236736 sudo[1577]: pam_unix(sudo:session): session closed for user root Sep 5 00:31:18.238472 sshd[1574]: pam_unix(sshd:session): session closed for user core Sep 5 00:31:18.245177 systemd[1]: sshd@4-10.0.0.114:22-10.0.0.1:55226.service: Deactivated successfully. Sep 5 00:31:18.247551 systemd[1]: session-5.scope: Deactivated successfully. Sep 5 00:31:18.249311 systemd-logind[1422]: Session 5 logged out. Waiting for processes to exit. Sep 5 00:31:18.250695 systemd[1]: Started sshd@5-10.0.0.114:22-10.0.0.1:55234.service - OpenSSH per-connection server daemon (10.0.0.1:55234). Sep 5 00:31:18.251523 systemd-logind[1422]: Removed session 5. Sep 5 00:31:18.290926 sshd[1582]: Accepted publickey for core from 10.0.0.1 port 55234 ssh2: RSA SHA256:AGG14PALIK3Z6LO0MtFBMGyAQes3xIvrcXGuLmkN81k Sep 5 00:31:18.292133 sshd[1582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:31:18.295512 systemd-logind[1422]: New session 6 of user core. Sep 5 00:31:18.304990 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 5 00:31:18.355660 sudo[1586]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 5 00:31:18.355984 sudo[1586]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 00:31:18.359107 sudo[1586]: pam_unix(sudo:session): session closed for user root Sep 5 00:31:18.363579 sudo[1585]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 5 00:31:18.363843 sudo[1585]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 00:31:18.382079 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 5 00:31:18.383377 auditctl[1589]: No rules Sep 5 00:31:18.384227 systemd[1]: audit-rules.service: Deactivated successfully. Sep 5 00:31:18.385903 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 5 00:31:18.387481 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 5 00:31:18.410415 augenrules[1607]: No rules Sep 5 00:31:18.411653 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 5 00:31:18.412582 sudo[1585]: pam_unix(sudo:session): session closed for user root Sep 5 00:31:18.414008 sshd[1582]: pam_unix(sshd:session): session closed for user core Sep 5 00:31:18.423054 systemd[1]: sshd@5-10.0.0.114:22-10.0.0.1:55234.service: Deactivated successfully. Sep 5 00:31:18.424452 systemd[1]: session-6.scope: Deactivated successfully. Sep 5 00:31:18.425640 systemd-logind[1422]: Session 6 logged out. Waiting for processes to exit. Sep 5 00:31:18.426730 systemd[1]: Started sshd@6-10.0.0.114:22-10.0.0.1:55246.service - OpenSSH per-connection server daemon (10.0.0.1:55246). Sep 5 00:31:18.427497 systemd-logind[1422]: Removed session 6. Sep 5 00:31:18.465251 sshd[1615]: Accepted publickey for core from 10.0.0.1 port 55246 ssh2: RSA SHA256:AGG14PALIK3Z6LO0MtFBMGyAQes3xIvrcXGuLmkN81k Sep 5 00:31:18.466437 sshd[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:31:18.470336 systemd-logind[1422]: New session 7 of user core. Sep 5 00:31:18.479993 systemd[1]: Started session-7.scope - Session 7 of User core. 
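The sudo session above empties the audit rule set: the two rule files are removed, audit-rules is restarted, and both auditctl and augenrules then report "No rules". That state can be checked or rebuilt by hand with the standard auditd tools:

    # list the rules currently loaded in the kernel (prints "No rules" when empty)
    auditctl -l
    # regenerate and load rules from whatever remains in /etc/audit/rules.d/
    augenrules --load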
Sep 5 00:31:18.530347 sudo[1618]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 5 00:31:18.530930 sudo[1618]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 00:31:18.790095 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 5 00:31:18.790220 (dockerd)[1636]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 5 00:31:18.999884 dockerd[1636]: time="2025-09-05T00:31:18.998312036Z" level=info msg="Starting up" Sep 5 00:31:19.162448 dockerd[1636]: time="2025-09-05T00:31:19.162290730Z" level=info msg="Loading containers: start." Sep 5 00:31:19.241871 kernel: Initializing XFRM netlink socket Sep 5 00:31:19.299838 systemd-networkd[1384]: docker0: Link UP Sep 5 00:31:19.315139 dockerd[1636]: time="2025-09-05T00:31:19.315076540Z" level=info msg="Loading containers: done." Sep 5 00:31:19.326106 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck672366492-merged.mount: Deactivated successfully. Sep 5 00:31:19.327443 dockerd[1636]: time="2025-09-05T00:31:19.327380007Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 5 00:31:19.327531 dockerd[1636]: time="2025-09-05T00:31:19.327491395Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 5 00:31:19.327628 dockerd[1636]: time="2025-09-05T00:31:19.327608549Z" level=info msg="Daemon has completed initialization" Sep 5 00:31:19.357553 dockerd[1636]: time="2025-09-05T00:31:19.357419900Z" level=info msg="API listen on /run/docker.sock" Sep 5 00:31:19.357733 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 5 00:31:19.840952 containerd[1446]: time="2025-09-05T00:31:19.840911575Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\"" Sep 5 00:31:20.433708 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount153233407.mount: Deactivated successfully. 
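dockerd ends its startup with "API listen on /run/docker.sock". A quick liveness check against that socket, assuming curl is present (the docker CLI's `docker version` exercises the same endpoint through the client):

    # query the Engine API directly over the unix socket
    curl --unix-socket /run/docker.sock http://localhost/version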
Sep 5 00:31:21.354184 containerd[1446]: time="2025-09-05T00:31:21.354135944Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:31:21.355456 containerd[1446]: time="2025-09-05T00:31:21.355423764Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.12: active requests=0, bytes read=25652443" Sep 5 00:31:21.356638 containerd[1446]: time="2025-09-05T00:31:21.356596734Z" level=info msg="ImageCreate event name:\"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:31:21.359540 containerd[1446]: time="2025-09-05T00:31:21.359502158Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:31:21.360541 containerd[1446]: time="2025-09-05T00:31:21.360509579Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.12\" with image id \"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\", size \"25649241\" in 1.5195525s" Sep 5 00:31:21.360769 containerd[1446]: time="2025-09-05T00:31:21.360626697Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\"" Sep 5 00:31:21.362103 containerd[1446]: time="2025-09-05T00:31:21.362078991Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\"" Sep 5 00:31:22.344359 containerd[1446]: time="2025-09-05T00:31:22.344311723Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:31:22.345214 containerd[1446]: time="2025-09-05T00:31:22.345134201Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.12: active requests=0, bytes read=22460311" Sep 5 00:31:22.346295 containerd[1446]: time="2025-09-05T00:31:22.345845215Z" level=info msg="ImageCreate event name:\"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:31:22.348678 containerd[1446]: time="2025-09-05T00:31:22.348643016Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:31:22.349955 containerd[1446]: time="2025-09-05T00:31:22.349912420Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.12\" with image id \"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\", size \"23997423\" in 987.802584ms" Sep 5 00:31:22.349955 containerd[1446]: time="2025-09-05T00:31:22.349951910Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\"" Sep 5 00:31:22.350439 
containerd[1446]: time="2025-09-05T00:31:22.350327301Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\"" Sep 5 00:31:23.382139 containerd[1446]: time="2025-09-05T00:31:23.382074939Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:31:23.383409 containerd[1446]: time="2025-09-05T00:31:23.383369334Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.12: active requests=0, bytes read=17125905" Sep 5 00:31:23.384486 containerd[1446]: time="2025-09-05T00:31:23.384449059Z" level=info msg="ImageCreate event name:\"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:31:23.386895 containerd[1446]: time="2025-09-05T00:31:23.386869052Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:31:23.388102 containerd[1446]: time="2025-09-05T00:31:23.388072458Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.12\" with image id \"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\", size \"18663035\" in 1.037714263s" Sep 5 00:31:23.388166 containerd[1446]: time="2025-09-05T00:31:23.388105987Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\"" Sep 5 00:31:23.388825 containerd[1446]: time="2025-09-05T00:31:23.388636272Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\"" Sep 5 00:31:23.803939 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 5 00:31:23.813051 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:31:23.929043 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:31:23.933082 (kubelet)[1855]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 00:31:23.967440 kubelet[1855]: E0905 00:31:23.966899 1855 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 00:31:23.969702 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 00:31:23.969838 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 00:31:24.372985 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2525744030.mount: Deactivated successfully. 
Sep 5 00:31:24.940829 containerd[1446]: time="2025-09-05T00:31:24.940774735Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:31:24.952580 containerd[1446]: time="2025-09-05T00:31:24.952539786Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.12: active requests=0, bytes read=26916097" Sep 5 00:31:24.960496 containerd[1446]: time="2025-09-05T00:31:24.960447600Z" level=info msg="ImageCreate event name:\"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:31:24.962333 containerd[1446]: time="2025-09-05T00:31:24.962285662Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:31:24.963486 containerd[1446]: time="2025-09-05T00:31:24.962998246Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.12\" with image id \"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\", repo tag \"registry.k8s.io/kube-proxy:v1.31.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\", size \"26915114\" in 1.574316494s" Sep 5 00:31:24.963486 containerd[1446]: time="2025-09-05T00:31:24.963028040Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\"" Sep 5 00:31:24.963486 containerd[1446]: time="2025-09-05T00:31:24.963427071Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 5 00:31:25.571371 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1631406232.mount: Deactivated successfully. 
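The PullImage records here are CRI-level pulls served by containerd. For debugging, the same pulls can be driven manually, assuming crictl is installed and pointed at the endpoint from the config dump:

    # pull and list images through the CRI endpoint
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock pull registry.k8s.io/kube-proxy:v1.31.12
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock images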
Sep 5 00:31:26.118869 containerd[1446]: time="2025-09-05T00:31:26.118796672Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:31:26.119219 containerd[1446]: time="2025-09-05T00:31:26.119183993Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Sep 5 00:31:26.120834 containerd[1446]: time="2025-09-05T00:31:26.120801539Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:31:26.123320 containerd[1446]: time="2025-09-05T00:31:26.123274506Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:31:26.125806 containerd[1446]: time="2025-09-05T00:31:26.125670439Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.162215281s" Sep 5 00:31:26.125806 containerd[1446]: time="2025-09-05T00:31:26.125707979Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Sep 5 00:31:26.126195 containerd[1446]: time="2025-09-05T00:31:26.126169463Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 5 00:31:26.552665 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount165618717.mount: Deactivated successfully. 
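Note the sandbox-image skew: the CRI config dump earlier advertises SandboxImage:registry.k8s.io/pause:3.8, while pause:3.10 (the version Kubernetes 1.31 tooling expects) is being pulled here. The mismatch is harmless but kubeadm typically warns about it; it can be removed by overriding the image in containerd's config, sketched here for the config-version-2 layout used by containerd 1.7:

    # /etc/containerd/config.toml (excerpt, illustrative)
    version = 2
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.10"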
Sep 5 00:31:26.557781 containerd[1446]: time="2025-09-05T00:31:26.557733046Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:31:26.558252 containerd[1446]: time="2025-09-05T00:31:26.558220434Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Sep 5 00:31:26.559086 containerd[1446]: time="2025-09-05T00:31:26.559056207Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:31:26.561180 containerd[1446]: time="2025-09-05T00:31:26.561146317Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:31:26.561939 containerd[1446]: time="2025-09-05T00:31:26.561904498Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 435.699367ms" Sep 5 00:31:26.562006 containerd[1446]: time="2025-09-05T00:31:26.561938930Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 5 00:31:26.562661 containerd[1446]: time="2025-09-05T00:31:26.562634066Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 5 00:31:27.051836 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1399405892.mount: Deactivated successfully. Sep 5 00:31:28.395994 containerd[1446]: time="2025-09-05T00:31:28.395937137Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66537163" Sep 5 00:31:28.397888 containerd[1446]: time="2025-09-05T00:31:28.396000052Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:31:28.400092 containerd[1446]: time="2025-09-05T00:31:28.400053613Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:31:28.401454 containerd[1446]: time="2025-09-05T00:31:28.401418077Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 1.838751607s" Sep 5 00:31:28.401509 containerd[1446]: time="2025-09-05T00:31:28.401459900Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Sep 5 00:31:28.402526 containerd[1446]: time="2025-09-05T00:31:28.402479091Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:31:33.825318 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 5 00:31:33.834052 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:31:33.856109 systemd[1]: Reloading requested from client PID 2012 ('systemctl') (unit session-7.scope)... Sep 5 00:31:33.856132 systemd[1]: Reloading... Sep 5 00:31:33.921882 zram_generator::config[2051]: No configuration found. Sep 5 00:31:34.064673 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 00:31:34.117654 systemd[1]: Reloading finished in 261 ms. Sep 5 00:31:34.157525 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:31:34.160071 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:31:34.162189 systemd[1]: kubelet.service: Deactivated successfully. Sep 5 00:31:34.163953 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:31:34.165416 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:31:34.269010 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:31:34.273723 (kubelet)[2098]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 5 00:31:34.313419 kubelet[2098]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 00:31:34.313419 kubelet[2098]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 5 00:31:34.313419 kubelet[2098]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
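The three deprecation warnings above come from flags still passed on the kubelet command line; the unit also references KUBELET_EXTRA_ARGS, which is unset here (hence the "Referenced but unset environment variable" notes). By convention, extra flags are supplied through that variable in a drop-in rather than by editing the unit; a sketch, with the path and flag illustrative:

    # /etc/systemd/system/kubelet.service.d/20-extra-args.conf (illustrative)
    [Service]
    Environment="KUBELET_EXTRA_ARGS=--node-ip=10.0.0.114"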
Sep 5 00:31:34.313782 kubelet[2098]: I0905 00:31:34.313631 2098 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 5 00:31:35.253025 kubelet[2098]: I0905 00:31:35.252972 2098 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 5 00:31:35.253025 kubelet[2098]: I0905 00:31:35.253013 2098 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 5 00:31:35.253324 kubelet[2098]: I0905 00:31:35.253296 2098 server.go:934] "Client rotation is on, will bootstrap in background" Sep 5 00:31:35.272728 kubelet[2098]: E0905 00:31:35.272690 2098 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.114:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.114:6443: connect: connection refused" logger="UnhandledError" Sep 5 00:31:35.273954 kubelet[2098]: I0905 00:31:35.273931 2098 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 5 00:31:35.280046 kubelet[2098]: E0905 00:31:35.279937 2098 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 5 00:31:35.280046 kubelet[2098]: I0905 00:31:35.279966 2098 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 5 00:31:35.283828 kubelet[2098]: I0905 00:31:35.283505 2098 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 5 00:31:35.284322 kubelet[2098]: I0905 00:31:35.284301 2098 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 5 00:31:35.284560 kubelet[2098]: I0905 00:31:35.284524 2098 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 5 00:31:35.284793 kubelet[2098]: I0905 00:31:35.284619 2098 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 5 00:31:35.285101 kubelet[2098]: I0905 00:31:35.285085 2098 topology_manager.go:138] "Creating topology manager with none policy" Sep 5 00:31:35.285164 kubelet[2098]: I0905 00:31:35.285155 2098 container_manager_linux.go:300] "Creating device plugin manager" Sep 5 00:31:35.285461 kubelet[2098]: I0905 00:31:35.285445 2098 state_mem.go:36] "Initialized new in-memory state store" Sep 5 00:31:35.287590 kubelet[2098]: I0905 00:31:35.287567 2098 kubelet.go:408] "Attempting to sync node with API server" Sep 5 00:31:35.287690 kubelet[2098]: I0905 00:31:35.287679 2098 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 5 00:31:35.288023 kubelet[2098]: I0905 00:31:35.287747 2098 kubelet.go:314] "Adding apiserver pod source" Sep 5 00:31:35.288023 kubelet[2098]: I0905 00:31:35.287844 2098 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 5 00:31:35.292785 kubelet[2098]: W0905 00:31:35.292670 2098 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.114:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.114:6443: connect: connection refused Sep 5 00:31:35.292881 kubelet[2098]: E0905 00:31:35.292799 2098 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.0.0.114:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.114:6443: connect: connection refused" logger="UnhandledError" Sep 5 00:31:35.293057 kubelet[2098]: I0905 00:31:35.293038 2098 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 5 00:31:35.293371 kubelet[2098]: W0905 00:31:35.293324 2098 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.114:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.114:6443: connect: connection refused Sep 5 00:31:35.293427 kubelet[2098]: E0905 00:31:35.293387 2098 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.114:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.114:6443: connect: connection refused" logger="UnhandledError" Sep 5 00:31:35.293947 kubelet[2098]: I0905 00:31:35.293885 2098 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 5 00:31:35.296877 kubelet[2098]: W0905 00:31:35.294105 2098 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 5 00:31:35.296877 kubelet[2098]: I0905 00:31:35.295946 2098 server.go:1274] "Started kubelet" Sep 5 00:31:35.296877 kubelet[2098]: I0905 00:31:35.296140 2098 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 5 00:31:35.297969 kubelet[2098]: I0905 00:31:35.297841 2098 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 5 00:31:35.298394 kubelet[2098]: I0905 00:31:35.298376 2098 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 5 00:31:35.301432 kubelet[2098]: I0905 00:31:35.299744 2098 server.go:449] "Adding debug handlers to kubelet server" Sep 5 00:31:35.301432 kubelet[2098]: I0905 00:31:35.299947 2098 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 5 00:31:35.301432 kubelet[2098]: E0905 00:31:35.299411 2098 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.114:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.114:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18623b903c009755 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-05 00:31:35.295915861 +0000 UTC m=+1.018771994,LastTimestamp:2025-09-05 00:31:35.295915861 +0000 UTC m=+1.018771994,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 5 00:31:35.301432 kubelet[2098]: I0905 00:31:35.300585 2098 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 5 00:31:35.301432 kubelet[2098]: I0905 00:31:35.300596 2098 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 5 00:31:35.301432 kubelet[2098]: I0905 00:31:35.300695 2098 
desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 5 00:31:35.301432 kubelet[2098]: I0905 00:31:35.300748 2098 reconciler.go:26] "Reconciler: start to sync state" Sep 5 00:31:35.301432 kubelet[2098]: E0905 00:31:35.300762 2098 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:31:35.301432 kubelet[2098]: W0905 00:31:35.301110 2098 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.114:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.114:6443: connect: connection refused Sep 5 00:31:35.301697 kubelet[2098]: E0905 00:31:35.301152 2098 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.114:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.114:6443: connect: connection refused" logger="UnhandledError" Sep 5 00:31:35.301892 kubelet[2098]: I0905 00:31:35.301849 2098 factory.go:221] Registration of the systemd container factory successfully Sep 5 00:31:35.302000 kubelet[2098]: I0905 00:31:35.301978 2098 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 5 00:31:35.302240 kubelet[2098]: E0905 00:31:35.302223 2098 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 5 00:31:35.302500 kubelet[2098]: E0905 00:31:35.302335 2098 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.114:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.114:6443: connect: connection refused" interval="200ms" Sep 5 00:31:35.303008 kubelet[2098]: I0905 00:31:35.302937 2098 factory.go:221] Registration of the containerd container factory successfully Sep 5 00:31:35.314368 kubelet[2098]: I0905 00:31:35.314316 2098 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 5 00:31:35.315757 kubelet[2098]: I0905 00:31:35.315740 2098 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 5 00:31:35.315757 kubelet[2098]: I0905 00:31:35.315753 2098 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 5 00:31:35.315962 kubelet[2098]: I0905 00:31:35.315771 2098 state_mem.go:36] "Initialized new in-memory state store" Sep 5 00:31:35.316892 kubelet[2098]: I0905 00:31:35.316364 2098 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 5 00:31:35.316892 kubelet[2098]: I0905 00:31:35.316386 2098 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 5 00:31:35.316892 kubelet[2098]: I0905 00:31:35.316402 2098 kubelet.go:2321] "Starting kubelet main sync loop" Sep 5 00:31:35.316892 kubelet[2098]: E0905 00:31:35.316440 2098 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 5 00:31:35.401449 kubelet[2098]: E0905 00:31:35.401410 2098 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:31:35.408309 kubelet[2098]: I0905 00:31:35.408277 2098 policy_none.go:49] "None policy: Start" Sep 5 00:31:35.408897 kubelet[2098]: W0905 00:31:35.408824 2098 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.114:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.114:6443: connect: connection refused Sep 5 00:31:35.409062 kubelet[2098]: E0905 00:31:35.409039 2098 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.114:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.114:6443: connect: connection refused" logger="UnhandledError" Sep 5 00:31:35.409184 kubelet[2098]: I0905 00:31:35.409059 2098 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 5 00:31:35.409265 kubelet[2098]: I0905 00:31:35.409256 2098 state_mem.go:35] "Initializing new in-memory state store" Sep 5 00:31:35.414603 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 5 00:31:35.417012 kubelet[2098]: E0905 00:31:35.416987 2098 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 5 00:31:35.434395 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 5 00:31:35.448384 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Sep 5 00:31:35.449643 kubelet[2098]: I0905 00:31:35.449616 2098 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 5 00:31:35.449839 kubelet[2098]: I0905 00:31:35.449824 2098 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 5 00:31:35.449887 kubelet[2098]: I0905 00:31:35.449843 2098 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 5 00:31:35.450495 kubelet[2098]: I0905 00:31:35.450477 2098 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 5 00:31:35.452190 kubelet[2098]: E0905 00:31:35.452157 2098 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 5 00:31:35.503109 kubelet[2098]: E0905 00:31:35.502996 2098 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.114:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.114:6443: connect: connection refused" interval="400ms" Sep 5 00:31:35.551330 kubelet[2098]: I0905 00:31:35.551248 2098 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 5 00:31:35.551714 kubelet[2098]: E0905 00:31:35.551689 2098 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.114:6443/api/v1/nodes\": dial tcp 10.0.0.114:6443: connect: connection refused" node="localhost" Sep 5 00:31:35.625295 systemd[1]: Created slice kubepods-burstable-pod47d4d9a1a7f643d3f40bf448785c159a.slice - libcontainer container kubepods-burstable-pod47d4d9a1a7f643d3f40bf448785c159a.slice. Sep 5 00:31:35.661769 systemd[1]: Created slice kubepods-burstable-podfec3f691a145cb26ff55e4af388500b7.slice - libcontainer container kubepods-burstable-podfec3f691a145cb26ff55e4af388500b7.slice. Sep 5 00:31:35.664888 systemd[1]: Created slice kubepods-burstable-pod5dc878868de11c6196259ae42039f4ff.slice - libcontainer container kubepods-burstable-pod5dc878868de11c6196259ae42039f4ff.slice. 
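These per-pod slices belong to the three control-plane static pods; their UID-like suffixes (47d4d9a1…, fec3f691…, 5dc87886…) reappear in the volume records below and in the RunPodSandbox calls later. Static pods come from manifests in the staticPodPath logged earlier (/etc/kubernetes/manifests); a heavily abridged sketch of the shape of such a manifest (the real kubeadm-generated file is much longer):

    # /etc/kubernetes/manifests/kube-apiserver.yaml (abridged, illustrative)
    apiVersion: v1
    kind: Pod
    metadata:
      name: kube-apiserver
      namespace: kube-system
    spec:
      hostNetwork: true
      containers:
      - name: kube-apiserver
        image: registry.k8s.io/kube-apiserver:v1.31.12
        command: ["kube-apiserver", "--advertise-address=10.0.0.114"]
        volumeMounts:
        - name: k8s-certs
          mountPath: /etc/kubernetes/pki
          readOnly: true
      volumes:
      - name: k8s-certs
        hostPath:
          path: /etc/kubernetes/pki
          type: DirectoryOrCreate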
Sep 5 00:31:35.702241 kubelet[2098]: I0905 00:31:35.702188 2098 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/47d4d9a1a7f643d3f40bf448785c159a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"47d4d9a1a7f643d3f40bf448785c159a\") " pod="kube-system/kube-apiserver-localhost" Sep 5 00:31:35.702241 kubelet[2098]: I0905 00:31:35.702233 2098 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/47d4d9a1a7f643d3f40bf448785c159a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"47d4d9a1a7f643d3f40bf448785c159a\") " pod="kube-system/kube-apiserver-localhost" Sep 5 00:31:35.702241 kubelet[2098]: I0905 00:31:35.702252 2098 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:31:35.702412 kubelet[2098]: I0905 00:31:35.702267 2098 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:31:35.702412 kubelet[2098]: I0905 00:31:35.702282 2098 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:31:35.702412 kubelet[2098]: I0905 00:31:35.702296 2098 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/47d4d9a1a7f643d3f40bf448785c159a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"47d4d9a1a7f643d3f40bf448785c159a\") " pod="kube-system/kube-apiserver-localhost" Sep 5 00:31:35.702412 kubelet[2098]: I0905 00:31:35.702310 2098 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:31:35.702412 kubelet[2098]: I0905 00:31:35.702324 2098 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:31:35.702512 kubelet[2098]: I0905 00:31:35.702338 2098 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " 
pod="kube-system/kube-scheduler-localhost" Sep 5 00:31:35.753726 kubelet[2098]: I0905 00:31:35.753652 2098 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 5 00:31:35.754245 kubelet[2098]: E0905 00:31:35.754191 2098 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.114:6443/api/v1/nodes\": dial tcp 10.0.0.114:6443: connect: connection refused" node="localhost" Sep 5 00:31:35.903659 kubelet[2098]: E0905 00:31:35.903609 2098 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.114:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.114:6443: connect: connection refused" interval="800ms" Sep 5 00:31:35.958982 kubelet[2098]: E0905 00:31:35.958951 2098 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:31:35.959675 containerd[1446]: time="2025-09-05T00:31:35.959559809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:47d4d9a1a7f643d3f40bf448785c159a,Namespace:kube-system,Attempt:0,}" Sep 5 00:31:35.964816 kubelet[2098]: E0905 00:31:35.964770 2098 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:31:35.965145 containerd[1446]: time="2025-09-05T00:31:35.965106892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,}" Sep 5 00:31:35.967384 kubelet[2098]: E0905 00:31:35.967359 2098 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:31:35.971062 containerd[1446]: time="2025-09-05T00:31:35.970838791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,}" Sep 5 00:31:36.115945 kubelet[2098]: W0905 00:31:36.115842 2098 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.114:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.114:6443: connect: connection refused Sep 5 00:31:36.115945 kubelet[2098]: E0905 00:31:36.115941 2098 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.114:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.114:6443: connect: connection refused" logger="UnhandledError" Sep 5 00:31:36.156285 kubelet[2098]: I0905 00:31:36.156252 2098 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 5 00:31:36.156631 kubelet[2098]: E0905 00:31:36.156597 2098 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.114:6443/api/v1/nodes\": dial tcp 10.0.0.114:6443: connect: connection refused" node="localhost" Sep 5 00:31:36.457311 kubelet[2098]: W0905 00:31:36.457132 2098 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.114:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.114:6443: connect: 
connection refused Sep 5 00:31:36.457311 kubelet[2098]: E0905 00:31:36.457202 2098 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.114:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.114:6443: connect: connection refused" logger="UnhandledError" Sep 5 00:31:36.465406 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount224363025.mount: Deactivated successfully. Sep 5 00:31:36.470046 containerd[1446]: time="2025-09-05T00:31:36.470004766Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:31:36.471797 containerd[1446]: time="2025-09-05T00:31:36.471736675Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 5 00:31:36.473992 containerd[1446]: time="2025-09-05T00:31:36.472397390Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:31:36.473992 containerd[1446]: time="2025-09-05T00:31:36.473215916Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:31:36.473992 containerd[1446]: time="2025-09-05T00:31:36.473940147Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:31:36.474635 containerd[1446]: time="2025-09-05T00:31:36.474603855Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 5 00:31:36.475251 containerd[1446]: time="2025-09-05T00:31:36.475157098Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Sep 5 00:31:36.477452 containerd[1446]: time="2025-09-05T00:31:36.477418577Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:31:36.480111 kubelet[2098]: W0905 00:31:36.480057 2098 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.114:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.114:6443: connect: connection refused Sep 5 00:31:36.480159 kubelet[2098]: E0905 00:31:36.480138 2098 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.114:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.114:6443: connect: connection refused" logger="UnhandledError" Sep 5 00:31:36.480332 containerd[1446]: time="2025-09-05T00:31:36.480302963Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 509.389207ms" Sep 5 00:31:36.482754 containerd[1446]: time="2025-09-05T00:31:36.482583764Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 517.416596ms" Sep 5 00:31:36.483277 containerd[1446]: time="2025-09-05T00:31:36.483249468Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 523.537932ms" Sep 5 00:31:36.570104 containerd[1446]: time="2025-09-05T00:31:36.570009289Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:31:36.570104 containerd[1446]: time="2025-09-05T00:31:36.570066777Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:31:36.570104 containerd[1446]: time="2025-09-05T00:31:36.570078993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:31:36.570344 containerd[1446]: time="2025-09-05T00:31:36.570154047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:31:36.572397 containerd[1446]: time="2025-09-05T00:31:36.572057223Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:31:36.572397 containerd[1446]: time="2025-09-05T00:31:36.572100858Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:31:36.572397 containerd[1446]: time="2025-09-05T00:31:36.572111198Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:31:36.572397 containerd[1446]: time="2025-09-05T00:31:36.572174434Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:31:36.573149 containerd[1446]: time="2025-09-05T00:31:36.573070850Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:31:36.573274 containerd[1446]: time="2025-09-05T00:31:36.573243594Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:31:36.573473 containerd[1446]: time="2025-09-05T00:31:36.573434382Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:31:36.573899 containerd[1446]: time="2025-09-05T00:31:36.573724058Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:31:36.590134 systemd[1]: Started cri-containerd-5a50da16a8b2910b609cde4467b9f63180c65f048dca840f352d5bf61504d1d0.scope - libcontainer container 5a50da16a8b2910b609cde4467b9f63180c65f048dca840f352d5bf61504d1d0. Sep 5 00:31:36.594729 systemd[1]: Started cri-containerd-711c068eed886f5a0ea6af36669f9458b713d24b8608d0d77ceaf5d72100ece0.scope - libcontainer container 711c068eed886f5a0ea6af36669f9458b713d24b8608d0d77ceaf5d72100ece0. Sep 5 00:31:36.596586 systemd[1]: Started cri-containerd-84f0308fe1f22af170188d47e87c06d8a426cb2b64f5d5c2898e0fbd34f8f809.scope - libcontainer container 84f0308fe1f22af170188d47e87c06d8a426cb2b64f5d5c2898e0fbd34f8f809. Sep 5 00:31:36.630074 containerd[1446]: time="2025-09-05T00:31:36.630027915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"5a50da16a8b2910b609cde4467b9f63180c65f048dca840f352d5bf61504d1d0\"" Sep 5 00:31:36.631323 kubelet[2098]: E0905 00:31:36.631295 2098 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:31:36.632449 containerd[1446]: time="2025-09-05T00:31:36.632391914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"84f0308fe1f22af170188d47e87c06d8a426cb2b64f5d5c2898e0fbd34f8f809\"" Sep 5 00:31:36.633012 kubelet[2098]: E0905 00:31:36.632982 2098 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:31:36.633473 containerd[1446]: time="2025-09-05T00:31:36.633428736Z" level=info msg="CreateContainer within sandbox \"5a50da16a8b2910b609cde4467b9f63180c65f048dca840f352d5bf61504d1d0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 5 00:31:36.634528 containerd[1446]: time="2025-09-05T00:31:36.634496498Z" level=info msg="CreateContainer within sandbox \"84f0308fe1f22af170188d47e87c06d8a426cb2b64f5d5c2898e0fbd34f8f809\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 5 00:31:36.636032 containerd[1446]: time="2025-09-05T00:31:36.636004922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:47d4d9a1a7f643d3f40bf448785c159a,Namespace:kube-system,Attempt:0,} returns sandbox id \"711c068eed886f5a0ea6af36669f9458b713d24b8608d0d77ceaf5d72100ece0\"" Sep 5 00:31:36.637946 kubelet[2098]: E0905 00:31:36.637677 2098 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:31:36.641147 containerd[1446]: time="2025-09-05T00:31:36.641108389Z" level=info msg="CreateContainer within sandbox \"711c068eed886f5a0ea6af36669f9458b713d24b8608d0d77ceaf5d72100ece0\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 5 00:31:36.654196 containerd[1446]: time="2025-09-05T00:31:36.654151843Z" level=info msg="CreateContainer within sandbox \"5a50da16a8b2910b609cde4467b9f63180c65f048dca840f352d5bf61504d1d0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"01c12ec44cfb51838a2687b5150080c8e122537aff78323971d267308dd84564\"" Sep 5 00:31:36.654845 
containerd[1446]: time="2025-09-05T00:31:36.654815511Z" level=info msg="StartContainer for \"01c12ec44cfb51838a2687b5150080c8e122537aff78323971d267308dd84564\"" Sep 5 00:31:36.658893 containerd[1446]: time="2025-09-05T00:31:36.657915398Z" level=info msg="CreateContainer within sandbox \"711c068eed886f5a0ea6af36669f9458b713d24b8608d0d77ceaf5d72100ece0\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b270e50f0f109cd440a2600d10b68d77053bf02e0e2b84507cddd1af40495cb1\"" Sep 5 00:31:36.658893 containerd[1446]: time="2025-09-05T00:31:36.658239886Z" level=info msg="CreateContainer within sandbox \"84f0308fe1f22af170188d47e87c06d8a426cb2b64f5d5c2898e0fbd34f8f809\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5d30d804f61d14127bdfeae9093005b3707a2b07e950f49498094ea7e5c576e8\"" Sep 5 00:31:36.659379 containerd[1446]: time="2025-09-05T00:31:36.659351642Z" level=info msg="StartContainer for \"b270e50f0f109cd440a2600d10b68d77053bf02e0e2b84507cddd1af40495cb1\"" Sep 5 00:31:36.659429 containerd[1446]: time="2025-09-05T00:31:36.659388411Z" level=info msg="StartContainer for \"5d30d804f61d14127bdfeae9093005b3707a2b07e950f49498094ea7e5c576e8\"" Sep 5 00:31:36.686095 systemd[1]: Started cri-containerd-01c12ec44cfb51838a2687b5150080c8e122537aff78323971d267308dd84564.scope - libcontainer container 01c12ec44cfb51838a2687b5150080c8e122537aff78323971d267308dd84564. Sep 5 00:31:36.690050 systemd[1]: Started cri-containerd-5d30d804f61d14127bdfeae9093005b3707a2b07e950f49498094ea7e5c576e8.scope - libcontainer container 5d30d804f61d14127bdfeae9093005b3707a2b07e950f49498094ea7e5c576e8. Sep 5 00:31:36.691607 systemd[1]: Started cri-containerd-b270e50f0f109cd440a2600d10b68d77053bf02e0e2b84507cddd1af40495cb1.scope - libcontainer container b270e50f0f109cd440a2600d10b68d77053bf02e0e2b84507cddd1af40495cb1. 
Sep 5 00:31:36.705025 kubelet[2098]: E0905 00:31:36.704984 2098 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.114:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.114:6443: connect: connection refused" interval="1.6s"
Sep 5 00:31:36.730493 containerd[1446]: time="2025-09-05T00:31:36.730368503Z" level=info msg="StartContainer for \"5d30d804f61d14127bdfeae9093005b3707a2b07e950f49498094ea7e5c576e8\" returns successfully"
Sep 5 00:31:36.730493 containerd[1446]: time="2025-09-05T00:31:36.730384911Z" level=info msg="StartContainer for \"01c12ec44cfb51838a2687b5150080c8e122537aff78323971d267308dd84564\" returns successfully"
Sep 5 00:31:36.730703 containerd[1446]: time="2025-09-05T00:31:36.730524559Z" level=info msg="StartContainer for \"b270e50f0f109cd440a2600d10b68d77053bf02e0e2b84507cddd1af40495cb1\" returns successfully"
Sep 5 00:31:36.958025 kubelet[2098]: I0905 00:31:36.957989 2098 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 5 00:31:37.322641 kubelet[2098]: E0905 00:31:37.322534 2098 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:31:37.325352 kubelet[2098]: E0905 00:31:37.325329 2098 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:31:37.327520 kubelet[2098]: E0905 00:31:37.327504 2098 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:31:38.329723 kubelet[2098]: E0905 00:31:38.329693 2098 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:31:38.329723 kubelet[2098]: E0905 00:31:38.329724 2098 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:31:38.457317 kubelet[2098]: E0905 00:31:38.457271 2098 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Sep 5 00:31:38.521281 kubelet[2098]: I0905 00:31:38.521236 2098 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Sep 5 00:31:38.521281 kubelet[2098]: E0905 00:31:38.521282 2098 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Sep 5 00:31:38.535397 kubelet[2098]: E0905 00:31:38.535354 2098 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 5 00:31:38.636991 kubelet[2098]: E0905 00:31:38.636483 2098 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 5 00:31:38.737124 kubelet[2098]: E0905 00:31:38.737086 2098 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 5 00:31:38.837957 kubelet[2098]: E0905 00:31:38.837903 2098 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 5 00:31:38.938528 kubelet[2098]: E0905 00:31:38.938417 2098 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 5 00:31:39.039084 kubelet[2098]: E0905 00:31:39.039037 2098 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 5 00:31:39.139360 kubelet[2098]: E0905 00:31:39.139319 2098 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 5 00:31:39.294501 kubelet[2098]: I0905 00:31:39.294391 2098 apiserver.go:52] "Watching apiserver"
Sep 5 00:31:39.300835 kubelet[2098]: I0905 00:31:39.300800 2098 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Sep 5 00:31:39.334981 kubelet[2098]: E0905 00:31:39.334932 2098 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Sep 5 00:31:39.335337 kubelet[2098]: E0905 00:31:39.335106 2098 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:31:40.714009 systemd[1]: Reloading requested from client PID 2384 ('systemctl') (unit session-7.scope)...
Sep 5 00:31:40.714027 systemd[1]: Reloading...
Sep 5 00:31:40.779056 zram_generator::config[2423]: No configuration found.
Sep 5 00:31:40.862909 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 5 00:31:40.931437 systemd[1]: Reloading finished in 217 ms.
Sep 5 00:31:40.976066 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 5 00:31:40.999322 systemd[1]: kubelet.service: Deactivated successfully.
Sep 5 00:31:40.999653 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 5 00:31:40.999697 systemd[1]: kubelet.service: Consumed 1.359s CPU time, 130.0M memory peak, 0B memory swap peak.
Sep 5 00:31:41.016131 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 5 00:31:41.121764 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 5 00:31:41.125746 (kubelet)[2465]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 5 00:31:41.170934 kubelet[2465]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 5 00:31:41.170934 kubelet[2465]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 5 00:31:41.170934 kubelet[2465]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 5 00:31:41.172052 kubelet[2465]: I0905 00:31:41.170971 2465 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 5 00:31:41.183046 kubelet[2465]: I0905 00:31:41.180195 2465 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 5 00:31:41.183046 kubelet[2465]: I0905 00:31:41.180223 2465 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 5 00:31:41.183046 kubelet[2465]: I0905 00:31:41.180532 2465 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 5 00:31:41.183046 kubelet[2465]: I0905 00:31:41.182167 2465 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Sep 5 00:31:41.185072 kubelet[2465]: I0905 00:31:41.184996 2465 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 5 00:31:41.188570 kubelet[2465]: E0905 00:31:41.188489 2465 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 5 00:31:41.188725 kubelet[2465]: I0905 00:31:41.188705 2465 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 5 00:31:41.191938 kubelet[2465]: I0905 00:31:41.191913 2465 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 5 00:31:41.192265 kubelet[2465]: I0905 00:31:41.192245 2465 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 5 00:31:41.192471 kubelet[2465]: I0905 00:31:41.192443 2465 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 5 00:31:41.192790 kubelet[2465]: I0905 00:31:41.192524 2465 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 5 00:31:41.192944 kubelet[2465]: I0905 00:31:41.192929 2465 topology_manager.go:138] "Creating topology manager with none policy"
Sep 5 00:31:41.193015 kubelet[2465]: I0905 00:31:41.193005 2465 container_manager_linux.go:300] "Creating device plugin manager"
Sep 5 00:31:41.193153 kubelet[2465]: I0905 00:31:41.193084 2465 state_mem.go:36] "Initialized new in-memory state store"
Sep 5 00:31:41.193367 kubelet[2465]: I0905 00:31:41.193355 2465 kubelet.go:408] "Attempting to sync node with API server"
Sep 5 00:31:41.193940 kubelet[2465]: I0905 00:31:41.193923 2465 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 5 00:31:41.194045 kubelet[2465]: I0905 00:31:41.194033 2465 kubelet.go:314] "Adding apiserver pod source"
Sep 5 00:31:41.194099 kubelet[2465]: I0905 00:31:41.194091 2465 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 5 00:31:41.194962 kubelet[2465]: I0905 00:31:41.194944 2465 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Sep 5 00:31:41.195561 kubelet[2465]: I0905 00:31:41.195537 2465 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 5 00:31:41.196136 kubelet[2465]: I0905 00:31:41.196116 2465 server.go:1274] "Started kubelet"
Sep 5 00:31:41.196539 kubelet[2465]: I0905 00:31:41.196497 2465 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 5 00:31:41.197064 kubelet[2465]: I0905 00:31:41.197012 2465 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 5 00:31:41.197933 kubelet[2465]: I0905 00:31:41.197659 2465 server.go:449] "Adding debug handlers to kubelet server"
Sep 5 00:31:41.199031 kubelet[2465]: I0905 00:31:41.199009 2465 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 5 00:31:41.201061 kubelet[2465]: I0905 00:31:41.201024 2465 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 5 00:31:41.206049 kubelet[2465]: I0905 00:31:41.206030 2465 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 5 00:31:41.206288 kubelet[2465]: I0905 00:31:41.206214 2465 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 5 00:31:41.206660 kubelet[2465]: E0905 00:31:41.206634 2465 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 5 00:31:41.208081 kubelet[2465]: I0905 00:31:41.208061 2465 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Sep 5 00:31:41.208213 kubelet[2465]: I0905 00:31:41.208185 2465 factory.go:221] Registration of the systemd container factory successfully
Sep 5 00:31:41.208314 kubelet[2465]: I0905 00:31:41.208293 2465 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 5 00:31:41.208770 kubelet[2465]: I0905 00:31:41.208752 2465 reconciler.go:26] "Reconciler: start to sync state"
Sep 5 00:31:41.222036 kubelet[2465]: I0905 00:31:41.222010 2465 factory.go:221] Registration of the containerd container factory successfully
Sep 5 00:31:41.225943 kubelet[2465]: I0905 00:31:41.225135 2465 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 5 00:31:41.227970 kubelet[2465]: I0905 00:31:41.227900 2465 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 5 00:31:41.228062 kubelet[2465]: I0905 00:31:41.228050 2465 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 5 00:31:41.228127 kubelet[2465]: I0905 00:31:41.228117 2465 kubelet.go:2321] "Starting kubelet main sync loop"
Sep 5 00:31:41.228221 kubelet[2465]: E0905 00:31:41.228205 2465 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 5 00:31:41.230485 kubelet[2465]: E0905 00:31:41.230459 2465 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 5 00:31:41.258667 kubelet[2465]: I0905 00:31:41.258639 2465 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 5 00:31:41.258822 kubelet[2465]: I0905 00:31:41.258807 2465 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 5 00:31:41.258934 kubelet[2465]: I0905 00:31:41.258924 2465 state_mem.go:36] "Initialized new in-memory state store"
Sep 5 00:31:41.259134 kubelet[2465]: I0905 00:31:41.259116 2465 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 5 00:31:41.259209 kubelet[2465]: I0905 00:31:41.259183 2465 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 5 00:31:41.259263 kubelet[2465]: I0905 00:31:41.259255 2465 policy_none.go:49] "None policy: Start"
Sep 5 00:31:41.260106 kubelet[2465]: I0905 00:31:41.260086 2465 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep 5 00:31:41.260150 kubelet[2465]: I0905 00:31:41.260118 2465 state_mem.go:35] "Initializing new in-memory state store"
Sep 5 00:31:41.260382 kubelet[2465]: I0905 00:31:41.260366 2465 state_mem.go:75] "Updated machine memory state"
Sep 5 00:31:41.264209 kubelet[2465]: I0905 00:31:41.264189 2465 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 5 00:31:41.264356 kubelet[2465]: I0905 00:31:41.264341 2465 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 5 00:31:41.264391 kubelet[2465]: I0905 00:31:41.264357 2465 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 5 00:31:41.264905 kubelet[2465]: I0905 00:31:41.264820 2465 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 5 00:31:41.367957 kubelet[2465]: I0905 00:31:41.367922 2465 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 5 00:31:41.376572 kubelet[2465]: I0905 00:31:41.376544 2465 kubelet_node_status.go:111] "Node was previously registered" node="localhost"
Sep 5 00:31:41.376685 kubelet[2465]: I0905 00:31:41.376621 2465 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Sep 5 00:31:41.411177 kubelet[2465]: I0905 00:31:41.411139 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 5 00:31:41.411177 kubelet[2465]: I0905 00:31:41.411179 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 5 00:31:41.411302 kubelet[2465]: I0905 00:31:41.411201 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 5 00:31:41.411302 kubelet[2465]: I0905 00:31:41.411216 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/47d4d9a1a7f643d3f40bf448785c159a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"47d4d9a1a7f643d3f40bf448785c159a\") " pod="kube-system/kube-apiserver-localhost"
Sep 5 00:31:41.411302 kubelet[2465]: I0905 00:31:41.411234 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/47d4d9a1a7f643d3f40bf448785c159a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"47d4d9a1a7f643d3f40bf448785c159a\") " pod="kube-system/kube-apiserver-localhost"
Sep 5 00:31:41.411302 kubelet[2465]: I0905 00:31:41.411249 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/47d4d9a1a7f643d3f40bf448785c159a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"47d4d9a1a7f643d3f40bf448785c159a\") " pod="kube-system/kube-apiserver-localhost"
Sep 5 00:31:41.411302 kubelet[2465]: I0905 00:31:41.411264 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 5 00:31:41.411433 kubelet[2465]: I0905 00:31:41.411279 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 5 00:31:41.411433 kubelet[2465]: I0905 00:31:41.411304 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost"
Sep 5 00:31:41.637295 kubelet[2465]: E0905 00:31:41.637007 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:31:41.637295 kubelet[2465]: E0905 00:31:41.637141 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:31:41.637295 kubelet[2465]: E0905 00:31:41.637220 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:31:42.194544 kubelet[2465]: I0905 00:31:42.194468 2465 apiserver.go:52] "Watching apiserver"
Sep 5 00:31:42.208732 kubelet[2465]: I0905 00:31:42.208683 2465 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Sep 5 00:31:42.246340 kubelet[2465]: E0905 00:31:42.245982 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:31:42.246340 kubelet[2465]: E0905 00:31:42.246195 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:31:42.252984 kubelet[2465]: E0905 00:31:42.252744 2465 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Sep 5 00:31:42.252984 kubelet[2465]: E0905 00:31:42.252916 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:31:42.268565 kubelet[2465]: I0905 00:31:42.267992 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.267977463 podStartE2EDuration="1.267977463s" podCreationTimestamp="2025-09-05 00:31:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:31:42.26726045 +0000 UTC m=+1.138310704" watchObservedRunningTime="2025-09-05 00:31:42.267977463 +0000 UTC m=+1.139027717"
Sep 5 00:31:42.291668 kubelet[2465]: I0905 00:31:42.291597 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.291579694 podStartE2EDuration="1.291579694s" podCreationTimestamp="2025-09-05 00:31:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:31:42.279495892 +0000 UTC m=+1.150546146" watchObservedRunningTime="2025-09-05 00:31:42.291579694 +0000 UTC m=+1.162629948"
Sep 5 00:31:42.291894 kubelet[2465]: I0905 00:31:42.291796 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.2917888180000001 podStartE2EDuration="1.291788818s" podCreationTimestamp="2025-09-05 00:31:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:31:42.290605381 +0000 UTC m=+1.161655595" watchObservedRunningTime="2025-09-05 00:31:42.291788818 +0000 UTC m=+1.162839112"
Sep 5 00:31:43.247615 kubelet[2465]: E0905 00:31:43.247581 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:31:47.476948 kubelet[2465]: E0905 00:31:47.476604 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:31:47.546134 kubelet[2465]: E0905 00:31:47.546066 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:31:47.641091 kubelet[2465]: I0905 00:31:47.641050 2465 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 5 00:31:47.641874 containerd[1446]: time="2025-09-05T00:31:47.641372909Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 5 00:31:47.642401 kubelet[2465]: I0905 00:31:47.641571 2465 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 5 00:31:48.256892 kubelet[2465]: E0905 00:31:48.256827 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:31:48.256892 kubelet[2465]: E0905 00:31:48.256878 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:31:48.273973 systemd[1]: Created slice kubepods-besteffort-poddf8b0307_7fd7_436f_a02b_2782861fd061.slice - libcontainer container kubepods-besteffort-poddf8b0307_7fd7_436f_a02b_2782861fd061.slice.
Sep 5 00:31:48.353606 kubelet[2465]: I0905 00:31:48.353547 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/df8b0307-7fd7-436f-a02b-2782861fd061-xtables-lock\") pod \"kube-proxy-5jbd8\" (UID: \"df8b0307-7fd7-436f-a02b-2782861fd061\") " pod="kube-system/kube-proxy-5jbd8"
Sep 5 00:31:48.353606 kubelet[2465]: I0905 00:31:48.353594 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrc5n\" (UniqueName: \"kubernetes.io/projected/df8b0307-7fd7-436f-a02b-2782861fd061-kube-api-access-rrc5n\") pod \"kube-proxy-5jbd8\" (UID: \"df8b0307-7fd7-436f-a02b-2782861fd061\") " pod="kube-system/kube-proxy-5jbd8"
Sep 5 00:31:48.353606 kubelet[2465]: I0905 00:31:48.353617 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/df8b0307-7fd7-436f-a02b-2782861fd061-kube-proxy\") pod \"kube-proxy-5jbd8\" (UID: \"df8b0307-7fd7-436f-a02b-2782861fd061\") " pod="kube-system/kube-proxy-5jbd8"
Sep 5 00:31:48.353789 kubelet[2465]: I0905 00:31:48.353637 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/df8b0307-7fd7-436f-a02b-2782861fd061-lib-modules\") pod \"kube-proxy-5jbd8\" (UID: \"df8b0307-7fd7-436f-a02b-2782861fd061\") " pod="kube-system/kube-proxy-5jbd8"
Sep 5 00:31:48.364440 kubelet[2465]: E0905 00:31:48.364405 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:31:48.464811 kubelet[2465]: E0905 00:31:48.464759 2465 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Sep 5 00:31:48.464811 kubelet[2465]: E0905 00:31:48.464795 2465 projected.go:194] Error preparing data for projected volume kube-api-access-rrc5n for pod kube-system/kube-proxy-5jbd8: configmap "kube-root-ca.crt" not found
Sep 5 00:31:48.464989 kubelet[2465]: E0905 00:31:48.464846 2465 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/df8b0307-7fd7-436f-a02b-2782861fd061-kube-api-access-rrc5n podName:df8b0307-7fd7-436f-a02b-2782861fd061 nodeName:}" failed. No retries permitted until 2025-09-05 00:31:48.964825428 +0000 UTC m=+7.835875682 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-rrc5n" (UniqueName: "kubernetes.io/projected/df8b0307-7fd7-436f-a02b-2782861fd061-kube-api-access-rrc5n") pod "kube-proxy-5jbd8" (UID: "df8b0307-7fd7-436f-a02b-2782861fd061") : configmap "kube-root-ca.crt" not found
Sep 5 00:31:48.684682 systemd[1]: Created slice kubepods-besteffort-pod2ca0f530_341d_4a9b_91d7_2530c099f0bb.slice - libcontainer container kubepods-besteffort-pod2ca0f530_341d_4a9b_91d7_2530c099f0bb.slice.
Sep 5 00:31:48.755599 kubelet[2465]: I0905 00:31:48.755557 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2ca0f530-341d-4a9b-91d7-2530c099f0bb-var-lib-calico\") pod \"tigera-operator-58fc44c59b-xlpg5\" (UID: \"2ca0f530-341d-4a9b-91d7-2530c099f0bb\") " pod="tigera-operator/tigera-operator-58fc44c59b-xlpg5"
Sep 5 00:31:48.755599 kubelet[2465]: I0905 00:31:48.755593 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnsc8\" (UniqueName: \"kubernetes.io/projected/2ca0f530-341d-4a9b-91d7-2530c099f0bb-kube-api-access-tnsc8\") pod \"tigera-operator-58fc44c59b-xlpg5\" (UID: \"2ca0f530-341d-4a9b-91d7-2530c099f0bb\") " pod="tigera-operator/tigera-operator-58fc44c59b-xlpg5"
Sep 5 00:31:48.987813 containerd[1446]: time="2025-09-05T00:31:48.987707378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-xlpg5,Uid:2ca0f530-341d-4a9b-91d7-2530c099f0bb,Namespace:tigera-operator,Attempt:0,}"
Sep 5 00:31:49.006876 containerd[1446]: time="2025-09-05T00:31:49.006719588Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 5 00:31:49.006876 containerd[1446]: time="2025-09-05T00:31:49.006818784Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 5 00:31:49.006876 containerd[1446]: time="2025-09-05T00:31:49.006852596Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 00:31:49.007437 containerd[1446]: time="2025-09-05T00:31:49.006948156Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 00:31:49.028057 systemd[1]: Started cri-containerd-bc276e27f396bfb9a00dda82efb43d27fc042e745c0736430312b0ce448526b7.scope - libcontainer container bc276e27f396bfb9a00dda82efb43d27fc042e745c0736430312b0ce448526b7.
Sep 5 00:31:49.052974 containerd[1446]: time="2025-09-05T00:31:49.052917934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-xlpg5,Uid:2ca0f530-341d-4a9b-91d7-2530c099f0bb,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"bc276e27f396bfb9a00dda82efb43d27fc042e745c0736430312b0ce448526b7\""
Sep 5 00:31:49.054549 containerd[1446]: time="2025-09-05T00:31:49.054520028Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\""
Sep 5 00:31:49.180178 kubelet[2465]: E0905 00:31:49.179938 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:31:49.180364 containerd[1446]: time="2025-09-05T00:31:49.180308348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5jbd8,Uid:df8b0307-7fd7-436f-a02b-2782861fd061,Namespace:kube-system,Attempt:0,}"
Sep 5 00:31:49.199677 containerd[1446]: time="2025-09-05T00:31:49.199579557Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 5 00:31:49.200193 containerd[1446]: time="2025-09-05T00:31:49.200105316Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 5 00:31:49.200193 containerd[1446]: time="2025-09-05T00:31:49.200131334Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 00:31:49.200317 containerd[1446]: time="2025-09-05T00:31:49.200258427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 00:31:49.222024 systemd[1]: Started cri-containerd-21bef344379d64385b32150887ca12dd3ed1278741f955995e62c19142b28025.scope - libcontainer container 21bef344379d64385b32150887ca12dd3ed1278741f955995e62c19142b28025.
Sep 5 00:31:49.239443 containerd[1446]: time="2025-09-05T00:31:49.239349425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5jbd8,Uid:df8b0307-7fd7-436f-a02b-2782861fd061,Namespace:kube-system,Attempt:0,} returns sandbox id \"21bef344379d64385b32150887ca12dd3ed1278741f955995e62c19142b28025\""
Sep 5 00:31:49.240110 kubelet[2465]: E0905 00:31:49.240068 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:31:49.243067 containerd[1446]: time="2025-09-05T00:31:49.242944045Z" level=info msg="CreateContainer within sandbox \"21bef344379d64385b32150887ca12dd3ed1278741f955995e62c19142b28025\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 5 00:31:49.254525 containerd[1446]: time="2025-09-05T00:31:49.254473638Z" level=info msg="CreateContainer within sandbox \"21bef344379d64385b32150887ca12dd3ed1278741f955995e62c19142b28025\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"49cc2f7e8188568dbab1f40a0afbd06c39c3ea50b0226505ee8db8b2dc5ed4a2\""
Sep 5 00:31:49.255202 containerd[1446]: time="2025-09-05T00:31:49.255142077Z" level=info msg="StartContainer for \"49cc2f7e8188568dbab1f40a0afbd06c39c3ea50b0226505ee8db8b2dc5ed4a2\""
Sep 5 00:31:49.263086 kubelet[2465]: E0905 00:31:49.262903 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:31:49.287021 systemd[1]: Started cri-containerd-49cc2f7e8188568dbab1f40a0afbd06c39c3ea50b0226505ee8db8b2dc5ed4a2.scope - libcontainer container 49cc2f7e8188568dbab1f40a0afbd06c39c3ea50b0226505ee8db8b2dc5ed4a2.
Sep 5 00:31:49.309023 containerd[1446]: time="2025-09-05T00:31:49.308975089Z" level=info msg="StartContainer for \"49cc2f7e8188568dbab1f40a0afbd06c39c3ea50b0226505ee8db8b2dc5ed4a2\" returns successfully"
Sep 5 00:31:50.154547 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3860654078.mount: Deactivated successfully.
Sep 5 00:31:50.266896 kubelet[2465]: E0905 00:31:50.266308 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:31:50.456250 containerd[1446]: time="2025-09-05T00:31:50.456144546Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:31:50.457311 containerd[1446]: time="2025-09-05T00:31:50.457139002Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365"
Sep 5 00:31:50.458251 containerd[1446]: time="2025-09-05T00:31:50.457994808Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:31:50.460076 containerd[1446]: time="2025-09-05T00:31:50.460046992Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:31:50.467739 containerd[1446]: time="2025-09-05T00:31:50.467704681Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 1.413149921s"
Sep 5 00:31:50.467739 containerd[1446]: time="2025-09-05T00:31:50.467740772Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\""
Sep 5 00:31:50.470725 containerd[1446]: time="2025-09-05T00:31:50.470674981Z" level=info msg="CreateContainer within sandbox \"bc276e27f396bfb9a00dda82efb43d27fc042e745c0736430312b0ce448526b7\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 5 00:31:50.520489 containerd[1446]: time="2025-09-05T00:31:50.520389267Z" level=info msg="CreateContainer within sandbox \"bc276e27f396bfb9a00dda82efb43d27fc042e745c0736430312b0ce448526b7\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"73e8abf4bd39b86393eea1a1c32d090ad1760d203079bf70015333b85f781c85\""
Sep 5 00:31:50.520792 containerd[1446]: time="2025-09-05T00:31:50.520758336Z" level=info msg="StartContainer for \"73e8abf4bd39b86393eea1a1c32d090ad1760d203079bf70015333b85f781c85\""
Sep 5 00:31:50.549042 systemd[1]: Started cri-containerd-73e8abf4bd39b86393eea1a1c32d090ad1760d203079bf70015333b85f781c85.scope - libcontainer container 73e8abf4bd39b86393eea1a1c32d090ad1760d203079bf70015333b85f781c85.
Sep 5 00:31:50.577116 containerd[1446]: time="2025-09-05T00:31:50.576935531Z" level=info msg="StartContainer for \"73e8abf4bd39b86393eea1a1c32d090ad1760d203079bf70015333b85f781c85\" returns successfully"
Sep 5 00:31:51.271754 kubelet[2465]: E0905 00:31:51.269613 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:31:51.279414 kubelet[2465]: I0905 00:31:51.278796 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5jbd8" podStartSLOduration=3.278763209 podStartE2EDuration="3.278763209s" podCreationTimestamp="2025-09-05 00:31:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:31:50.277199002 +0000 UTC m=+9.148249216" watchObservedRunningTime="2025-09-05 00:31:51.278763209 +0000 UTC m=+10.149813423"
Sep 5 00:31:51.279414 kubelet[2465]: I0905 00:31:51.278926 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-xlpg5" podStartSLOduration=1.864621383 podStartE2EDuration="3.278920853s" podCreationTimestamp="2025-09-05 00:31:48 +0000 UTC" firstStartedPulling="2025-09-05 00:31:49.054048784 +0000 UTC m=+7.925099038" lastFinishedPulling="2025-09-05 00:31:50.468348254 +0000 UTC m=+9.339398508" observedRunningTime="2025-09-05 00:31:51.278562438 +0000 UTC m=+10.149612692" watchObservedRunningTime="2025-09-05 00:31:51.278920853 +0000 UTC m=+10.149971107"
Sep 5 00:31:55.776719 sudo[1618]: pam_unix(sudo:session): session closed for user root
Sep 5 00:31:55.779987 sshd[1615]: pam_unix(sshd:session): session closed for user core
Sep 5 00:31:55.783673 systemd[1]: sshd@6-10.0.0.114:22-10.0.0.1:55246.service: Deactivated successfully.
Sep 5 00:31:55.789523 systemd[1]: session-7.scope: Deactivated successfully.
Sep 5 00:31:55.789695 systemd[1]: session-7.scope: Consumed 7.114s CPU time, 153.5M memory peak, 0B memory swap peak.
Sep 5 00:31:55.790967 systemd-logind[1422]: Session 7 logged out. Waiting for processes to exit.
Sep 5 00:31:55.794733 systemd-logind[1422]: Removed session 7.
Sep 5 00:31:57.146446 update_engine[1425]: I20250905 00:31:57.146364 1425 update_attempter.cc:509] Updating boot flags...
Sep 5 00:31:57.207382 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2874)
Sep 5 00:31:57.267885 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2876)
Sep 5 00:32:00.144405 systemd[1]: Created slice kubepods-besteffort-poda6b852fc_611d_4ac2_836f_5d2b20edcf18.slice - libcontainer container kubepods-besteffort-poda6b852fc_611d_4ac2_836f_5d2b20edcf18.slice.
Sep 5 00:32:00.235372 kubelet[2465]: I0905 00:32:00.235306 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a6b852fc-611d-4ac2-836f-5d2b20edcf18-tigera-ca-bundle\") pod \"calico-typha-59d4cb6bc-ljqv6\" (UID: \"a6b852fc-611d-4ac2-836f-5d2b20edcf18\") " pod="calico-system/calico-typha-59d4cb6bc-ljqv6"
Sep 5 00:32:00.235785 kubelet[2465]: I0905 00:32:00.235405 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a6b852fc-611d-4ac2-836f-5d2b20edcf18-typha-certs\") pod \"calico-typha-59d4cb6bc-ljqv6\" (UID: \"a6b852fc-611d-4ac2-836f-5d2b20edcf18\") " pod="calico-system/calico-typha-59d4cb6bc-ljqv6"
Sep 5 00:32:00.235785 kubelet[2465]: I0905 00:32:00.235442 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcxgk\" (UniqueName: \"kubernetes.io/projected/a6b852fc-611d-4ac2-836f-5d2b20edcf18-kube-api-access-pcxgk\") pod \"calico-typha-59d4cb6bc-ljqv6\" (UID: \"a6b852fc-611d-4ac2-836f-5d2b20edcf18\") " pod="calico-system/calico-typha-59d4cb6bc-ljqv6"
Sep 5 00:32:00.463912 kubelet[2465]: E0905 00:32:00.463774 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:32:00.465115 containerd[1446]: time="2025-09-05T00:32:00.465073245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-59d4cb6bc-ljqv6,Uid:a6b852fc-611d-4ac2-836f-5d2b20edcf18,Namespace:calico-system,Attempt:0,}"
Sep 5 00:32:00.476063 systemd[1]: Created slice kubepods-besteffort-pod129ac792_7a09_4e7c_8da5_76e45ec58664.slice - libcontainer container kubepods-besteffort-pod129ac792_7a09_4e7c_8da5_76e45ec58664.slice.
Sep 5 00:32:00.495327 containerd[1446]: time="2025-09-05T00:32:00.495159862Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 5 00:32:00.495327 containerd[1446]: time="2025-09-05T00:32:00.495230592Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 5 00:32:00.495327 containerd[1446]: time="2025-09-05T00:32:00.495243027Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 00:32:00.495776 containerd[1446]: time="2025-09-05T00:32:00.495561736Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 00:32:00.515041 systemd[1]: Started cri-containerd-4f1bfaafaadc000e95fb0783cbd80ebaafdb1e18f8a04a0bf37744ac93fda1de.scope - libcontainer container 4f1bfaafaadc000e95fb0783cbd80ebaafdb1e18f8a04a0bf37744ac93fda1de.
Sep 5 00:32:00.542823 containerd[1446]: time="2025-09-05T00:32:00.542785477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-59d4cb6bc-ljqv6,Uid:a6b852fc-611d-4ac2-836f-5d2b20edcf18,Namespace:calico-system,Attempt:0,} returns sandbox id \"4f1bfaafaadc000e95fb0783cbd80ebaafdb1e18f8a04a0bf37744ac93fda1de\""
Sep 5 00:32:00.543815 kubelet[2465]: E0905 00:32:00.543780 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:32:00.545443 containerd[1446]: time="2025-09-05T00:32:00.545415351Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\""
Sep 5 00:32:00.638435 kubelet[2465]: I0905 00:32:00.638398 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/129ac792-7a09-4e7c-8da5-76e45ec58664-lib-modules\") pod \"calico-node-gv7d9\" (UID: \"129ac792-7a09-4e7c-8da5-76e45ec58664\") " pod="calico-system/calico-node-gv7d9"
Sep 5 00:32:00.638435 kubelet[2465]: I0905 00:32:00.638447 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/129ac792-7a09-4e7c-8da5-76e45ec58664-cni-log-dir\") pod \"calico-node-gv7d9\" (UID: \"129ac792-7a09-4e7c-8da5-76e45ec58664\") " pod="calico-system/calico-node-gv7d9"
Sep 5 00:32:00.638606 kubelet[2465]: I0905 00:32:00.638468 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/129ac792-7a09-4e7c-8da5-76e45ec58664-policysync\") pod \"calico-node-gv7d9\" (UID: \"129ac792-7a09-4e7c-8da5-76e45ec58664\") " pod="calico-system/calico-node-gv7d9"
Sep 5 00:32:00.638606 kubelet[2465]: I0905 00:32:00.638486 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/129ac792-7a09-4e7c-8da5-76e45ec58664-tigera-ca-bundle\") pod \"calico-node-gv7d9\" (UID: \"129ac792-7a09-4e7c-8da5-76e45ec58664\") " pod="calico-system/calico-node-gv7d9"
Sep 5 00:32:00.638606 kubelet[2465]: I0905 00:32:00.638502 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/129ac792-7a09-4e7c-8da5-76e45ec58664-xtables-lock\") pod \"calico-node-gv7d9\" (UID: \"129ac792-7a09-4e7c-8da5-76e45ec58664\") " pod="calico-system/calico-node-gv7d9"
Sep 5 00:32:00.638606 kubelet[2465]: I0905 00:32:00.638531 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fq5lw\" (UniqueName: \"kubernetes.io/projected/129ac792-7a09-4e7c-8da5-76e45ec58664-kube-api-access-fq5lw\") pod \"calico-node-gv7d9\" (UID: \"129ac792-7a09-4e7c-8da5-76e45ec58664\") " pod="calico-system/calico-node-gv7d9"
Sep 5 00:32:00.638606 kubelet[2465]: I0905 00:32:00.638552 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/129ac792-7a09-4e7c-8da5-76e45ec58664-cni-bin-dir\") pod \"calico-node-gv7d9\" (UID: \"129ac792-7a09-4e7c-8da5-76e45ec58664\") " pod="calico-system/calico-node-gv7d9"
Sep 5 00:32:00.638720 kubelet[2465]: I0905 00:32:00.638568 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/129ac792-7a09-4e7c-8da5-76e45ec58664-cni-net-dir\") pod \"calico-node-gv7d9\" (UID: \"129ac792-7a09-4e7c-8da5-76e45ec58664\") " pod="calico-system/calico-node-gv7d9"
Sep 5 00:32:00.638720 kubelet[2465]: I0905 00:32:00.638585 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/129ac792-7a09-4e7c-8da5-76e45ec58664-var-run-calico\") pod \"calico-node-gv7d9\" (UID: \"129ac792-7a09-4e7c-8da5-76e45ec58664\") " pod="calico-system/calico-node-gv7d9"
Sep 5 00:32:00.638720 kubelet[2465]: I0905 00:32:00.638608 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/129ac792-7a09-4e7c-8da5-76e45ec58664-flexvol-driver-host\") pod \"calico-node-gv7d9\" (UID: \"129ac792-7a09-4e7c-8da5-76e45ec58664\") " pod="calico-system/calico-node-gv7d9"
Sep 5 00:32:00.638720 kubelet[2465]: I0905 00:32:00.638629 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/129ac792-7a09-4e7c-8da5-76e45ec58664-node-certs\") pod \"calico-node-gv7d9\" (UID: \"129ac792-7a09-4e7c-8da5-76e45ec58664\") " pod="calico-system/calico-node-gv7d9"
Sep 5 00:32:00.638720 kubelet[2465]: I0905 00:32:00.638645 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/129ac792-7a09-4e7c-8da5-76e45ec58664-var-lib-calico\") pod \"calico-node-gv7d9\" (UID: \"129ac792-7a09-4e7c-8da5-76e45ec58664\") " pod="calico-system/calico-node-gv7d9"
Sep 5 00:32:00.709885 kubelet[2465]: E0905 00:32:00.709625 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7wt2m" podUID="7cca8ed9-cabd-4207-ad70-24ca48f24180"
Sep 5 00:32:00.739687 kubelet[2465]: I0905 00:32:00.739565 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7cca8ed9-cabd-4207-ad70-24ca48f24180-kubelet-dir\") pod \"csi-node-driver-7wt2m\" (UID: \"7cca8ed9-cabd-4207-ad70-24ca48f24180\") " pod="calico-system/csi-node-driver-7wt2m"
Sep 5 00:32:00.739783 kubelet[2465]: I0905 00:32:00.739695 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7cca8ed9-cabd-4207-ad70-24ca48f24180-socket-dir\") pod \"csi-node-driver-7wt2m\" (UID: \"7cca8ed9-cabd-4207-ad70-24ca48f24180\") " pod="calico-system/csi-node-driver-7wt2m"
Sep 5 00:32:00.739783 kubelet[2465]: I0905 00:32:00.739737 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7cca8ed9-cabd-4207-ad70-24ca48f24180-varrun\") pod \"csi-node-driver-7wt2m\" (UID: \"7cca8ed9-cabd-4207-ad70-24ca48f24180\") " pod="calico-system/csi-node-driver-7wt2m"
Sep 5 00:32:00.739848 kubelet[2465]: I0905 00:32:00.739785 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psfsx\" (UniqueName: \"kubernetes.io/projected/7cca8ed9-cabd-4207-ad70-24ca48f24180-kube-api-access-psfsx\") pod \"csi-node-driver-7wt2m\" (UID: \"7cca8ed9-cabd-4207-ad70-24ca48f24180\") " pod="calico-system/csi-node-driver-7wt2m"
Sep 5 00:32:00.739848 kubelet[2465]: I0905 00:32:00.739832 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7cca8ed9-cabd-4207-ad70-24ca48f24180-registration-dir\") pod \"csi-node-driver-7wt2m\" (UID: \"7cca8ed9-cabd-4207-ad70-24ca48f24180\") " pod="calico-system/csi-node-driver-7wt2m"
Sep 5 00:32:00.745796 kubelet[2465]: E0905 00:32:00.745706 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 00:32:00.745796 kubelet[2465]: W0905 00:32:00.745728 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 00:32:00.745796 kubelet[2465]: E0905 00:32:00.745755 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 00:32:00.746036 kubelet[2465]: E0905 00:32:00.745953 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 00:32:00.746036 kubelet[2465]: W0905 00:32:00.745962 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 00:32:00.746036 kubelet[2465]: E0905 00:32:00.745972 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 00:32:00.750795 kubelet[2465]: E0905 00:32:00.750769 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 00:32:00.750795 kubelet[2465]: W0905 00:32:00.750789 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 00:32:00.750903 kubelet[2465]: E0905 00:32:00.750806 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 00:32:00.762300 kubelet[2465]: E0905 00:32:00.762280 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 00:32:00.762300 kubelet[2465]: W0905 00:32:00.762296 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 00:32:00.762409 kubelet[2465]: E0905 00:32:00.762314 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Sep 5 00:32:00.784532 containerd[1446]: time="2025-09-05T00:32:00.784179083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gv7d9,Uid:129ac792-7a09-4e7c-8da5-76e45ec58664,Namespace:calico-system,Attempt:0,}" Sep 5 00:32:00.818760 containerd[1446]: time="2025-09-05T00:32:00.818510148Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:32:00.818760 containerd[1446]: time="2025-09-05T00:32:00.818571682Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:32:00.818760 containerd[1446]: time="2025-09-05T00:32:00.818583398Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:32:00.818760 containerd[1446]: time="2025-09-05T00:32:00.818672041Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:32:00.844729 kubelet[2465]: E0905 00:32:00.840969 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:32:00.844729 kubelet[2465]: W0905 00:32:00.840987 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:32:00.844729 kubelet[2465]: E0905 00:32:00.841005 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:32:00.844729 kubelet[2465]: E0905 00:32:00.841214 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:32:00.844729 kubelet[2465]: W0905 00:32:00.841224 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:32:00.844729 kubelet[2465]: E0905 00:32:00.841235 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:32:00.844729 kubelet[2465]: E0905 00:32:00.843667 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:32:00.844729 kubelet[2465]: W0905 00:32:00.843680 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:32:00.844729 kubelet[2465]: E0905 00:32:00.843700 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:32:00.844729 kubelet[2465]: E0905 00:32:00.843965 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:32:00.845054 kubelet[2465]: W0905 00:32:00.843974 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:32:00.845054 kubelet[2465]: E0905 00:32:00.844033 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:32:00.847234 kubelet[2465]: E0905 00:32:00.845203 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:32:00.847234 kubelet[2465]: W0905 00:32:00.845216 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:32:00.847234 kubelet[2465]: E0905 00:32:00.845904 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:32:00.851770 kubelet[2465]: E0905 00:32:00.851230 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:32:00.851770 kubelet[2465]: W0905 00:32:00.851252 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:32:00.851770 kubelet[2465]: E0905 00:32:00.851314 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:32:00.851770 kubelet[2465]: E0905 00:32:00.851734 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:32:00.854271 kubelet[2465]: W0905 00:32:00.851746 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:32:00.854440 kubelet[2465]: E0905 00:32:00.854375 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:32:00.854721 kubelet[2465]: E0905 00:32:00.854570 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:32:00.854721 kubelet[2465]: W0905 00:32:00.854619 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:32:00.854721 kubelet[2465]: E0905 00:32:00.854703 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:32:00.856041 kubelet[2465]: E0905 00:32:00.855922 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:32:00.856041 kubelet[2465]: W0905 00:32:00.855959 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:32:00.856207 kubelet[2465]: E0905 00:32:00.856027 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:32:00.856365 kubelet[2465]: E0905 00:32:00.856352 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:32:00.856498 kubelet[2465]: W0905 00:32:00.856435 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:32:00.856498 kubelet[2465]: E0905 00:32:00.856474 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:32:00.856981 kubelet[2465]: E0905 00:32:00.856800 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:32:00.856981 kubelet[2465]: W0905 00:32:00.856813 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:32:00.856981 kubelet[2465]: E0905 00:32:00.856887 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:32:00.857674 kubelet[2465]: E0905 00:32:00.857547 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:32:00.857674 kubelet[2465]: W0905 00:32:00.857563 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:32:00.857674 kubelet[2465]: E0905 00:32:00.857610 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:32:00.858494 systemd[1]: Started cri-containerd-bb355bf4315f3951ace3a2f9ed645365469e0c19a2826e13e891ecf59609e614.scope - libcontainer container bb355bf4315f3951ace3a2f9ed645365469e0c19a2826e13e891ecf59609e614. 
Sep 5 00:32:00.859591 kubelet[2465]: E0905 00:32:00.859220 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:32:00.859591 kubelet[2465]: W0905 00:32:00.859236 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:32:00.859591 kubelet[2465]: E0905 00:32:00.859290 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:32:00.859884 kubelet[2465]: E0905 00:32:00.859780 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:32:00.859884 kubelet[2465]: W0905 00:32:00.859812 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:32:00.859884 kubelet[2465]: E0905 00:32:00.859851 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:32:00.860674 kubelet[2465]: E0905 00:32:00.860535 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:32:00.860674 kubelet[2465]: W0905 00:32:00.860559 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:32:00.860674 kubelet[2465]: E0905 00:32:00.860605 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:32:00.860961 kubelet[2465]: E0905 00:32:00.860853 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:32:00.860961 kubelet[2465]: W0905 00:32:00.860885 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:32:00.861074 kubelet[2465]: E0905 00:32:00.861062 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:32:00.861137 kubelet[2465]: W0905 00:32:00.861125 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:32:00.861596 kubelet[2465]: E0905 00:32:00.861529 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:32:00.861788 kubelet[2465]: E0905 00:32:00.861707 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:32:00.862145 kubelet[2465]: W0905 00:32:00.861994 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:32:00.862145 kubelet[2465]: E0905 00:32:00.862044 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:32:00.862684 kubelet[2465]: E0905 00:32:00.862546 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:32:00.862963 kubelet[2465]: E0905 00:32:00.862936 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:32:00.862963 kubelet[2465]: W0905 00:32:00.862956 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:32:00.863039 kubelet[2465]: E0905 00:32:00.862993 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:32:00.863886 kubelet[2465]: E0905 00:32:00.863170 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:32:00.863886 kubelet[2465]: W0905 00:32:00.863182 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:32:00.863886 kubelet[2465]: E0905 00:32:00.863206 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:32:00.863886 kubelet[2465]: E0905 00:32:00.863367 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:32:00.863886 kubelet[2465]: W0905 00:32:00.863376 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:32:00.863886 kubelet[2465]: E0905 00:32:00.863396 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:32:00.863886 kubelet[2465]: E0905 00:32:00.863651 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:32:00.863886 kubelet[2465]: W0905 00:32:00.863663 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:32:00.863886 kubelet[2465]: E0905 00:32:00.863692 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:32:00.863886 kubelet[2465]: E0905 00:32:00.863838 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:32:00.864161 kubelet[2465]: W0905 00:32:00.863847 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:32:00.864161 kubelet[2465]: E0905 00:32:00.863952 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:32:00.864161 kubelet[2465]: E0905 00:32:00.864031 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:32:00.864161 kubelet[2465]: W0905 00:32:00.864039 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:32:00.864161 kubelet[2465]: E0905 00:32:00.864048 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:32:00.864492 kubelet[2465]: E0905 00:32:00.864473 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:32:00.864492 kubelet[2465]: W0905 00:32:00.864490 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:32:00.864576 kubelet[2465]: E0905 00:32:00.864501 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:32:00.878848 kubelet[2465]: E0905 00:32:00.878768 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:32:00.878848 kubelet[2465]: W0905 00:32:00.878791 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:32:00.878848 kubelet[2465]: E0905 00:32:00.878810 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:32:00.884370 containerd[1446]: time="2025-09-05T00:32:00.884333689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gv7d9,Uid:129ac792-7a09-4e7c-8da5-76e45ec58664,Namespace:calico-system,Attempt:0,} returns sandbox id \"bb355bf4315f3951ace3a2f9ed645365469e0c19a2826e13e891ecf59609e614\"" Sep 5 00:32:02.162356 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1473423869.mount: Deactivated successfully. Sep 5 00:32:02.229584 kubelet[2465]: E0905 00:32:02.229155 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7wt2m" podUID="7cca8ed9-cabd-4207-ad70-24ca48f24180" Sep 5 00:32:02.475431 containerd[1446]: time="2025-09-05T00:32:02.475321706Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:32:02.476051 containerd[1446]: time="2025-09-05T00:32:02.475784618Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33105775" Sep 5 00:32:02.477256 containerd[1446]: time="2025-09-05T00:32:02.477227455Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:32:02.479098 containerd[1446]: time="2025-09-05T00:32:02.479063868Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:32:02.479942 containerd[1446]: time="2025-09-05T00:32:02.479905803Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 1.934454427s" Sep 5 00:32:02.479986 containerd[1446]: time="2025-09-05T00:32:02.479941829Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\"" Sep 5 00:32:02.481293 containerd[1446]: time="2025-09-05T00:32:02.481197854Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 5 00:32:02.490799 containerd[1446]: time="2025-09-05T00:32:02.490758344Z" level=info msg="CreateContainer within sandbox \"4f1bfaafaadc000e95fb0783cbd80ebaafdb1e18f8a04a0bf37744ac93fda1de\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 5 00:32:02.567519 containerd[1446]: time="2025-09-05T00:32:02.567430161Z" level=info msg="CreateContainer within sandbox \"4f1bfaafaadc000e95fb0783cbd80ebaafdb1e18f8a04a0bf37744ac93fda1de\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"675446ab48b21ca094bb5e9324f589ea2ce344844ae126d83d75efe642af3557\"" Sep 5 00:32:02.568032 containerd[1446]: time="2025-09-05T00:32:02.567932099Z" level=info msg="StartContainer for \"675446ab48b21ca094bb5e9324f589ea2ce344844ae126d83d75efe642af3557\"" Sep 5 00:32:02.601036 systemd[1]: Started 
cri-containerd-675446ab48b21ca094bb5e9324f589ea2ce344844ae126d83d75efe642af3557.scope - libcontainer container 675446ab48b21ca094bb5e9324f589ea2ce344844ae126d83d75efe642af3557. Sep 5 00:32:02.631648 containerd[1446]: time="2025-09-05T00:32:02.631604553Z" level=info msg="StartContainer for \"675446ab48b21ca094bb5e9324f589ea2ce344844ae126d83d75efe642af3557\" returns successfully" Sep 5 00:32:03.296074 kubelet[2465]: E0905 00:32:03.296026 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:32:03.306191 kubelet[2465]: I0905 00:32:03.306135 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-59d4cb6bc-ljqv6" podStartSLOduration=1.370073066 podStartE2EDuration="3.306121252s" podCreationTimestamp="2025-09-05 00:32:00 +0000 UTC" firstStartedPulling="2025-09-05 00:32:00.544999682 +0000 UTC m=+19.416049936" lastFinishedPulling="2025-09-05 00:32:02.481047868 +0000 UTC m=+21.352098122" observedRunningTime="2025-09-05 00:32:03.305580916 +0000 UTC m=+22.176631170" watchObservedRunningTime="2025-09-05 00:32:03.306121252 +0000 UTC m=+22.177171506" Sep 5 00:32:03.358111 kubelet[2465]: E0905 00:32:03.358082 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:32:03.358111 kubelet[2465]: W0905 00:32:03.358105 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:32:03.358234 kubelet[2465]: E0905 00:32:03.358123 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:32:03.358281 kubelet[2465]: E0905 00:32:03.358268 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:32:03.358281 kubelet[2465]: W0905 00:32:03.358279 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:32:03.358331 kubelet[2465]: E0905 00:32:03.358287 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:32:03.358429 kubelet[2465]: E0905 00:32:03.358416 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:32:03.358429 kubelet[2465]: W0905 00:32:03.358426 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:32:03.358506 kubelet[2465]: E0905 00:32:03.358433 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:32:03.358585 kubelet[2465]: E0905 00:32:03.358573 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:32:03.358585 kubelet[2465]: W0905 00:32:03.358583 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:32:03.358650 kubelet[2465]: E0905 00:32:03.358593 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:32:03.358742 kubelet[2465]: E0905 00:32:03.358730 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:32:03.358742 kubelet[2465]: W0905 00:32:03.358740 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:32:03.358801 kubelet[2465]: E0905 00:32:03.358747 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:32:03.358917 kubelet[2465]: E0905 00:32:03.358906 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:32:03.358917 kubelet[2465]: W0905 00:32:03.358917 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:32:03.358974 kubelet[2465]: E0905 00:32:03.358925 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:32:03.359061 kubelet[2465]: E0905 00:32:03.359049 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:32:03.359061 kubelet[2465]: W0905 00:32:03.359059 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:32:03.359120 kubelet[2465]: E0905 00:32:03.359066 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:32:03.359235 kubelet[2465]: E0905 00:32:03.359196 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:32:03.359235 kubelet[2465]: W0905 00:32:03.359203 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:32:03.359235 kubelet[2465]: E0905 00:32:03.359210 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:32:03.359626 kubelet[2465]: E0905 00:32:03.359376 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:32:03.359626 kubelet[2465]: W0905 00:32:03.359389 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:32:03.359626 kubelet[2465]: E0905 00:32:03.359398 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:32:03.359626 kubelet[2465]: E0905 00:32:03.359566 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:32:03.359626 kubelet[2465]: W0905 00:32:03.359574 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:32:03.359626 kubelet[2465]: E0905 00:32:03.359583 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:32:03.360151 kubelet[2465]: E0905 00:32:03.359762 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:32:03.360151 kubelet[2465]: W0905 00:32:03.359770 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:32:03.360151 kubelet[2465]: E0905 00:32:03.359778 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:32:03.360151 kubelet[2465]: E0905 00:32:03.359947 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:32:03.360151 kubelet[2465]: W0905 00:32:03.359963 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:32:03.360151 kubelet[2465]: E0905 00:32:03.359972 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:32:03.360151 kubelet[2465]: E0905 00:32:03.360131 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:32:03.360151 kubelet[2465]: W0905 00:32:03.360140 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:32:03.360151 kubelet[2465]: E0905 00:32:03.360148 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:32:03.360435 kubelet[2465]: E0905 00:32:03.360290 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:32:03.360435 kubelet[2465]: W0905 00:32:03.360298 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:32:03.360435 kubelet[2465]: E0905 00:32:03.360305 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:32:03.360435 kubelet[2465]: E0905 00:32:03.360433 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:32:03.360567 kubelet[2465]: W0905 00:32:03.360440 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:32:03.360567 kubelet[2465]: E0905 00:32:03.360447 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:32:03.371325 kubelet[2465]: E0905 00:32:03.371304 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:32:03.371325 kubelet[2465]: W0905 00:32:03.371322 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:32:03.371439 kubelet[2465]: E0905 00:32:03.371337 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:32:03.371555 kubelet[2465]: E0905 00:32:03.371542 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:32:03.371555 kubelet[2465]: W0905 00:32:03.371554 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:32:03.371616 kubelet[2465]: E0905 00:32:03.371571 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:32:03.371824 kubelet[2465]: E0905 00:32:03.371807 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:32:03.371853 kubelet[2465]: W0905 00:32:03.371824 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:32:03.371947 kubelet[2465]: E0905 00:32:03.371932 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:32:03.372069 kubelet[2465]: E0905 00:32:03.372055 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:32:03.372097 kubelet[2465]: W0905 00:32:03.372068 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:32:03.372097 kubelet[2465]: E0905 00:32:03.372088 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:32:03.372350 kubelet[2465]: E0905 00:32:03.372337 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:32:03.372350 kubelet[2465]: W0905 00:32:03.372349 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:32:03.372420 kubelet[2465]: E0905 00:32:03.372368 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:32:03.372624 kubelet[2465]: E0905 00:32:03.372592 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:32:03.372624 kubelet[2465]: W0905 00:32:03.372605 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:32:03.372624 kubelet[2465]: E0905 00:32:03.372618 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:32:03.373326 kubelet[2465]: E0905 00:32:03.373219 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:32:03.373326 kubelet[2465]: W0905 00:32:03.373237 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:32:03.373454 kubelet[2465]: E0905 00:32:03.373424 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:32:03.373530 kubelet[2465]: E0905 00:32:03.373517 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:32:03.373530 kubelet[2465]: W0905 00:32:03.373528 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:32:03.373643 kubelet[2465]: E0905 00:32:03.373609 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:32:03.373746 kubelet[2465]: E0905 00:32:03.373732 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:32:03.373746 kubelet[2465]: W0905 00:32:03.373743 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:32:03.373820 kubelet[2465]: E0905 00:32:03.373803 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:32:03.373955 kubelet[2465]: E0905 00:32:03.373944 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:32:03.373955 kubelet[2465]: W0905 00:32:03.373954 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:32:03.374012 kubelet[2465]: E0905 00:32:03.373971 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:32:03.374395 kubelet[2465]: E0905 00:32:03.374382 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:32:03.374424 kubelet[2465]: W0905 00:32:03.374395 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:32:03.374424 kubelet[2465]: E0905 00:32:03.374409 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:32:03.374594 kubelet[2465]: E0905 00:32:03.374584 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:32:03.374594 kubelet[2465]: W0905 00:32:03.374593 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:32:03.374646 kubelet[2465]: E0905 00:32:03.374602 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:32:03.374765 kubelet[2465]: E0905 00:32:03.374754 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:32:03.374765 kubelet[2465]: W0905 00:32:03.374765 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:32:03.374873 kubelet[2465]: E0905 00:32:03.374777 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:32:03.375089 kubelet[2465]: E0905 00:32:03.375076 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:32:03.375089 kubelet[2465]: W0905 00:32:03.375089 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:32:03.375153 kubelet[2465]: E0905 00:32:03.375118 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:32:03.375236 kubelet[2465]: E0905 00:32:03.375225 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:32:03.375236 kubelet[2465]: W0905 00:32:03.375235 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:32:03.375308 kubelet[2465]: E0905 00:32:03.375296 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:32:03.375376 kubelet[2465]: E0905 00:32:03.375367 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:32:03.375405 kubelet[2465]: W0905 00:32:03.375376 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:32:03.375405 kubelet[2465]: E0905 00:32:03.375386 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:32:03.375594 kubelet[2465]: E0905 00:32:03.375583 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:32:03.375594 kubelet[2465]: W0905 00:32:03.375593 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:32:03.375661 kubelet[2465]: E0905 00:32:03.375601 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:32:03.375881 kubelet[2465]: E0905 00:32:03.375869 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:32:03.375881 kubelet[2465]: W0905 00:32:03.375880 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:32:03.375936 kubelet[2465]: E0905 00:32:03.375890 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:32:04.229225 kubelet[2465]: E0905 00:32:04.229178 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7wt2m" podUID="7cca8ed9-cabd-4207-ad70-24ca48f24180" Sep 5 00:32:04.298068 kubelet[2465]: I0905 00:32:04.297436 2465 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 00:32:04.298068 kubelet[2465]: E0905 00:32:04.297745 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:32:04.368422 kubelet[2465]: E0905 00:32:04.368390 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:32:04.368422 kubelet[2465]: W0905 00:32:04.368416 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:32:04.368579 kubelet[2465]: E0905 00:32:04.368437 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:32:04.368644 kubelet[2465]: E0905 00:32:04.368633 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:32:04.368644 kubelet[2465]: W0905 00:32:04.368643 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:32:04.368706 kubelet[2465]: E0905 00:32:04.368652 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:32:04.368809 kubelet[2465]: E0905 00:32:04.368799 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:32:04.368809 kubelet[2465]: W0905 00:32:04.368809 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:32:04.368879 kubelet[2465]: E0905 00:32:04.368817 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:32:04.368975 kubelet[2465]: E0905 00:32:04.368965 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:32:04.369005 kubelet[2465]: W0905 00:32:04.368977 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:32:04.369005 kubelet[2465]: E0905 00:32:04.368985 2465 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:32:05.003570 containerd[1446]: time="2025-09-05T00:32:05.003149650Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:32:05.003570 containerd[1446]: time="2025-09-05T00:32:05.003519619Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4266814" Sep 5 00:32:05.004415 containerd[1446]: time="2025-09-05T00:32:05.004386720Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:32:05.006557 containerd[1446]: time="2025-09-05T00:32:05.006361730Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:32:05.007112 containerd[1446]: time="2025-09-05T00:32:05.007005617Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 2.525778973s" Sep 5 00:32:05.007112 containerd[1446]: time="2025-09-05T00:32:05.007039327Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\"" Sep 5 00:32:05.008852 containerd[1446]: time="2025-09-05T00:32:05.008752775Z" level=info msg="CreateContainer within sandbox \"bb355bf4315f3951ace3a2f9ed645365469e0c19a2826e13e891ecf59609e614\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 5 00:32:05.020296 containerd[1446]: time="2025-09-05T00:32:05.020258535Z" level=info msg="CreateContainer within sandbox \"bb355bf4315f3951ace3a2f9ed645365469e0c19a2826e13e891ecf59609e614\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"98445b727131e166fb6626a87f0b97bab824cae5232f688715e876e105fdc380\"" Sep 5 00:32:05.021727 containerd[1446]: time="2025-09-05T00:32:05.021702503Z" level=info msg="StartContainer for \"98445b727131e166fb6626a87f0b97bab824cae5232f688715e876e105fdc380\"" Sep 5 00:32:05.058026 systemd[1]: Started cri-containerd-98445b727131e166fb6626a87f0b97bab824cae5232f688715e876e105fdc380.scope - libcontainer container 98445b727131e166fb6626a87f0b97bab824cae5232f688715e876e105fdc380. Sep 5 00:32:05.089465 containerd[1446]: time="2025-09-05T00:32:05.089399582Z" level=info msg="StartContainer for \"98445b727131e166fb6626a87f0b97bab824cae5232f688715e876e105fdc380\" returns successfully" Sep 5 00:32:05.096588 systemd[1]: cri-containerd-98445b727131e166fb6626a87f0b97bab824cae5232f688715e876e105fdc380.scope: Deactivated successfully. Sep 5 00:32:05.118087 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-98445b727131e166fb6626a87f0b97bab824cae5232f688715e876e105fdc380-rootfs.mount: Deactivated successfully. 
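The flexvol-driver container started above installs the very binary whose absence produced the repeated driver-call.go failures earlier in this log: kubelet executes /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with a subcommand such as init and unmarshals a JSON status object from its stdout, so a missing or silent executable yields exactly the "unexpected end of JSON input" errors quoted above. Below is a minimal sketch of that call contract in Go, assuming only the documented FlexVolume response shape; it is an illustration, not the real nodeagent~uds driver.

package main

// A stand-in FlexVolume driver illustrating the stdout contract that
// kubelet's driver-call.go parses. Every invocation must end with a JSON
// status object; printing nothing is what produces the
// "unexpected end of JSON input" errors seen in this journal.

import (
	"encoding/json"
	"fmt"
	"os"
)

type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func reply(s driverStatus) {
	out, _ := json.Marshal(s)
	fmt.Println(string(out))
}

func main() {
	if len(os.Args) < 2 {
		reply(driverStatus{Status: "Failure", Message: "no command given"})
		os.Exit(1)
	}
	switch os.Args[1] {
	case "init":
		// Advertise no attach/detach support so kubelet skips those calls.
		reply(driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}})
	default:
		// Unimplemented calls still have to emit parseable JSON.
		reply(driverStatus{Status: "Not supported"})
		os.Exit(1)
	}
}

Built and invoked as "uds init", the sketch prints {"status":"Success","capabilities":{"attach":false}}, which is the kind of output a successful probe of the nodeagent~uds plugin directory would have parsed.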
Sep 5 00:32:05.138157 containerd[1446]: time="2025-09-05T00:32:05.125109825Z" level=info msg="shim disconnected" id=98445b727131e166fb6626a87f0b97bab824cae5232f688715e876e105fdc380 namespace=k8s.io Sep 5 00:32:05.138287 containerd[1446]: time="2025-09-05T00:32:05.138160843Z" level=warning msg="cleaning up after shim disconnected" id=98445b727131e166fb6626a87f0b97bab824cae5232f688715e876e105fdc380 namespace=k8s.io Sep 5 00:32:05.138287 containerd[1446]: time="2025-09-05T00:32:05.138176678Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 00:32:05.304587 containerd[1446]: time="2025-09-05T00:32:05.304551293Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 5 00:32:06.229144 kubelet[2465]: E0905 00:32:06.229089 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7wt2m" podUID="7cca8ed9-cabd-4207-ad70-24ca48f24180" Sep 5 00:32:08.139064 containerd[1446]: time="2025-09-05T00:32:08.139011043Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:32:08.140033 containerd[1446]: time="2025-09-05T00:32:08.139978724Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477" Sep 5 00:32:08.142086 containerd[1446]: time="2025-09-05T00:32:08.141066417Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:32:08.142947 containerd[1446]: time="2025-09-05T00:32:08.142912722Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:32:08.143819 containerd[1446]: time="2025-09-05T00:32:08.143735079Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 2.839146077s" Sep 5 00:32:08.143819 containerd[1446]: time="2025-09-05T00:32:08.143767071Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\"" Sep 5 00:32:08.146782 containerd[1446]: time="2025-09-05T00:32:08.146751616Z" level=info msg="CreateContainer within sandbox \"bb355bf4315f3951ace3a2f9ed645365469e0c19a2826e13e891ecf59609e614\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 5 00:32:08.162317 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount542169433.mount: Deactivated successfully. 
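The install-cni container created above copies the Calico CNI plugin and its config onto the host, but the recurring csi-node-driver-7wt2m error ("cni plugin not initialized") and the burst of RunPodSandbox failures a few seconds later all hinge on one precondition: the Calico CNI plugin stats /var/lib/calico/nodename, a file that only a running calico/node container writes. A rough sketch of that gate, assuming nothing beyond the path and error text visible in these logs; this is not Calico's actual code.

package main

import (
	"fmt"
	"os"
	"strings"
)

// nodenameFile is the path quoted verbatim in the sandbox errors below.
const nodenameFile = "/var/lib/calico/nodename"

// nodenameReady mimics the readiness check behind those errors: until
// calico/node has written its node name here, network add/delete for any
// pod sandbox is refused.
func nodenameReady() (string, error) {
	b, err := os.ReadFile(nodenameFile)
	if err != nil {
		return "", fmt.Errorf("stat %s: %w: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile, err)
	}
	return strings.TrimSpace(string(b)), nil
}

func main() {
	name, err := nodenameReady()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("calico node name:", name)
}

Until calico/node starts and writes the file, every sandbox add or delete fails with the stat error quoted in the entries below, and kubelet keeps retrying the affected pods.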
Sep 5 00:32:08.164874 containerd[1446]: time="2025-09-05T00:32:08.164822764Z" level=info msg="CreateContainer within sandbox \"bb355bf4315f3951ace3a2f9ed645365469e0c19a2826e13e891ecf59609e614\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ce1e0f955652e13c367f83c795b1d8f5735d1694489adefeb231b01066a8bf5d\"" Sep 5 00:32:08.165337 containerd[1446]: time="2025-09-05T00:32:08.165260696Z" level=info msg="StartContainer for \"ce1e0f955652e13c367f83c795b1d8f5735d1694489adefeb231b01066a8bf5d\"" Sep 5 00:32:08.191021 systemd[1]: Started cri-containerd-ce1e0f955652e13c367f83c795b1d8f5735d1694489adefeb231b01066a8bf5d.scope - libcontainer container ce1e0f955652e13c367f83c795b1d8f5735d1694489adefeb231b01066a8bf5d. Sep 5 00:32:08.213022 containerd[1446]: time="2025-09-05T00:32:08.212970623Z" level=info msg="StartContainer for \"ce1e0f955652e13c367f83c795b1d8f5735d1694489adefeb231b01066a8bf5d\" returns successfully" Sep 5 00:32:08.228573 kubelet[2465]: E0905 00:32:08.228528 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7wt2m" podUID="7cca8ed9-cabd-4207-ad70-24ca48f24180" Sep 5 00:32:08.793073 systemd[1]: cri-containerd-ce1e0f955652e13c367f83c795b1d8f5735d1694489adefeb231b01066a8bf5d.scope: Deactivated successfully. Sep 5 00:32:08.809833 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ce1e0f955652e13c367f83c795b1d8f5735d1694489adefeb231b01066a8bf5d-rootfs.mount: Deactivated successfully. Sep 5 00:32:08.875090 containerd[1446]: time="2025-09-05T00:32:08.874850370Z" level=info msg="shim disconnected" id=ce1e0f955652e13c367f83c795b1d8f5735d1694489adefeb231b01066a8bf5d namespace=k8s.io Sep 5 00:32:08.875090 containerd[1446]: time="2025-09-05T00:32:08.874930070Z" level=warning msg="cleaning up after shim disconnected" id=ce1e0f955652e13c367f83c795b1d8f5735d1694489adefeb231b01066a8bf5d namespace=k8s.io Sep 5 00:32:08.875090 containerd[1446]: time="2025-09-05T00:32:08.874938988Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 00:32:08.887230 kubelet[2465]: I0905 00:32:08.886613 2465 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 5 00:32:08.950218 systemd[1]: Created slice kubepods-besteffort-podb4104721_2685_4b1f_94b6_007263e183aa.slice - libcontainer container kubepods-besteffort-podb4104721_2685_4b1f_94b6_007263e183aa.slice. Sep 5 00:32:08.961799 systemd[1]: Created slice kubepods-besteffort-pod19de0c52_9a52_436c_b9e4_3190b3fe0247.slice - libcontainer container kubepods-besteffort-pod19de0c52_9a52_436c_b9e4_3190b3fe0247.slice. Sep 5 00:32:08.972207 systemd[1]: Created slice kubepods-burstable-pod35c515fe_5ffe_4ef1_99c2_70d6c9f6b20d.slice - libcontainer container kubepods-burstable-pod35c515fe_5ffe_4ef1_99c2_70d6c9f6b20d.slice. Sep 5 00:32:08.982305 systemd[1]: Created slice kubepods-besteffort-pod90fd82e1_4b15_4224_8bf1_2c50a55938a1.slice - libcontainer container kubepods-besteffort-pod90fd82e1_4b15_4224_8bf1_2c50a55938a1.slice. Sep 5 00:32:08.990602 systemd[1]: Created slice kubepods-besteffort-podeeb330c6_49f9_4171_ba33_9d61da205aa6.slice - libcontainer container kubepods-besteffort-podeeb330c6_49f9_4171_ba33_9d61da205aa6.slice. 
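The Created slice entries above and below follow the systemd cgroup driver's naming convention: kubepods, then the pod's QoS class, then "pod" plus the pod UID with dashes replaced by underscores, since "-" is systemd's slice-hierarchy separator. For example, pod UID b4104721-2685-4b1f-94b6-007263e183aa becomes kubepods-besteffort-podb4104721_2685_4b1f_94b6_007263e183aa.slice. A small sketch of the mapping for the besteffort and burstable pods seen here; sliceName is an illustrative helper, not kubelet code.

package main

import (
	"fmt"
	"strings"
)

// sliceName reproduces the naming visible in the journal: dashes inside the
// pod UID are escaped to underscores so systemd does not read them as slice
// nesting. Sketch of the convention only, not the kubelet's implementation.
func sliceName(qosClass, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	// UID of calico-apiserver-7b94b9658b-jdnpd from the entries above.
	fmt.Println(sliceName("besteffort", "b4104721-2685-4b1f-94b6-007263e183aa"))
	// Output: kubepods-besteffort-podb4104721_2685_4b1f_94b6_007263e183aa.slice
}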
Sep 5 00:32:08.996230 systemd[1]: Created slice kubepods-besteffort-podd3723c38_1afc_40d9_9c9f_ed74aafda062.slice - libcontainer container kubepods-besteffort-podd3723c38_1afc_40d9_9c9f_ed74aafda062.slice. Sep 5 00:32:09.001787 systemd[1]: Created slice kubepods-burstable-pod192e7eea_9c31_4354_894f_59feed59071c.slice - libcontainer container kubepods-burstable-pod192e7eea_9c31_4354_894f_59feed59071c.slice. Sep 5 00:32:09.011433 kubelet[2465]: I0905 00:32:09.011393 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6z4g\" (UniqueName: \"kubernetes.io/projected/eeb330c6-49f9-4171-ba33-9d61da205aa6-kube-api-access-h6z4g\") pod \"whisker-6cd887f554-kvqvk\" (UID: \"eeb330c6-49f9-4171-ba33-9d61da205aa6\") " pod="calico-system/whisker-6cd887f554-kvqvk" Sep 5 00:32:09.011433 kubelet[2465]: I0905 00:32:09.011437 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/90fd82e1-4b15-4224-8bf1-2c50a55938a1-calico-apiserver-certs\") pod \"calico-apiserver-7b94b9658b-ngd47\" (UID: \"90fd82e1-4b15-4224-8bf1-2c50a55938a1\") " pod="calico-apiserver/calico-apiserver-7b94b9658b-ngd47" Sep 5 00:32:09.011584 kubelet[2465]: I0905 00:32:09.011458 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19de0c52-9a52-436c-b9e4-3190b3fe0247-tigera-ca-bundle\") pod \"calico-kube-controllers-869c68467b-v22j2\" (UID: \"19de0c52-9a52-436c-b9e4-3190b3fe0247\") " pod="calico-system/calico-kube-controllers-869c68467b-v22j2" Sep 5 00:32:09.011584 kubelet[2465]: I0905 00:32:09.011476 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b4104721-2685-4b1f-94b6-007263e183aa-calico-apiserver-certs\") pod \"calico-apiserver-7b94b9658b-jdnpd\" (UID: \"b4104721-2685-4b1f-94b6-007263e183aa\") " pod="calico-apiserver/calico-apiserver-7b94b9658b-jdnpd" Sep 5 00:32:09.011584 kubelet[2465]: I0905 00:32:09.011492 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s87km\" (UniqueName: \"kubernetes.io/projected/b4104721-2685-4b1f-94b6-007263e183aa-kube-api-access-s87km\") pod \"calico-apiserver-7b94b9658b-jdnpd\" (UID: \"b4104721-2685-4b1f-94b6-007263e183aa\") " pod="calico-apiserver/calico-apiserver-7b94b9658b-jdnpd" Sep 5 00:32:09.011584 kubelet[2465]: I0905 00:32:09.011510 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3723c38-1afc-40d9-9c9f-ed74aafda062-config\") pod \"goldmane-7988f88666-s4h56\" (UID: \"d3723c38-1afc-40d9-9c9f-ed74aafda062\") " pod="calico-system/goldmane-7988f88666-s4h56" Sep 5 00:32:09.011584 kubelet[2465]: I0905 00:32:09.011527 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3723c38-1afc-40d9-9c9f-ed74aafda062-goldmane-ca-bundle\") pod \"goldmane-7988f88666-s4h56\" (UID: \"d3723c38-1afc-40d9-9c9f-ed74aafda062\") " pod="calico-system/goldmane-7988f88666-s4h56" Sep 5 00:32:09.011708 kubelet[2465]: I0905 00:32:09.011542 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/192e7eea-9c31-4354-894f-59feed59071c-config-volume\") pod \"coredns-7c65d6cfc9-c8wqh\" (UID: \"192e7eea-9c31-4354-894f-59feed59071c\") " pod="kube-system/coredns-7c65d6cfc9-c8wqh" Sep 5 00:32:09.011708 kubelet[2465]: I0905 00:32:09.011571 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/eeb330c6-49f9-4171-ba33-9d61da205aa6-whisker-backend-key-pair\") pod \"whisker-6cd887f554-kvqvk\" (UID: \"eeb330c6-49f9-4171-ba33-9d61da205aa6\") " pod="calico-system/whisker-6cd887f554-kvqvk" Sep 5 00:32:09.011708 kubelet[2465]: I0905 00:32:09.011592 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67x8n\" (UniqueName: \"kubernetes.io/projected/90fd82e1-4b15-4224-8bf1-2c50a55938a1-kube-api-access-67x8n\") pod \"calico-apiserver-7b94b9658b-ngd47\" (UID: \"90fd82e1-4b15-4224-8bf1-2c50a55938a1\") " pod="calico-apiserver/calico-apiserver-7b94b9658b-ngd47" Sep 5 00:32:09.011708 kubelet[2465]: I0905 00:32:09.011609 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpp8w\" (UniqueName: \"kubernetes.io/projected/19de0c52-9a52-436c-b9e4-3190b3fe0247-kube-api-access-hpp8w\") pod \"calico-kube-controllers-869c68467b-v22j2\" (UID: \"19de0c52-9a52-436c-b9e4-3190b3fe0247\") " pod="calico-system/calico-kube-controllers-869c68467b-v22j2" Sep 5 00:32:09.011708 kubelet[2465]: I0905 00:32:09.011625 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/35c515fe-5ffe-4ef1-99c2-70d6c9f6b20d-config-volume\") pod \"coredns-7c65d6cfc9-vx8zc\" (UID: \"35c515fe-5ffe-4ef1-99c2-70d6c9f6b20d\") " pod="kube-system/coredns-7c65d6cfc9-vx8zc" Sep 5 00:32:09.011836 kubelet[2465]: I0905 00:32:09.011642 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xqwg\" (UniqueName: \"kubernetes.io/projected/35c515fe-5ffe-4ef1-99c2-70d6c9f6b20d-kube-api-access-2xqwg\") pod \"coredns-7c65d6cfc9-vx8zc\" (UID: \"35c515fe-5ffe-4ef1-99c2-70d6c9f6b20d\") " pod="kube-system/coredns-7c65d6cfc9-vx8zc" Sep 5 00:32:09.011836 kubelet[2465]: I0905 00:32:09.011657 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eeb330c6-49f9-4171-ba33-9d61da205aa6-whisker-ca-bundle\") pod \"whisker-6cd887f554-kvqvk\" (UID: \"eeb330c6-49f9-4171-ba33-9d61da205aa6\") " pod="calico-system/whisker-6cd887f554-kvqvk" Sep 5 00:32:09.011836 kubelet[2465]: I0905 00:32:09.011674 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/d3723c38-1afc-40d9-9c9f-ed74aafda062-goldmane-key-pair\") pod \"goldmane-7988f88666-s4h56\" (UID: \"d3723c38-1afc-40d9-9c9f-ed74aafda062\") " pod="calico-system/goldmane-7988f88666-s4h56" Sep 5 00:32:09.011836 kubelet[2465]: I0905 00:32:09.011691 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmtcn\" (UniqueName: \"kubernetes.io/projected/d3723c38-1afc-40d9-9c9f-ed74aafda062-kube-api-access-vmtcn\") pod \"goldmane-7988f88666-s4h56\" (UID: \"d3723c38-1afc-40d9-9c9f-ed74aafda062\") " pod="calico-system/goldmane-7988f88666-s4h56" Sep 5 
00:32:09.011836 kubelet[2465]: I0905 00:32:09.011706 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmkgw\" (UniqueName: \"kubernetes.io/projected/192e7eea-9c31-4354-894f-59feed59071c-kube-api-access-jmkgw\") pod \"coredns-7c65d6cfc9-c8wqh\" (UID: \"192e7eea-9c31-4354-894f-59feed59071c\") " pod="kube-system/coredns-7c65d6cfc9-c8wqh" Sep 5 00:32:09.259392 containerd[1446]: time="2025-09-05T00:32:09.259350549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b94b9658b-jdnpd,Uid:b4104721-2685-4b1f-94b6-007263e183aa,Namespace:calico-apiserver,Attempt:0,}" Sep 5 00:32:09.266411 containerd[1446]: time="2025-09-05T00:32:09.266171814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-869c68467b-v22j2,Uid:19de0c52-9a52-436c-b9e4-3190b3fe0247,Namespace:calico-system,Attempt:0,}" Sep 5 00:32:09.276771 kubelet[2465]: E0905 00:32:09.276734 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:32:09.278069 containerd[1446]: time="2025-09-05T00:32:09.277767096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-vx8zc,Uid:35c515fe-5ffe-4ef1-99c2-70d6c9f6b20d,Namespace:kube-system,Attempt:0,}" Sep 5 00:32:09.291099 containerd[1446]: time="2025-09-05T00:32:09.290792488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b94b9658b-ngd47,Uid:90fd82e1-4b15-4224-8bf1-2c50a55938a1,Namespace:calico-apiserver,Attempt:0,}" Sep 5 00:32:09.302020 containerd[1446]: time="2025-09-05T00:32:09.301981544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6cd887f554-kvqvk,Uid:eeb330c6-49f9-4171-ba33-9d61da205aa6,Namespace:calico-system,Attempt:0,}" Sep 5 00:32:09.305831 kubelet[2465]: E0905 00:32:09.305472 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:32:09.306339 containerd[1446]: time="2025-09-05T00:32:09.306279031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-s4h56,Uid:d3723c38-1afc-40d9-9c9f-ed74aafda062,Namespace:calico-system,Attempt:0,}" Sep 5 00:32:09.306570 containerd[1446]: time="2025-09-05T00:32:09.306305425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-c8wqh,Uid:192e7eea-9c31-4354-894f-59feed59071c,Namespace:kube-system,Attempt:0,}" Sep 5 00:32:09.319046 containerd[1446]: time="2025-09-05T00:32:09.319001773Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 5 00:32:09.397205 containerd[1446]: time="2025-09-05T00:32:09.397143406Z" level=error msg="Failed to destroy network for sandbox \"845fc777a4a36d84129ce3646095dce8a8df461e48644a8ccb9f1110f4764d64\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:32:09.397558 containerd[1446]: time="2025-09-05T00:32:09.397521959Z" level=error msg="encountered an error cleaning up failed sandbox \"845fc777a4a36d84129ce3646095dce8a8df461e48644a8ccb9f1110f4764d64\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Sep 5 00:32:09.397614 containerd[1446]: time="2025-09-05T00:32:09.397577346Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b94b9658b-jdnpd,Uid:b4104721-2685-4b1f-94b6-007263e183aa,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"845fc777a4a36d84129ce3646095dce8a8df461e48644a8ccb9f1110f4764d64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:32:09.399919 kubelet[2465]: E0905 00:32:09.399827 2465 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"845fc777a4a36d84129ce3646095dce8a8df461e48644a8ccb9f1110f4764d64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:32:09.402116 kubelet[2465]: E0905 00:32:09.402065 2465 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"845fc777a4a36d84129ce3646095dce8a8df461e48644a8ccb9f1110f4764d64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b94b9658b-jdnpd" Sep 5 00:32:09.402116 kubelet[2465]: E0905 00:32:09.402113 2465 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"845fc777a4a36d84129ce3646095dce8a8df461e48644a8ccb9f1110f4764d64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b94b9658b-jdnpd" Sep 5 00:32:09.402304 kubelet[2465]: E0905 00:32:09.402165 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7b94b9658b-jdnpd_calico-apiserver(b4104721-2685-4b1f-94b6-007263e183aa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7b94b9658b-jdnpd_calico-apiserver(b4104721-2685-4b1f-94b6-007263e183aa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"845fc777a4a36d84129ce3646095dce8a8df461e48644a8ccb9f1110f4764d64\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b94b9658b-jdnpd" podUID="b4104721-2685-4b1f-94b6-007263e183aa" Sep 5 00:32:09.408240 containerd[1446]: time="2025-09-05T00:32:09.408190335Z" level=error msg="Failed to destroy network for sandbox \"0893cac7dbee3bfd15934b019a67a4b3a9434d67ef96211579ade423a1e34526\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:32:09.408712 containerd[1446]: time="2025-09-05T00:32:09.408667665Z" level=error msg="encountered an error cleaning up failed sandbox \"0893cac7dbee3bfd15934b019a67a4b3a9434d67ef96211579ade423a1e34526\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:32:09.408772 containerd[1446]: time="2025-09-05T00:32:09.408741608Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b94b9658b-ngd47,Uid:90fd82e1-4b15-4224-8bf1-2c50a55938a1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0893cac7dbee3bfd15934b019a67a4b3a9434d67ef96211579ade423a1e34526\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:32:09.409013 kubelet[2465]: E0905 00:32:09.408968 2465 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0893cac7dbee3bfd15934b019a67a4b3a9434d67ef96211579ade423a1e34526\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:32:09.409075 kubelet[2465]: E0905 00:32:09.409022 2465 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0893cac7dbee3bfd15934b019a67a4b3a9434d67ef96211579ade423a1e34526\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b94b9658b-ngd47" Sep 5 00:32:09.409075 kubelet[2465]: E0905 00:32:09.409042 2465 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0893cac7dbee3bfd15934b019a67a4b3a9434d67ef96211579ade423a1e34526\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b94b9658b-ngd47" Sep 5 00:32:09.409123 kubelet[2465]: E0905 00:32:09.409077 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7b94b9658b-ngd47_calico-apiserver(90fd82e1-4b15-4224-8bf1-2c50a55938a1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7b94b9658b-ngd47_calico-apiserver(90fd82e1-4b15-4224-8bf1-2c50a55938a1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0893cac7dbee3bfd15934b019a67a4b3a9434d67ef96211579ade423a1e34526\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b94b9658b-ngd47" podUID="90fd82e1-4b15-4224-8bf1-2c50a55938a1" Sep 5 00:32:09.421259 containerd[1446]: time="2025-09-05T00:32:09.420978781Z" level=error msg="Failed to destroy network for sandbox \"8313429ede94278ad5c5a1ab799089fc13106b9f22101dace780893eeab73a35\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:32:09.421450 containerd[1446]: time="2025-09-05T00:32:09.421415840Z" level=error msg="encountered an error cleaning up failed sandbox 
\"8313429ede94278ad5c5a1ab799089fc13106b9f22101dace780893eeab73a35\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:32:09.421505 containerd[1446]: time="2025-09-05T00:32:09.421471028Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-869c68467b-v22j2,Uid:19de0c52-9a52-436c-b9e4-3190b3fe0247,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8313429ede94278ad5c5a1ab799089fc13106b9f22101dace780893eeab73a35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:32:09.421820 kubelet[2465]: E0905 00:32:09.421780 2465 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8313429ede94278ad5c5a1ab799089fc13106b9f22101dace780893eeab73a35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:32:09.421957 kubelet[2465]: E0905 00:32:09.421849 2465 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8313429ede94278ad5c5a1ab799089fc13106b9f22101dace780893eeab73a35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-869c68467b-v22j2" Sep 5 00:32:09.421957 kubelet[2465]: E0905 00:32:09.421900 2465 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8313429ede94278ad5c5a1ab799089fc13106b9f22101dace780893eeab73a35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-869c68467b-v22j2" Sep 5 00:32:09.421957 kubelet[2465]: E0905 00:32:09.421942 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-869c68467b-v22j2_calico-system(19de0c52-9a52-436c-b9e4-3190b3fe0247)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-869c68467b-v22j2_calico-system(19de0c52-9a52-436c-b9e4-3190b3fe0247)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8313429ede94278ad5c5a1ab799089fc13106b9f22101dace780893eeab73a35\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-869c68467b-v22j2" podUID="19de0c52-9a52-436c-b9e4-3190b3fe0247" Sep 5 00:32:09.434977 containerd[1446]: time="2025-09-05T00:32:09.434932239Z" level=error msg="Failed to destroy network for sandbox \"4736016e632a47b15b4e30f05f70413ded89cd08d32f3e62a6aec4284e1fa6dd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Sep 5 00:32:09.435636 containerd[1446]: time="2025-09-05T00:32:09.435492509Z" level=error msg="encountered an error cleaning up failed sandbox \"4736016e632a47b15b4e30f05f70413ded89cd08d32f3e62a6aec4284e1fa6dd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:32:09.435636 containerd[1446]: time="2025-09-05T00:32:09.435543978Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6cd887f554-kvqvk,Uid:eeb330c6-49f9-4171-ba33-9d61da205aa6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4736016e632a47b15b4e30f05f70413ded89cd08d32f3e62a6aec4284e1fa6dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:32:09.435814 kubelet[2465]: E0905 00:32:09.435757 2465 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4736016e632a47b15b4e30f05f70413ded89cd08d32f3e62a6aec4284e1fa6dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:32:09.435814 kubelet[2465]: E0905 00:32:09.435813 2465 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4736016e632a47b15b4e30f05f70413ded89cd08d32f3e62a6aec4284e1fa6dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6cd887f554-kvqvk" Sep 5 00:32:09.435919 kubelet[2465]: E0905 00:32:09.435830 2465 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4736016e632a47b15b4e30f05f70413ded89cd08d32f3e62a6aec4284e1fa6dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6cd887f554-kvqvk" Sep 5 00:32:09.435973 kubelet[2465]: E0905 00:32:09.435911 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6cd887f554-kvqvk_calico-system(eeb330c6-49f9-4171-ba33-9d61da205aa6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6cd887f554-kvqvk_calico-system(eeb330c6-49f9-4171-ba33-9d61da205aa6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4736016e632a47b15b4e30f05f70413ded89cd08d32f3e62a6aec4284e1fa6dd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6cd887f554-kvqvk" podUID="eeb330c6-49f9-4171-ba33-9d61da205aa6" Sep 5 00:32:09.440583 containerd[1446]: time="2025-09-05T00:32:09.440486636Z" level=error msg="Failed to destroy network for sandbox \"0d857350e94f7419b72a77b0e2aa8cd7ce34079bf4c83b93076824bfeced2184\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:32:09.440915 containerd[1446]: time="2025-09-05T00:32:09.440878546Z" level=error msg="encountered an error cleaning up failed sandbox \"0d857350e94f7419b72a77b0e2aa8cd7ce34079bf4c83b93076824bfeced2184\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:32:09.440970 containerd[1446]: time="2025-09-05T00:32:09.440932413Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-vx8zc,Uid:35c515fe-5ffe-4ef1-99c2-70d6c9f6b20d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0d857350e94f7419b72a77b0e2aa8cd7ce34079bf4c83b93076824bfeced2184\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:32:09.441458 kubelet[2465]: E0905 00:32:09.441129 2465 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d857350e94f7419b72a77b0e2aa8cd7ce34079bf4c83b93076824bfeced2184\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:32:09.441458 kubelet[2465]: E0905 00:32:09.441179 2465 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d857350e94f7419b72a77b0e2aa8cd7ce34079bf4c83b93076824bfeced2184\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-vx8zc" Sep 5 00:32:09.441458 kubelet[2465]: E0905 00:32:09.441200 2465 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d857350e94f7419b72a77b0e2aa8cd7ce34079bf4c83b93076824bfeced2184\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-vx8zc" Sep 5 00:32:09.441604 kubelet[2465]: E0905 00:32:09.441258 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-vx8zc_kube-system(35c515fe-5ffe-4ef1-99c2-70d6c9f6b20d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-vx8zc_kube-system(35c515fe-5ffe-4ef1-99c2-70d6c9f6b20d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0d857350e94f7419b72a77b0e2aa8cd7ce34079bf4c83b93076824bfeced2184\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-vx8zc" podUID="35c515fe-5ffe-4ef1-99c2-70d6c9f6b20d" Sep 5 00:32:09.443438 containerd[1446]: time="2025-09-05T00:32:09.443408041Z" level=error msg="Failed to destroy network for sandbox \"e714b25324219a1c25792314649e68dc99079fff5b9dd9159b644379c4079743\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:32:09.444295 containerd[1446]: time="2025-09-05T00:32:09.444157428Z" level=error msg="encountered an error cleaning up failed sandbox \"e714b25324219a1c25792314649e68dc99079fff5b9dd9159b644379c4079743\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:32:09.444295 containerd[1446]: time="2025-09-05T00:32:09.444215215Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-s4h56,Uid:d3723c38-1afc-40d9-9c9f-ed74aafda062,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e714b25324219a1c25792314649e68dc99079fff5b9dd9159b644379c4079743\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:32:09.444590 kubelet[2465]: E0905 00:32:09.444377 2465 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e714b25324219a1c25792314649e68dc99079fff5b9dd9159b644379c4079743\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:32:09.444590 kubelet[2465]: E0905 00:32:09.444420 2465 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e714b25324219a1c25792314649e68dc99079fff5b9dd9159b644379c4079743\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-s4h56" Sep 5 00:32:09.444590 kubelet[2465]: E0905 00:32:09.444438 2465 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e714b25324219a1c25792314649e68dc99079fff5b9dd9159b644379c4079743\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-s4h56" Sep 5 00:32:09.444738 kubelet[2465]: E0905 00:32:09.444472 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-s4h56_calico-system(d3723c38-1afc-40d9-9c9f-ed74aafda062)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-s4h56_calico-system(d3723c38-1afc-40d9-9c9f-ed74aafda062)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e714b25324219a1c25792314649e68dc99079fff5b9dd9159b644379c4079743\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-s4h56" podUID="d3723c38-1afc-40d9-9c9f-ed74aafda062" Sep 5 00:32:09.455584 containerd[1446]: time="2025-09-05T00:32:09.455533001Z" level=error msg="Failed to destroy network for sandbox \"1003f23274bc0b838f0f917a029aba5c6d871ebc11763e025681afdab086d9aa\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:32:09.455867 containerd[1446]: time="2025-09-05T00:32:09.455832652Z" level=error msg="encountered an error cleaning up failed sandbox \"1003f23274bc0b838f0f917a029aba5c6d871ebc11763e025681afdab086d9aa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:32:09.455929 containerd[1446]: time="2025-09-05T00:32:09.455905115Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-c8wqh,Uid:192e7eea-9c31-4354-894f-59feed59071c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1003f23274bc0b838f0f917a029aba5c6d871ebc11763e025681afdab086d9aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:32:09.456134 kubelet[2465]: E0905 00:32:09.456098 2465 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1003f23274bc0b838f0f917a029aba5c6d871ebc11763e025681afdab086d9aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:32:09.456190 kubelet[2465]: E0905 00:32:09.456158 2465 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1003f23274bc0b838f0f917a029aba5c6d871ebc11763e025681afdab086d9aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-c8wqh" Sep 5 00:32:09.456190 kubelet[2465]: E0905 00:32:09.456177 2465 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1003f23274bc0b838f0f917a029aba5c6d871ebc11763e025681afdab086d9aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-c8wqh" Sep 5 00:32:09.456239 kubelet[2465]: E0905 00:32:09.456221 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-c8wqh_kube-system(192e7eea-9c31-4354-894f-59feed59071c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-c8wqh_kube-system(192e7eea-9c31-4354-894f-59feed59071c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1003f23274bc0b838f0f917a029aba5c6d871ebc11763e025681afdab086d9aa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-c8wqh" podUID="192e7eea-9c31-4354-894f-59feed59071c" Sep 5 00:32:10.159452 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8313429ede94278ad5c5a1ab799089fc13106b9f22101dace780893eeab73a35-shm.mount: Deactivated 
successfully. Sep 5 00:32:10.159548 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-845fc777a4a36d84129ce3646095dce8a8df461e48644a8ccb9f1110f4764d64-shm.mount: Deactivated successfully. Sep 5 00:32:10.235796 systemd[1]: Created slice kubepods-besteffort-pod7cca8ed9_cabd_4207_ad70_24ca48f24180.slice - libcontainer container kubepods-besteffort-pod7cca8ed9_cabd_4207_ad70_24ca48f24180.slice. Sep 5 00:32:10.237993 containerd[1446]: time="2025-09-05T00:32:10.237953485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7wt2m,Uid:7cca8ed9-cabd-4207-ad70-24ca48f24180,Namespace:calico-system,Attempt:0,}" Sep 5 00:32:10.293898 containerd[1446]: time="2025-09-05T00:32:10.293757443Z" level=error msg="Failed to destroy network for sandbox \"1d3c80f06bb8177d58c20e8b8e139e3958646547a553cd24b8be6f82dd917a79\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:32:10.294731 containerd[1446]: time="2025-09-05T00:32:10.294585863Z" level=error msg="encountered an error cleaning up failed sandbox \"1d3c80f06bb8177d58c20e8b8e139e3958646547a553cd24b8be6f82dd917a79\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:32:10.294731 containerd[1446]: time="2025-09-05T00:32:10.294642691Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7wt2m,Uid:7cca8ed9-cabd-4207-ad70-24ca48f24180,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1d3c80f06bb8177d58c20e8b8e139e3958646547a553cd24b8be6f82dd917a79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:32:10.294921 kubelet[2465]: E0905 00:32:10.294883 2465 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d3c80f06bb8177d58c20e8b8e139e3958646547a553cd24b8be6f82dd917a79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:32:10.295263 kubelet[2465]: E0905 00:32:10.294942 2465 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d3c80f06bb8177d58c20e8b8e139e3958646547a553cd24b8be6f82dd917a79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7wt2m" Sep 5 00:32:10.295263 kubelet[2465]: E0905 00:32:10.294966 2465 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d3c80f06bb8177d58c20e8b8e139e3958646547a553cd24b8be6f82dd917a79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7wt2m" Sep 5 00:32:10.295263 kubelet[2465]: E0905 00:32:10.295009 2465 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7wt2m_calico-system(7cca8ed9-cabd-4207-ad70-24ca48f24180)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7wt2m_calico-system(7cca8ed9-cabd-4207-ad70-24ca48f24180)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1d3c80f06bb8177d58c20e8b8e139e3958646547a553cd24b8be6f82dd917a79\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7wt2m" podUID="7cca8ed9-cabd-4207-ad70-24ca48f24180" Sep 5 00:32:10.298916 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1d3c80f06bb8177d58c20e8b8e139e3958646547a553cd24b8be6f82dd917a79-shm.mount: Deactivated successfully. Sep 5 00:32:10.318755 kubelet[2465]: I0905 00:32:10.318706 2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1003f23274bc0b838f0f917a029aba5c6d871ebc11763e025681afdab086d9aa" Sep 5 00:32:10.321578 containerd[1446]: time="2025-09-05T00:32:10.321085606Z" level=info msg="StopPodSandbox for \"1003f23274bc0b838f0f917a029aba5c6d871ebc11763e025681afdab086d9aa\"" Sep 5 00:32:10.321578 containerd[1446]: time="2025-09-05T00:32:10.321250410Z" level=info msg="Ensure that sandbox 1003f23274bc0b838f0f917a029aba5c6d871ebc11763e025681afdab086d9aa in task-service has been cleanup successfully" Sep 5 00:32:10.321730 kubelet[2465]: I0905 00:32:10.321363 2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e714b25324219a1c25792314649e68dc99079fff5b9dd9159b644379c4079743" Sep 5 00:32:10.322346 containerd[1446]: time="2025-09-05T00:32:10.322313940Z" level=info msg="StopPodSandbox for \"e714b25324219a1c25792314649e68dc99079fff5b9dd9159b644379c4079743\"" Sep 5 00:32:10.322546 containerd[1446]: time="2025-09-05T00:32:10.322523694Z" level=info msg="Ensure that sandbox e714b25324219a1c25792314649e68dc99079fff5b9dd9159b644379c4079743 in task-service has been cleanup successfully" Sep 5 00:32:10.323833 kubelet[2465]: I0905 00:32:10.323501 2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0893cac7dbee3bfd15934b019a67a4b3a9434d67ef96211579ade423a1e34526" Sep 5 00:32:10.324674 containerd[1446]: time="2025-09-05T00:32:10.324641596Z" level=info msg="StopPodSandbox for \"0893cac7dbee3bfd15934b019a67a4b3a9434d67ef96211579ade423a1e34526\"" Sep 5 00:32:10.325551 kubelet[2465]: I0905 00:32:10.325415 2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4736016e632a47b15b4e30f05f70413ded89cd08d32f3e62a6aec4284e1fa6dd" Sep 5 00:32:10.326152 containerd[1446]: time="2025-09-05T00:32:10.326121276Z" level=info msg="StopPodSandbox for \"4736016e632a47b15b4e30f05f70413ded89cd08d32f3e62a6aec4284e1fa6dd\"" Sep 5 00:32:10.327418 containerd[1446]: time="2025-09-05T00:32:10.327383202Z" level=info msg="Ensure that sandbox 4736016e632a47b15b4e30f05f70413ded89cd08d32f3e62a6aec4284e1fa6dd in task-service has been cleanup successfully" Sep 5 00:32:10.327969 containerd[1446]: time="2025-09-05T00:32:10.327934363Z" level=info msg="Ensure that sandbox 0893cac7dbee3bfd15934b019a67a4b3a9434d67ef96211579ade423a1e34526 in task-service has been cleanup successfully" Sep 5 00:32:10.330263 kubelet[2465]: I0905 00:32:10.330227 2465 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="0d857350e94f7419b72a77b0e2aa8cd7ce34079bf4c83b93076824bfeced2184" Sep 5 00:32:10.331333 containerd[1446]: time="2025-09-05T00:32:10.331157505Z" level=info msg="StopPodSandbox for \"0d857350e94f7419b72a77b0e2aa8cd7ce34079bf4c83b93076824bfeced2184\"" Sep 5 00:32:10.332058 containerd[1446]: time="2025-09-05T00:32:10.331980527Z" level=info msg="Ensure that sandbox 0d857350e94f7419b72a77b0e2aa8cd7ce34079bf4c83b93076824bfeced2184 in task-service has been cleanup successfully" Sep 5 00:32:10.336410 kubelet[2465]: I0905 00:32:10.335777 2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8313429ede94278ad5c5a1ab799089fc13106b9f22101dace780893eeab73a35" Sep 5 00:32:10.336807 containerd[1446]: time="2025-09-05T00:32:10.336713262Z" level=info msg="StopPodSandbox for \"8313429ede94278ad5c5a1ab799089fc13106b9f22101dace780893eeab73a35\"" Sep 5 00:32:10.337019 containerd[1446]: time="2025-09-05T00:32:10.336994081Z" level=info msg="Ensure that sandbox 8313429ede94278ad5c5a1ab799089fc13106b9f22101dace780893eeab73a35 in task-service has been cleanup successfully" Sep 5 00:32:10.340491 kubelet[2465]: I0905 00:32:10.340440 2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="845fc777a4a36d84129ce3646095dce8a8df461e48644a8ccb9f1110f4764d64" Sep 5 00:32:10.342531 containerd[1446]: time="2025-09-05T00:32:10.342009476Z" level=info msg="StopPodSandbox for \"845fc777a4a36d84129ce3646095dce8a8df461e48644a8ccb9f1110f4764d64\"" Sep 5 00:32:10.342531 containerd[1446]: time="2025-09-05T00:32:10.342427985Z" level=info msg="Ensure that sandbox 845fc777a4a36d84129ce3646095dce8a8df461e48644a8ccb9f1110f4764d64 in task-service has been cleanup successfully" Sep 5 00:32:10.347812 kubelet[2465]: I0905 00:32:10.347783 2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d3c80f06bb8177d58c20e8b8e139e3958646547a553cd24b8be6f82dd917a79" Sep 5 00:32:10.348507 containerd[1446]: time="2025-09-05T00:32:10.348349383Z" level=info msg="StopPodSandbox for \"1d3c80f06bb8177d58c20e8b8e139e3958646547a553cd24b8be6f82dd917a79\"" Sep 5 00:32:10.350601 containerd[1446]: time="2025-09-05T00:32:10.350463725Z" level=info msg="Ensure that sandbox 1d3c80f06bb8177d58c20e8b8e139e3958646547a553cd24b8be6f82dd917a79 in task-service has been cleanup successfully" Sep 5 00:32:10.396072 kubelet[2465]: I0905 00:32:10.396026 2465 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 00:32:10.396787 kubelet[2465]: E0905 00:32:10.396765 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:32:10.409284 containerd[1446]: time="2025-09-05T00:32:10.409231081Z" level=error msg="StopPodSandbox for \"e714b25324219a1c25792314649e68dc99079fff5b9dd9159b644379c4079743\" failed" error="failed to destroy network for sandbox \"e714b25324219a1c25792314649e68dc99079fff5b9dd9159b644379c4079743\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:32:10.410073 kubelet[2465]: E0905 00:32:10.409491 2465 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e714b25324219a1c25792314649e68dc99079fff5b9dd9159b644379c4079743\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e714b25324219a1c25792314649e68dc99079fff5b9dd9159b644379c4079743" Sep 5 00:32:10.410073 kubelet[2465]: E0905 00:32:10.409542 2465 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e714b25324219a1c25792314649e68dc99079fff5b9dd9159b644379c4079743"} Sep 5 00:32:10.410073 kubelet[2465]: E0905 00:32:10.409594 2465 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d3723c38-1afc-40d9-9c9f-ed74aafda062\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e714b25324219a1c25792314649e68dc99079fff5b9dd9159b644379c4079743\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 00:32:10.410073 kubelet[2465]: E0905 00:32:10.409616 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d3723c38-1afc-40d9-9c9f-ed74aafda062\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e714b25324219a1c25792314649e68dc99079fff5b9dd9159b644379c4079743\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-s4h56" podUID="d3723c38-1afc-40d9-9c9f-ed74aafda062" Sep 5 00:32:10.415186 containerd[1446]: time="2025-09-05T00:32:10.415146881Z" level=error msg="StopPodSandbox for \"1003f23274bc0b838f0f917a029aba5c6d871ebc11763e025681afdab086d9aa\" failed" error="failed to destroy network for sandbox \"1003f23274bc0b838f0f917a029aba5c6d871ebc11763e025681afdab086d9aa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:32:10.416956 kubelet[2465]: E0905 00:32:10.416917 2465 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1003f23274bc0b838f0f917a029aba5c6d871ebc11763e025681afdab086d9aa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1003f23274bc0b838f0f917a029aba5c6d871ebc11763e025681afdab086d9aa" Sep 5 00:32:10.417041 kubelet[2465]: E0905 00:32:10.416960 2465 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1003f23274bc0b838f0f917a029aba5c6d871ebc11763e025681afdab086d9aa"} Sep 5 00:32:10.417041 kubelet[2465]: E0905 00:32:10.416995 2465 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"192e7eea-9c31-4354-894f-59feed59071c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1003f23274bc0b838f0f917a029aba5c6d871ebc11763e025681afdab086d9aa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 00:32:10.417041 kubelet[2465]: E0905 00:32:10.417015 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"KillPodSandbox\" for \"192e7eea-9c31-4354-894f-59feed59071c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1003f23274bc0b838f0f917a029aba5c6d871ebc11763e025681afdab086d9aa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-c8wqh" podUID="192e7eea-9c31-4354-894f-59feed59071c" Sep 5 00:32:10.418778 containerd[1446]: time="2025-09-05T00:32:10.418742102Z" level=error msg="StopPodSandbox for \"0d857350e94f7419b72a77b0e2aa8cd7ce34079bf4c83b93076824bfeced2184\" failed" error="failed to destroy network for sandbox \"0d857350e94f7419b72a77b0e2aa8cd7ce34079bf4c83b93076824bfeced2184\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:32:10.419302 kubelet[2465]: E0905 00:32:10.418965 2465 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0d857350e94f7419b72a77b0e2aa8cd7ce34079bf4c83b93076824bfeced2184\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0d857350e94f7419b72a77b0e2aa8cd7ce34079bf4c83b93076824bfeced2184" Sep 5 00:32:10.419302 kubelet[2465]: E0905 00:32:10.419008 2465 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0d857350e94f7419b72a77b0e2aa8cd7ce34079bf4c83b93076824bfeced2184"} Sep 5 00:32:10.419302 kubelet[2465]: E0905 00:32:10.419032 2465 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"35c515fe-5ffe-4ef1-99c2-70d6c9f6b20d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0d857350e94f7419b72a77b0e2aa8cd7ce34079bf4c83b93076824bfeced2184\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 00:32:10.419302 kubelet[2465]: E0905 00:32:10.419050 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"35c515fe-5ffe-4ef1-99c2-70d6c9f6b20d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0d857350e94f7419b72a77b0e2aa8cd7ce34079bf4c83b93076824bfeced2184\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-vx8zc" podUID="35c515fe-5ffe-4ef1-99c2-70d6c9f6b20d" Sep 5 00:32:10.420076 containerd[1446]: time="2025-09-05T00:32:10.419992831Z" level=error msg="StopPodSandbox for \"8313429ede94278ad5c5a1ab799089fc13106b9f22101dace780893eeab73a35\" failed" error="failed to destroy network for sandbox \"8313429ede94278ad5c5a1ab799089fc13106b9f22101dace780893eeab73a35\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:32:10.420175 kubelet[2465]: E0905 00:32:10.420148 2465 log.go:32] "StopPodSandbox from runtime service 
failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8313429ede94278ad5c5a1ab799089fc13106b9f22101dace780893eeab73a35\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8313429ede94278ad5c5a1ab799089fc13106b9f22101dace780893eeab73a35" Sep 5 00:32:10.420218 kubelet[2465]: E0905 00:32:10.420182 2465 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8313429ede94278ad5c5a1ab799089fc13106b9f22101dace780893eeab73a35"} Sep 5 00:32:10.420218 kubelet[2465]: E0905 00:32:10.420207 2465 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"19de0c52-9a52-436c-b9e4-3190b3fe0247\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8313429ede94278ad5c5a1ab799089fc13106b9f22101dace780893eeab73a35\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 00:32:10.420283 kubelet[2465]: E0905 00:32:10.420225 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"19de0c52-9a52-436c-b9e4-3190b3fe0247\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8313429ede94278ad5c5a1ab799089fc13106b9f22101dace780893eeab73a35\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-869c68467b-v22j2" podUID="19de0c52-9a52-436c-b9e4-3190b3fe0247" Sep 5 00:32:10.427891 containerd[1446]: time="2025-09-05T00:32:10.426955844Z" level=error msg="StopPodSandbox for \"4736016e632a47b15b4e30f05f70413ded89cd08d32f3e62a6aec4284e1fa6dd\" failed" error="failed to destroy network for sandbox \"4736016e632a47b15b4e30f05f70413ded89cd08d32f3e62a6aec4284e1fa6dd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:32:10.430034 containerd[1446]: time="2025-09-05T00:32:10.429995466Z" level=error msg="StopPodSandbox for \"0893cac7dbee3bfd15934b019a67a4b3a9434d67ef96211579ade423a1e34526\" failed" error="failed to destroy network for sandbox \"0893cac7dbee3bfd15934b019a67a4b3a9434d67ef96211579ade423a1e34526\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:32:10.431751 kubelet[2465]: E0905 00:32:10.431703 2465 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0893cac7dbee3bfd15934b019a67a4b3a9434d67ef96211579ade423a1e34526\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0893cac7dbee3bfd15934b019a67a4b3a9434d67ef96211579ade423a1e34526" Sep 5 00:32:10.431825 kubelet[2465]: E0905 00:32:10.431760 2465 kuberuntime_manager.go:1479] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"0893cac7dbee3bfd15934b019a67a4b3a9434d67ef96211579ade423a1e34526"} Sep 5 00:32:10.432004 kubelet[2465]: E0905 00:32:10.431975 2465 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"90fd82e1-4b15-4224-8bf1-2c50a55938a1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0893cac7dbee3bfd15934b019a67a4b3a9434d67ef96211579ade423a1e34526\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 00:32:10.432079 kubelet[2465]: E0905 00:32:10.432008 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"90fd82e1-4b15-4224-8bf1-2c50a55938a1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0893cac7dbee3bfd15934b019a67a4b3a9434d67ef96211579ade423a1e34526\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b94b9658b-ngd47" podUID="90fd82e1-4b15-4224-8bf1-2c50a55938a1" Sep 5 00:32:10.432210 kubelet[2465]: E0905 00:32:10.432181 2465 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4736016e632a47b15b4e30f05f70413ded89cd08d32f3e62a6aec4284e1fa6dd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4736016e632a47b15b4e30f05f70413ded89cd08d32f3e62a6aec4284e1fa6dd" Sep 5 00:32:10.432251 kubelet[2465]: E0905 00:32:10.432210 2465 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4736016e632a47b15b4e30f05f70413ded89cd08d32f3e62a6aec4284e1fa6dd"} Sep 5 00:32:10.432251 kubelet[2465]: E0905 00:32:10.432231 2465 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"eeb330c6-49f9-4171-ba33-9d61da205aa6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4736016e632a47b15b4e30f05f70413ded89cd08d32f3e62a6aec4284e1fa6dd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 00:32:10.432315 kubelet[2465]: E0905 00:32:10.432247 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"eeb330c6-49f9-4171-ba33-9d61da205aa6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4736016e632a47b15b4e30f05f70413ded89cd08d32f3e62a6aec4284e1fa6dd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6cd887f554-kvqvk" podUID="eeb330c6-49f9-4171-ba33-9d61da205aa6" Sep 5 00:32:10.440919 containerd[1446]: time="2025-09-05T00:32:10.440741619Z" level=error msg="StopPodSandbox for \"845fc777a4a36d84129ce3646095dce8a8df461e48644a8ccb9f1110f4764d64\" failed" error="failed to destroy network for sandbox 
\"845fc777a4a36d84129ce3646095dce8a8df461e48644a8ccb9f1110f4764d64\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:32:10.441367 kubelet[2465]: E0905 00:32:10.441238 2465 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"845fc777a4a36d84129ce3646095dce8a8df461e48644a8ccb9f1110f4764d64\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="845fc777a4a36d84129ce3646095dce8a8df461e48644a8ccb9f1110f4764d64" Sep 5 00:32:10.441367 kubelet[2465]: E0905 00:32:10.441285 2465 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"845fc777a4a36d84129ce3646095dce8a8df461e48644a8ccb9f1110f4764d64"} Sep 5 00:32:10.441537 kubelet[2465]: E0905 00:32:10.441509 2465 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b4104721-2685-4b1f-94b6-007263e183aa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"845fc777a4a36d84129ce3646095dce8a8df461e48644a8ccb9f1110f4764d64\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 00:32:10.442303 kubelet[2465]: E0905 00:32:10.441555 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b4104721-2685-4b1f-94b6-007263e183aa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"845fc777a4a36d84129ce3646095dce8a8df461e48644a8ccb9f1110f4764d64\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b94b9658b-jdnpd" podUID="b4104721-2685-4b1f-94b6-007263e183aa" Sep 5 00:32:10.442402 containerd[1446]: time="2025-09-05T00:32:10.441998987Z" level=error msg="StopPodSandbox for \"1d3c80f06bb8177d58c20e8b8e139e3958646547a553cd24b8be6f82dd917a79\" failed" error="failed to destroy network for sandbox \"1d3c80f06bb8177d58c20e8b8e139e3958646547a553cd24b8be6f82dd917a79\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:32:10.443183 kubelet[2465]: E0905 00:32:10.443151 2465 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1d3c80f06bb8177d58c20e8b8e139e3958646547a553cd24b8be6f82dd917a79\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1d3c80f06bb8177d58c20e8b8e139e3958646547a553cd24b8be6f82dd917a79" Sep 5 00:32:10.443302 kubelet[2465]: E0905 00:32:10.443279 2465 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1d3c80f06bb8177d58c20e8b8e139e3958646547a553cd24b8be6f82dd917a79"} Sep 5 00:32:10.443400 kubelet[2465]: E0905 00:32:10.443314 2465 kuberuntime_manager.go:1079] 
"killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7cca8ed9-cabd-4207-ad70-24ca48f24180\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1d3c80f06bb8177d58c20e8b8e139e3958646547a553cd24b8be6f82dd917a79\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 00:32:10.443400 kubelet[2465]: E0905 00:32:10.443357 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7cca8ed9-cabd-4207-ad70-24ca48f24180\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1d3c80f06bb8177d58c20e8b8e139e3958646547a553cd24b8be6f82dd917a79\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7wt2m" podUID="7cca8ed9-cabd-4207-ad70-24ca48f24180" Sep 5 00:32:11.349949 kubelet[2465]: E0905 00:32:11.349917 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:32:13.241838 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2041154158.mount: Deactivated successfully. Sep 5 00:32:13.399941 containerd[1446]: time="2025-09-05T00:32:13.399267357Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:32:13.400272 containerd[1446]: time="2025-09-05T00:32:13.399935878Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Sep 5 00:32:13.400664 containerd[1446]: time="2025-09-05T00:32:13.400644111Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:32:13.402874 containerd[1446]: time="2025-09-05T00:32:13.402747496Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:32:13.403271 containerd[1446]: time="2025-09-05T00:32:13.403247127Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 4.084202684s" Sep 5 00:32:13.403314 containerd[1446]: time="2025-09-05T00:32:13.403278561Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 5 00:32:13.420753 containerd[1446]: time="2025-09-05T00:32:13.420605310Z" level=info msg="CreateContainer within sandbox \"bb355bf4315f3951ace3a2f9ed645365469e0c19a2826e13e891ecf59609e614\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 5 00:32:13.443514 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1990174668.mount: Deactivated successfully. 
Sep 5 00:32:13.447542 containerd[1446]: time="2025-09-05T00:32:13.447499553Z" level=info msg="CreateContainer within sandbox \"bb355bf4315f3951ace3a2f9ed645365469e0c19a2826e13e891ecf59609e614\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"21f7dd7e613d1b084d06c50c532de32cdd371badd4245adc451e8d62d768ad4e\"" Sep 5 00:32:13.448972 containerd[1446]: time="2025-09-05T00:32:13.448264296Z" level=info msg="StartContainer for \"21f7dd7e613d1b084d06c50c532de32cdd371badd4245adc451e8d62d768ad4e\"" Sep 5 00:32:13.498006 systemd[1]: Started cri-containerd-21f7dd7e613d1b084d06c50c532de32cdd371badd4245adc451e8d62d768ad4e.scope - libcontainer container 21f7dd7e613d1b084d06c50c532de32cdd371badd4245adc451e8d62d768ad4e. Sep 5 00:32:13.525586 containerd[1446]: time="2025-09-05T00:32:13.525543590Z" level=info msg="StartContainer for \"21f7dd7e613d1b084d06c50c532de32cdd371badd4245adc451e8d62d768ad4e\" returns successfully" Sep 5 00:32:13.638434 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 5 00:32:13.638551 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Sep 5 00:32:13.727331 containerd[1446]: time="2025-09-05T00:32:13.727274403Z" level=info msg="StopPodSandbox for \"4736016e632a47b15b4e30f05f70413ded89cd08d32f3e62a6aec4284e1fa6dd\"" Sep 5 00:32:13.912456 containerd[1446]: 2025-09-05 00:32:13.824 [INFO][3744] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4736016e632a47b15b4e30f05f70413ded89cd08d32f3e62a6aec4284e1fa6dd" Sep 5 00:32:13.912456 containerd[1446]: 2025-09-05 00:32:13.826 [INFO][3744] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4736016e632a47b15b4e30f05f70413ded89cd08d32f3e62a6aec4284e1fa6dd" iface="eth0" netns="/var/run/netns/cni-09527b04-2c27-18c4-5ac4-5fc59fb30305" Sep 5 00:32:13.912456 containerd[1446]: 2025-09-05 00:32:13.826 [INFO][3744] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4736016e632a47b15b4e30f05f70413ded89cd08d32f3e62a6aec4284e1fa6dd" iface="eth0" netns="/var/run/netns/cni-09527b04-2c27-18c4-5ac4-5fc59fb30305" Sep 5 00:32:13.912456 containerd[1446]: 2025-09-05 00:32:13.827 [INFO][3744] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4736016e632a47b15b4e30f05f70413ded89cd08d32f3e62a6aec4284e1fa6dd" iface="eth0" netns="/var/run/netns/cni-09527b04-2c27-18c4-5ac4-5fc59fb30305" Sep 5 00:32:13.912456 containerd[1446]: 2025-09-05 00:32:13.827 [INFO][3744] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4736016e632a47b15b4e30f05f70413ded89cd08d32f3e62a6aec4284e1fa6dd" Sep 5 00:32:13.912456 containerd[1446]: 2025-09-05 00:32:13.827 [INFO][3744] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4736016e632a47b15b4e30f05f70413ded89cd08d32f3e62a6aec4284e1fa6dd" Sep 5 00:32:13.912456 containerd[1446]: 2025-09-05 00:32:13.897 [INFO][3760] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4736016e632a47b15b4e30f05f70413ded89cd08d32f3e62a6aec4284e1fa6dd" HandleID="k8s-pod-network.4736016e632a47b15b4e30f05f70413ded89cd08d32f3e62a6aec4284e1fa6dd" Workload="localhost-k8s-whisker--6cd887f554--kvqvk-eth0" Sep 5 00:32:13.912456 containerd[1446]: 2025-09-05 00:32:13.897 [INFO][3760] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:32:13.912456 containerd[1446]: 2025-09-05 00:32:13.897 [INFO][3760] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 00:32:13.912456 containerd[1446]: 2025-09-05 00:32:13.907 [WARNING][3760] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4736016e632a47b15b4e30f05f70413ded89cd08d32f3e62a6aec4284e1fa6dd" HandleID="k8s-pod-network.4736016e632a47b15b4e30f05f70413ded89cd08d32f3e62a6aec4284e1fa6dd" Workload="localhost-k8s-whisker--6cd887f554--kvqvk-eth0" Sep 5 00:32:13.912456 containerd[1446]: 2025-09-05 00:32:13.907 [INFO][3760] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4736016e632a47b15b4e30f05f70413ded89cd08d32f3e62a6aec4284e1fa6dd" HandleID="k8s-pod-network.4736016e632a47b15b4e30f05f70413ded89cd08d32f3e62a6aec4284e1fa6dd" Workload="localhost-k8s-whisker--6cd887f554--kvqvk-eth0" Sep 5 00:32:13.912456 containerd[1446]: 2025-09-05 00:32:13.908 [INFO][3760] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:32:13.912456 containerd[1446]: 2025-09-05 00:32:13.910 [INFO][3744] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4736016e632a47b15b4e30f05f70413ded89cd08d32f3e62a6aec4284e1fa6dd" Sep 5 00:32:13.912908 containerd[1446]: time="2025-09-05T00:32:13.912719841Z" level=info msg="TearDown network for sandbox \"4736016e632a47b15b4e30f05f70413ded89cd08d32f3e62a6aec4284e1fa6dd\" successfully" Sep 5 00:32:13.912908 containerd[1446]: time="2025-09-05T00:32:13.912821462Z" level=info msg="StopPodSandbox for \"4736016e632a47b15b4e30f05f70413ded89cd08d32f3e62a6aec4284e1fa6dd\" returns successfully" Sep 5 00:32:14.042139 kubelet[2465]: I0905 00:32:14.042087 2465 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6z4g\" (UniqueName: \"kubernetes.io/projected/eeb330c6-49f9-4171-ba33-9d61da205aa6-kube-api-access-h6z4g\") pod \"eeb330c6-49f9-4171-ba33-9d61da205aa6\" (UID: \"eeb330c6-49f9-4171-ba33-9d61da205aa6\") " Sep 5 00:32:14.042139 kubelet[2465]: I0905 00:32:14.042138 2465 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eeb330c6-49f9-4171-ba33-9d61da205aa6-whisker-ca-bundle\") pod \"eeb330c6-49f9-4171-ba33-9d61da205aa6\" (UID: \"eeb330c6-49f9-4171-ba33-9d61da205aa6\") " Sep 5 00:32:14.042526 kubelet[2465]: I0905 00:32:14.042164 2465 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/eeb330c6-49f9-4171-ba33-9d61da205aa6-whisker-backend-key-pair\") pod \"eeb330c6-49f9-4171-ba33-9d61da205aa6\" (UID: \"eeb330c6-49f9-4171-ba33-9d61da205aa6\") " Sep 5 00:32:14.045493 kubelet[2465]: I0905 00:32:14.045369 2465 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eeb330c6-49f9-4171-ba33-9d61da205aa6-kube-api-access-h6z4g" (OuterVolumeSpecName: "kube-api-access-h6z4g") pod "eeb330c6-49f9-4171-ba33-9d61da205aa6" (UID: "eeb330c6-49f9-4171-ba33-9d61da205aa6"). InnerVolumeSpecName "kube-api-access-h6z4g". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 5 00:32:14.048292 kubelet[2465]: I0905 00:32:14.048127 2465 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eeb330c6-49f9-4171-ba33-9d61da205aa6-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "eeb330c6-49f9-4171-ba33-9d61da205aa6" (UID: "eeb330c6-49f9-4171-ba33-9d61da205aa6"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 5 00:32:14.049769 kubelet[2465]: I0905 00:32:14.049687 2465 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eeb330c6-49f9-4171-ba33-9d61da205aa6-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "eeb330c6-49f9-4171-ba33-9d61da205aa6" (UID: "eeb330c6-49f9-4171-ba33-9d61da205aa6"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 5 00:32:14.143097 kubelet[2465]: I0905 00:32:14.143056 2465 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eeb330c6-49f9-4171-ba33-9d61da205aa6-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 5 00:32:14.143097 kubelet[2465]: I0905 00:32:14.143091 2465 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/eeb330c6-49f9-4171-ba33-9d61da205aa6-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 5 00:32:14.143097 kubelet[2465]: I0905 00:32:14.143101 2465 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6z4g\" (UniqueName: \"kubernetes.io/projected/eeb330c6-49f9-4171-ba33-9d61da205aa6-kube-api-access-h6z4g\") on node \"localhost\" DevicePath \"\"" Sep 5 00:32:14.242887 systemd[1]: run-netns-cni\x2d09527b04\x2d2c27\x2d18c4\x2d5ac4\x2d5fc59fb30305.mount: Deactivated successfully. Sep 5 00:32:14.242978 systemd[1]: var-lib-kubelet-pods-eeb330c6\x2d49f9\x2d4171\x2dba33\x2d9d61da205aa6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dh6z4g.mount: Deactivated successfully. Sep 5 00:32:14.243036 systemd[1]: var-lib-kubelet-pods-eeb330c6\x2d49f9\x2d4171\x2dba33\x2d9d61da205aa6-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 5 00:32:14.361643 systemd[1]: Removed slice kubepods-besteffort-podeeb330c6_49f9_4171_ba33_9d61da205aa6.slice - libcontainer container kubepods-besteffort-podeeb330c6_49f9_4171_ba33_9d61da205aa6.slice. Sep 5 00:32:14.383887 kubelet[2465]: I0905 00:32:14.383318 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-gv7d9" podStartSLOduration=1.8649008999999999 podStartE2EDuration="14.383301586s" podCreationTimestamp="2025-09-05 00:32:00 +0000 UTC" firstStartedPulling="2025-09-05 00:32:00.885512202 +0000 UTC m=+19.756562416" lastFinishedPulling="2025-09-05 00:32:13.403912888 +0000 UTC m=+32.274963102" observedRunningTime="2025-09-05 00:32:14.373425077 +0000 UTC m=+33.244475331" watchObservedRunningTime="2025-09-05 00:32:14.383301586 +0000 UTC m=+33.254351840" Sep 5 00:32:14.421965 systemd[1]: Created slice kubepods-besteffort-pod6e5b187e_c317_42e2_bdb7_6eff6baf9428.slice - libcontainer container kubepods-besteffort-pod6e5b187e_c317_42e2_bdb7_6eff6baf9428.slice. 
Sep 5 00:32:14.545306 kubelet[2465]: I0905 00:32:14.545142 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e5b187e-c317-42e2-bdb7-6eff6baf9428-whisker-ca-bundle\") pod \"whisker-7c7fcd65b9-r2mj9\" (UID: \"6e5b187e-c317-42e2-bdb7-6eff6baf9428\") " pod="calico-system/whisker-7c7fcd65b9-r2mj9" Sep 5 00:32:14.545306 kubelet[2465]: I0905 00:32:14.545183 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ph5pr\" (UniqueName: \"kubernetes.io/projected/6e5b187e-c317-42e2-bdb7-6eff6baf9428-kube-api-access-ph5pr\") pod \"whisker-7c7fcd65b9-r2mj9\" (UID: \"6e5b187e-c317-42e2-bdb7-6eff6baf9428\") " pod="calico-system/whisker-7c7fcd65b9-r2mj9" Sep 5 00:32:14.545306 kubelet[2465]: I0905 00:32:14.545253 2465 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6e5b187e-c317-42e2-bdb7-6eff6baf9428-whisker-backend-key-pair\") pod \"whisker-7c7fcd65b9-r2mj9\" (UID: \"6e5b187e-c317-42e2-bdb7-6eff6baf9428\") " pod="calico-system/whisker-7c7fcd65b9-r2mj9" Sep 5 00:32:14.725764 containerd[1446]: time="2025-09-05T00:32:14.725720039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7c7fcd65b9-r2mj9,Uid:6e5b187e-c317-42e2-bdb7-6eff6baf9428,Namespace:calico-system,Attempt:0,}" Sep 5 00:32:14.858266 systemd-networkd[1384]: cali3b582d5f794: Link UP Sep 5 00:32:14.858551 systemd-networkd[1384]: cali3b582d5f794: Gained carrier Sep 5 00:32:14.873220 containerd[1446]: 2025-09-05 00:32:14.777 [INFO][3783] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 5 00:32:14.873220 containerd[1446]: 2025-09-05 00:32:14.791 [INFO][3783] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--7c7fcd65b9--r2mj9-eth0 whisker-7c7fcd65b9- calico-system 6e5b187e-c317-42e2-bdb7-6eff6baf9428 891 0 2025-09-05 00:32:14 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7c7fcd65b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-7c7fcd65b9-r2mj9 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali3b582d5f794 [] [] }} ContainerID="9cd9127410ebe576265e091f66466cb7d7ad5ef50538b1808d644f9766f8b955" Namespace="calico-system" Pod="whisker-7c7fcd65b9-r2mj9" WorkloadEndpoint="localhost-k8s-whisker--7c7fcd65b9--r2mj9-" Sep 5 00:32:14.873220 containerd[1446]: 2025-09-05 00:32:14.792 [INFO][3783] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9cd9127410ebe576265e091f66466cb7d7ad5ef50538b1808d644f9766f8b955" Namespace="calico-system" Pod="whisker-7c7fcd65b9-r2mj9" WorkloadEndpoint="localhost-k8s-whisker--7c7fcd65b9--r2mj9-eth0" Sep 5 00:32:14.873220 containerd[1446]: 2025-09-05 00:32:14.814 [INFO][3797] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9cd9127410ebe576265e091f66466cb7d7ad5ef50538b1808d644f9766f8b955" HandleID="k8s-pod-network.9cd9127410ebe576265e091f66466cb7d7ad5ef50538b1808d644f9766f8b955" Workload="localhost-k8s-whisker--7c7fcd65b9--r2mj9-eth0" Sep 5 00:32:14.873220 containerd[1446]: 2025-09-05 00:32:14.815 [INFO][3797] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9cd9127410ebe576265e091f66466cb7d7ad5ef50538b1808d644f9766f8b955" 
HandleID="k8s-pod-network.9cd9127410ebe576265e091f66466cb7d7ad5ef50538b1808d644f9766f8b955" Workload="localhost-k8s-whisker--7c7fcd65b9--r2mj9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3200), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-7c7fcd65b9-r2mj9", "timestamp":"2025-09-05 00:32:14.814844294 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:32:14.873220 containerd[1446]: 2025-09-05 00:32:14.815 [INFO][3797] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:32:14.873220 containerd[1446]: 2025-09-05 00:32:14.815 [INFO][3797] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:32:14.873220 containerd[1446]: 2025-09-05 00:32:14.815 [INFO][3797] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:32:14.873220 containerd[1446]: 2025-09-05 00:32:14.825 [INFO][3797] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9cd9127410ebe576265e091f66466cb7d7ad5ef50538b1808d644f9766f8b955" host="localhost" Sep 5 00:32:14.873220 containerd[1446]: 2025-09-05 00:32:14.830 [INFO][3797] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:32:14.873220 containerd[1446]: 2025-09-05 00:32:14.834 [INFO][3797] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:32:14.873220 containerd[1446]: 2025-09-05 00:32:14.836 [INFO][3797] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:32:14.873220 containerd[1446]: 2025-09-05 00:32:14.838 [INFO][3797] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:32:14.873220 containerd[1446]: 2025-09-05 00:32:14.838 [INFO][3797] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9cd9127410ebe576265e091f66466cb7d7ad5ef50538b1808d644f9766f8b955" host="localhost" Sep 5 00:32:14.873220 containerd[1446]: 2025-09-05 00:32:14.840 [INFO][3797] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9cd9127410ebe576265e091f66466cb7d7ad5ef50538b1808d644f9766f8b955 Sep 5 00:32:14.873220 containerd[1446]: 2025-09-05 00:32:14.843 [INFO][3797] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9cd9127410ebe576265e091f66466cb7d7ad5ef50538b1808d644f9766f8b955" host="localhost" Sep 5 00:32:14.873220 containerd[1446]: 2025-09-05 00:32:14.848 [INFO][3797] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.9cd9127410ebe576265e091f66466cb7d7ad5ef50538b1808d644f9766f8b955" host="localhost" Sep 5 00:32:14.873220 containerd[1446]: 2025-09-05 00:32:14.848 [INFO][3797] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.9cd9127410ebe576265e091f66466cb7d7ad5ef50538b1808d644f9766f8b955" host="localhost" Sep 5 00:32:14.873220 containerd[1446]: 2025-09-05 00:32:14.848 [INFO][3797] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 5 00:32:14.873220 containerd[1446]: 2025-09-05 00:32:14.848 [INFO][3797] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="9cd9127410ebe576265e091f66466cb7d7ad5ef50538b1808d644f9766f8b955" HandleID="k8s-pod-network.9cd9127410ebe576265e091f66466cb7d7ad5ef50538b1808d644f9766f8b955" Workload="localhost-k8s-whisker--7c7fcd65b9--r2mj9-eth0" Sep 5 00:32:14.873758 containerd[1446]: 2025-09-05 00:32:14.850 [INFO][3783] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9cd9127410ebe576265e091f66466cb7d7ad5ef50538b1808d644f9766f8b955" Namespace="calico-system" Pod="whisker-7c7fcd65b9-r2mj9" WorkloadEndpoint="localhost-k8s-whisker--7c7fcd65b9--r2mj9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7c7fcd65b9--r2mj9-eth0", GenerateName:"whisker-7c7fcd65b9-", Namespace:"calico-system", SelfLink:"", UID:"6e5b187e-c317-42e2-bdb7-6eff6baf9428", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 32, 14, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7c7fcd65b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-7c7fcd65b9-r2mj9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali3b582d5f794", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:32:14.873758 containerd[1446]: 2025-09-05 00:32:14.851 [INFO][3783] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="9cd9127410ebe576265e091f66466cb7d7ad5ef50538b1808d644f9766f8b955" Namespace="calico-system" Pod="whisker-7c7fcd65b9-r2mj9" WorkloadEndpoint="localhost-k8s-whisker--7c7fcd65b9--r2mj9-eth0" Sep 5 00:32:14.873758 containerd[1446]: 2025-09-05 00:32:14.851 [INFO][3783] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3b582d5f794 ContainerID="9cd9127410ebe576265e091f66466cb7d7ad5ef50538b1808d644f9766f8b955" Namespace="calico-system" Pod="whisker-7c7fcd65b9-r2mj9" WorkloadEndpoint="localhost-k8s-whisker--7c7fcd65b9--r2mj9-eth0" Sep 5 00:32:14.873758 containerd[1446]: 2025-09-05 00:32:14.858 [INFO][3783] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9cd9127410ebe576265e091f66466cb7d7ad5ef50538b1808d644f9766f8b955" Namespace="calico-system" Pod="whisker-7c7fcd65b9-r2mj9" WorkloadEndpoint="localhost-k8s-whisker--7c7fcd65b9--r2mj9-eth0" Sep 5 00:32:14.873758 containerd[1446]: 2025-09-05 00:32:14.859 [INFO][3783] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9cd9127410ebe576265e091f66466cb7d7ad5ef50538b1808d644f9766f8b955" Namespace="calico-system" Pod="whisker-7c7fcd65b9-r2mj9" WorkloadEndpoint="localhost-k8s-whisker--7c7fcd65b9--r2mj9-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7c7fcd65b9--r2mj9-eth0", GenerateName:"whisker-7c7fcd65b9-", Namespace:"calico-system", SelfLink:"", UID:"6e5b187e-c317-42e2-bdb7-6eff6baf9428", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 32, 14, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7c7fcd65b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9cd9127410ebe576265e091f66466cb7d7ad5ef50538b1808d644f9766f8b955", Pod:"whisker-7c7fcd65b9-r2mj9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali3b582d5f794", MAC:"ca:da:08:de:5b:03", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:32:14.873758 containerd[1446]: 2025-09-05 00:32:14.870 [INFO][3783] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9cd9127410ebe576265e091f66466cb7d7ad5ef50538b1808d644f9766f8b955" Namespace="calico-system" Pod="whisker-7c7fcd65b9-r2mj9" WorkloadEndpoint="localhost-k8s-whisker--7c7fcd65b9--r2mj9-eth0" Sep 5 00:32:14.909217 containerd[1446]: time="2025-09-05T00:32:14.892369848Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:32:14.909217 containerd[1446]: time="2025-09-05T00:32:14.909127446Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:32:14.909217 containerd[1446]: time="2025-09-05T00:32:14.909153161Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:32:14.909469 containerd[1446]: time="2025-09-05T00:32:14.909257904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:32:14.935045 systemd[1]: Started cri-containerd-9cd9127410ebe576265e091f66466cb7d7ad5ef50538b1808d644f9766f8b955.scope - libcontainer container 9cd9127410ebe576265e091f66466cb7d7ad5ef50538b1808d644f9766f8b955. 
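The endpoint written to the datastore above now carries the container-side details that were empty in the first "Populated endpoint" dump: the sandbox ContainerID, IP 192.168.88.129/32, host-side interface cali3b582d5f794, and MAC ca:da:08:de:5b:03. That MAC is generated rather than vendor-assigned, which can be read straight off its first octet; a quick stdlib check (illustrative only):

```go
// Quick check on the MAC recorded in the endpoint: container interfaces get
// generated, locally administered unicast addresses rather than vendor OUIs.
package main

import (
	"fmt"
	"net"
)

func main() {
	mac, err := net.ParseMAC("ca:da:08:de:5b:03") // from the WorkloadEndpoint above
	if err != nil {
		panic(err)
	}
	fmt.Println("locally administered:", mac[0]&0x02 != 0) // true: bit 1 of the first octet is set
	fmt.Println("unicast:", mac[0]&0x01 == 0)              // true: group bit clear, as an interface requires
}
```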
Sep 5 00:32:14.972052 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:32:15.004265 containerd[1446]: time="2025-09-05T00:32:15.004199661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7c7fcd65b9-r2mj9,Uid:6e5b187e-c317-42e2-bdb7-6eff6baf9428,Namespace:calico-system,Attempt:0,} returns sandbox id \"9cd9127410ebe576265e091f66466cb7d7ad5ef50538b1808d644f9766f8b955\"" Sep 5 00:32:15.006902 containerd[1446]: time="2025-09-05T00:32:15.006650316Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 5 00:32:15.195902 kernel: bpftool[3978]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 5 00:32:15.231009 kubelet[2465]: I0905 00:32:15.230966 2465 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eeb330c6-49f9-4171-ba33-9d61da205aa6" path="/var/lib/kubelet/pods/eeb330c6-49f9-4171-ba33-9d61da205aa6/volumes" Sep 5 00:32:15.364653 kubelet[2465]: I0905 00:32:15.364612 2465 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 00:32:15.375420 systemd-networkd[1384]: vxlan.calico: Link UP Sep 5 00:32:15.375425 systemd-networkd[1384]: vxlan.calico: Gained carrier Sep 5 00:32:16.262570 containerd[1446]: time="2025-09-05T00:32:16.262519436Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:32:16.264018 containerd[1446]: time="2025-09-05T00:32:16.263537315Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606" Sep 5 00:32:16.272073 containerd[1446]: time="2025-09-05T00:32:16.272035177Z" level=info msg="ImageCreate event name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:32:16.274554 containerd[1446]: time="2025-09-05T00:32:16.274495919Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:32:16.275364 containerd[1446]: time="2025-09-05T00:32:16.275321006Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 1.268628494s" Sep 5 00:32:16.275364 containerd[1446]: time="2025-09-05T00:32:16.275356085Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Sep 5 00:32:16.277573 containerd[1446]: time="2025-09-05T00:32:16.277515679Z" level=info msg="CreateContainer within sandbox \"9cd9127410ebe576265e091f66466cb7d7ad5ef50538b1808d644f9766f8b955\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 5 00:32:16.294965 containerd[1446]: time="2025-09-05T00:32:16.294910866Z" level=info msg="CreateContainer within sandbox \"9cd9127410ebe576265e091f66466cb7d7ad5ef50538b1808d644f9766f8b955\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"a1e3aec79d5c36ca2bee59bd886a0f77f5f32efc07939bd417698b3a0534be10\"" Sep 5 00:32:16.296067 containerd[1446]: 
time="2025-09-05T00:32:16.296025101Z" level=info msg="StartContainer for \"a1e3aec79d5c36ca2bee59bd886a0f77f5f32efc07939bd417698b3a0534be10\"" Sep 5 00:32:16.334051 systemd[1]: Started cri-containerd-a1e3aec79d5c36ca2bee59bd886a0f77f5f32efc07939bd417698b3a0534be10.scope - libcontainer container a1e3aec79d5c36ca2bee59bd886a0f77f5f32efc07939bd417698b3a0534be10. Sep 5 00:32:16.368891 systemd-networkd[1384]: cali3b582d5f794: Gained IPv6LL Sep 5 00:32:16.371804 containerd[1446]: time="2025-09-05T00:32:16.371758525Z" level=info msg="StartContainer for \"a1e3aec79d5c36ca2bee59bd886a0f77f5f32efc07939bd417698b3a0534be10\" returns successfully" Sep 5 00:32:16.374628 containerd[1446]: time="2025-09-05T00:32:16.374603971Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 5 00:32:17.392408 systemd-networkd[1384]: vxlan.calico: Gained IPv6LL Sep 5 00:32:18.608132 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2014461981.mount: Deactivated successfully. Sep 5 00:32:18.697889 containerd[1446]: time="2025-09-05T00:32:18.697825935Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:32:18.698697 containerd[1446]: time="2025-09-05T00:32:18.698397473Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700" Sep 5 00:32:18.699398 containerd[1446]: time="2025-09-05T00:32:18.699347598Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:32:18.701633 containerd[1446]: time="2025-09-05T00:32:18.701593193Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:32:18.702516 containerd[1446]: time="2025-09-05T00:32:18.702486879Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 2.327726395s" Sep 5 00:32:18.702565 containerd[1446]: time="2025-09-05T00:32:18.702521958Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Sep 5 00:32:18.706186 containerd[1446]: time="2025-09-05T00:32:18.706139822Z" level=info msg="CreateContainer within sandbox \"9cd9127410ebe576265e091f66466cb7d7ad5ef50538b1808d644f9766f8b955\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 5 00:32:18.717451 containerd[1446]: time="2025-09-05T00:32:18.717361119Z" level=info msg="CreateContainer within sandbox \"9cd9127410ebe576265e091f66466cb7d7ad5ef50538b1808d644f9766f8b955\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"531c5236843a4b57576d80ade0b64f92f972107ad69572b7703c7c7de837af42\"" Sep 5 00:32:18.717930 containerd[1446]: time="2025-09-05T00:32:18.717900059Z" level=info msg="StartContainer for \"531c5236843a4b57576d80ade0b64f92f972107ad69572b7703c7c7de837af42\"" Sep 5 00:32:18.764069 systemd[1]: Started 
cri-containerd-531c5236843a4b57576d80ade0b64f92f972107ad69572b7703c7c7de837af42.scope - libcontainer container 531c5236843a4b57576d80ade0b64f92f972107ad69572b7703c7c7de837af42. Sep 5 00:32:18.794090 containerd[1446]: time="2025-09-05T00:32:18.794047872Z" level=info msg="StartContainer for \"531c5236843a4b57576d80ade0b64f92f972107ad69572b7703c7c7de837af42\" returns successfully" Sep 5 00:32:19.392130 kubelet[2465]: I0905 00:32:19.392066 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7c7fcd65b9-r2mj9" podStartSLOduration=1.694242473 podStartE2EDuration="5.39204788s" podCreationTimestamp="2025-09-05 00:32:14 +0000 UTC" firstStartedPulling="2025-09-05 00:32:15.005627317 +0000 UTC m=+33.876677531" lastFinishedPulling="2025-09-05 00:32:18.703432684 +0000 UTC m=+37.574482938" observedRunningTime="2025-09-05 00:32:19.391626735 +0000 UTC m=+38.262676989" watchObservedRunningTime="2025-09-05 00:32:19.39204788 +0000 UTC m=+38.263098134" Sep 5 00:32:21.230051 containerd[1446]: time="2025-09-05T00:32:21.229962841Z" level=info msg="StopPodSandbox for \"0893cac7dbee3bfd15934b019a67a4b3a9434d67ef96211579ade423a1e34526\"" Sep 5 00:32:21.309289 containerd[1446]: 2025-09-05 00:32:21.269 [INFO][4169] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0893cac7dbee3bfd15934b019a67a4b3a9434d67ef96211579ade423a1e34526" Sep 5 00:32:21.309289 containerd[1446]: 2025-09-05 00:32:21.269 [INFO][4169] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0893cac7dbee3bfd15934b019a67a4b3a9434d67ef96211579ade423a1e34526" iface="eth0" netns="/var/run/netns/cni-76c566ae-15a6-6e31-2941-d0ef9e6b7d01" Sep 5 00:32:21.309289 containerd[1446]: 2025-09-05 00:32:21.270 [INFO][4169] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0893cac7dbee3bfd15934b019a67a4b3a9434d67ef96211579ade423a1e34526" iface="eth0" netns="/var/run/netns/cni-76c566ae-15a6-6e31-2941-d0ef9e6b7d01" Sep 5 00:32:21.309289 containerd[1446]: 2025-09-05 00:32:21.270 [INFO][4169] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0893cac7dbee3bfd15934b019a67a4b3a9434d67ef96211579ade423a1e34526" iface="eth0" netns="/var/run/netns/cni-76c566ae-15a6-6e31-2941-d0ef9e6b7d01" Sep 5 00:32:21.309289 containerd[1446]: 2025-09-05 00:32:21.270 [INFO][4169] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0893cac7dbee3bfd15934b019a67a4b3a9434d67ef96211579ade423a1e34526" Sep 5 00:32:21.309289 containerd[1446]: 2025-09-05 00:32:21.270 [INFO][4169] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0893cac7dbee3bfd15934b019a67a4b3a9434d67ef96211579ade423a1e34526" Sep 5 00:32:21.309289 containerd[1446]: 2025-09-05 00:32:21.295 [INFO][4178] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0893cac7dbee3bfd15934b019a67a4b3a9434d67ef96211579ade423a1e34526" HandleID="k8s-pod-network.0893cac7dbee3bfd15934b019a67a4b3a9434d67ef96211579ade423a1e34526" Workload="localhost-k8s-calico--apiserver--7b94b9658b--ngd47-eth0" Sep 5 00:32:21.309289 containerd[1446]: 2025-09-05 00:32:21.295 [INFO][4178] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:32:21.309289 containerd[1446]: 2025-09-05 00:32:21.295 [INFO][4178] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:32:21.309289 containerd[1446]: 2025-09-05 00:32:21.304 [WARNING][4178] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0893cac7dbee3bfd15934b019a67a4b3a9434d67ef96211579ade423a1e34526" HandleID="k8s-pod-network.0893cac7dbee3bfd15934b019a67a4b3a9434d67ef96211579ade423a1e34526" Workload="localhost-k8s-calico--apiserver--7b94b9658b--ngd47-eth0" Sep 5 00:32:21.309289 containerd[1446]: 2025-09-05 00:32:21.304 [INFO][4178] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0893cac7dbee3bfd15934b019a67a4b3a9434d67ef96211579ade423a1e34526" HandleID="k8s-pod-network.0893cac7dbee3bfd15934b019a67a4b3a9434d67ef96211579ade423a1e34526" Workload="localhost-k8s-calico--apiserver--7b94b9658b--ngd47-eth0" Sep 5 00:32:21.309289 containerd[1446]: 2025-09-05 00:32:21.305 [INFO][4178] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:32:21.309289 containerd[1446]: 2025-09-05 00:32:21.307 [INFO][4169] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0893cac7dbee3bfd15934b019a67a4b3a9434d67ef96211579ade423a1e34526" Sep 5 00:32:21.309734 containerd[1446]: time="2025-09-05T00:32:21.309442208Z" level=info msg="TearDown network for sandbox \"0893cac7dbee3bfd15934b019a67a4b3a9434d67ef96211579ade423a1e34526\" successfully" Sep 5 00:32:21.309734 containerd[1446]: time="2025-09-05T00:32:21.309468247Z" level=info msg="StopPodSandbox for \"0893cac7dbee3bfd15934b019a67a4b3a9434d67ef96211579ade423a1e34526\" returns successfully" Sep 5 00:32:21.311779 systemd[1]: run-netns-cni\x2d76c566ae\x2d15a6\x2d6e31\x2d2941\x2dd0ef9e6b7d01.mount: Deactivated successfully. Sep 5 00:32:21.312708 containerd[1446]: time="2025-09-05T00:32:21.312097676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b94b9658b-ngd47,Uid:90fd82e1-4b15-4224-8bf1-2c50a55938a1,Namespace:calico-apiserver,Attempt:1,}" Sep 5 00:32:21.441658 systemd-networkd[1384]: cali0812e51c646: Link UP Sep 5 00:32:21.442551 systemd-networkd[1384]: cali0812e51c646: Gained carrier Sep 5 00:32:21.457244 containerd[1446]: 2025-09-05 00:32:21.383 [INFO][4187] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7b94b9658b--ngd47-eth0 calico-apiserver-7b94b9658b- calico-apiserver 90fd82e1-4b15-4224-8bf1-2c50a55938a1 923 0 2025-09-05 00:31:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7b94b9658b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7b94b9658b-ngd47 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0812e51c646 [] [] }} ContainerID="03e22a11c913ed9e2632d71523116f0e2d4d71a227394cd0dd7accc15d79fd53" Namespace="calico-apiserver" Pod="calico-apiserver-7b94b9658b-ngd47" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b94b9658b--ngd47-" Sep 5 00:32:21.457244 containerd[1446]: 2025-09-05 00:32:21.383 [INFO][4187] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="03e22a11c913ed9e2632d71523116f0e2d4d71a227394cd0dd7accc15d79fd53" Namespace="calico-apiserver" Pod="calico-apiserver-7b94b9658b-ngd47" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b94b9658b--ngd47-eth0" Sep 5 00:32:21.457244 containerd[1446]: 2025-09-05 00:32:21.407 [INFO][4201] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="03e22a11c913ed9e2632d71523116f0e2d4d71a227394cd0dd7accc15d79fd53" 
HandleID="k8s-pod-network.03e22a11c913ed9e2632d71523116f0e2d4d71a227394cd0dd7accc15d79fd53" Workload="localhost-k8s-calico--apiserver--7b94b9658b--ngd47-eth0" Sep 5 00:32:21.457244 containerd[1446]: 2025-09-05 00:32:21.407 [INFO][4201] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="03e22a11c913ed9e2632d71523116f0e2d4d71a227394cd0dd7accc15d79fd53" HandleID="k8s-pod-network.03e22a11c913ed9e2632d71523116f0e2d4d71a227394cd0dd7accc15d79fd53" Workload="localhost-k8s-calico--apiserver--7b94b9658b--ngd47-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003addc0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7b94b9658b-ngd47", "timestamp":"2025-09-05 00:32:21.407154184 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:32:21.457244 containerd[1446]: 2025-09-05 00:32:21.407 [INFO][4201] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:32:21.457244 containerd[1446]: 2025-09-05 00:32:21.407 [INFO][4201] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:32:21.457244 containerd[1446]: 2025-09-05 00:32:21.407 [INFO][4201] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:32:21.457244 containerd[1446]: 2025-09-05 00:32:21.416 [INFO][4201] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.03e22a11c913ed9e2632d71523116f0e2d4d71a227394cd0dd7accc15d79fd53" host="localhost" Sep 5 00:32:21.457244 containerd[1446]: 2025-09-05 00:32:21.420 [INFO][4201] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:32:21.457244 containerd[1446]: 2025-09-05 00:32:21.423 [INFO][4201] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:32:21.457244 containerd[1446]: 2025-09-05 00:32:21.425 [INFO][4201] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:32:21.457244 containerd[1446]: 2025-09-05 00:32:21.427 [INFO][4201] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:32:21.457244 containerd[1446]: 2025-09-05 00:32:21.427 [INFO][4201] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.03e22a11c913ed9e2632d71523116f0e2d4d71a227394cd0dd7accc15d79fd53" host="localhost" Sep 5 00:32:21.457244 containerd[1446]: 2025-09-05 00:32:21.428 [INFO][4201] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.03e22a11c913ed9e2632d71523116f0e2d4d71a227394cd0dd7accc15d79fd53 Sep 5 00:32:21.457244 containerd[1446]: 2025-09-05 00:32:21.431 [INFO][4201] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.03e22a11c913ed9e2632d71523116f0e2d4d71a227394cd0dd7accc15d79fd53" host="localhost" Sep 5 00:32:21.457244 containerd[1446]: 2025-09-05 00:32:21.437 [INFO][4201] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.03e22a11c913ed9e2632d71523116f0e2d4d71a227394cd0dd7accc15d79fd53" host="localhost" Sep 5 00:32:21.457244 containerd[1446]: 2025-09-05 00:32:21.437 [INFO][4201] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.03e22a11c913ed9e2632d71523116f0e2d4d71a227394cd0dd7accc15d79fd53" 
host="localhost" Sep 5 00:32:21.457244 containerd[1446]: 2025-09-05 00:32:21.437 [INFO][4201] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:32:21.457244 containerd[1446]: 2025-09-05 00:32:21.437 [INFO][4201] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="03e22a11c913ed9e2632d71523116f0e2d4d71a227394cd0dd7accc15d79fd53" HandleID="k8s-pod-network.03e22a11c913ed9e2632d71523116f0e2d4d71a227394cd0dd7accc15d79fd53" Workload="localhost-k8s-calico--apiserver--7b94b9658b--ngd47-eth0" Sep 5 00:32:21.457876 containerd[1446]: 2025-09-05 00:32:21.439 [INFO][4187] cni-plugin/k8s.go 418: Populated endpoint ContainerID="03e22a11c913ed9e2632d71523116f0e2d4d71a227394cd0dd7accc15d79fd53" Namespace="calico-apiserver" Pod="calico-apiserver-7b94b9658b-ngd47" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b94b9658b--ngd47-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7b94b9658b--ngd47-eth0", GenerateName:"calico-apiserver-7b94b9658b-", Namespace:"calico-apiserver", SelfLink:"", UID:"90fd82e1-4b15-4224-8bf1-2c50a55938a1", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 31, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b94b9658b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7b94b9658b-ngd47", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0812e51c646", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:32:21.457876 containerd[1446]: 2025-09-05 00:32:21.439 [INFO][4187] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="03e22a11c913ed9e2632d71523116f0e2d4d71a227394cd0dd7accc15d79fd53" Namespace="calico-apiserver" Pod="calico-apiserver-7b94b9658b-ngd47" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b94b9658b--ngd47-eth0" Sep 5 00:32:21.457876 containerd[1446]: 2025-09-05 00:32:21.439 [INFO][4187] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0812e51c646 ContainerID="03e22a11c913ed9e2632d71523116f0e2d4d71a227394cd0dd7accc15d79fd53" Namespace="calico-apiserver" Pod="calico-apiserver-7b94b9658b-ngd47" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b94b9658b--ngd47-eth0" Sep 5 00:32:21.457876 containerd[1446]: 2025-09-05 00:32:21.442 [INFO][4187] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="03e22a11c913ed9e2632d71523116f0e2d4d71a227394cd0dd7accc15d79fd53" Namespace="calico-apiserver" Pod="calico-apiserver-7b94b9658b-ngd47" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b94b9658b--ngd47-eth0" Sep 5 00:32:21.457876 containerd[1446]: 2025-09-05 
00:32:21.442 [INFO][4187] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="03e22a11c913ed9e2632d71523116f0e2d4d71a227394cd0dd7accc15d79fd53" Namespace="calico-apiserver" Pod="calico-apiserver-7b94b9658b-ngd47" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b94b9658b--ngd47-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7b94b9658b--ngd47-eth0", GenerateName:"calico-apiserver-7b94b9658b-", Namespace:"calico-apiserver", SelfLink:"", UID:"90fd82e1-4b15-4224-8bf1-2c50a55938a1", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 31, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b94b9658b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"03e22a11c913ed9e2632d71523116f0e2d4d71a227394cd0dd7accc15d79fd53", Pod:"calico-apiserver-7b94b9658b-ngd47", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0812e51c646", MAC:"66:54:58:fb:95:25", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:32:21.457876 containerd[1446]: 2025-09-05 00:32:21.455 [INFO][4187] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="03e22a11c913ed9e2632d71523116f0e2d4d71a227394cd0dd7accc15d79fd53" Namespace="calico-apiserver" Pod="calico-apiserver-7b94b9658b-ngd47" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b94b9658b--ngd47-eth0" Sep 5 00:32:21.474028 containerd[1446]: time="2025-09-05T00:32:21.473691079Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:32:21.474028 containerd[1446]: time="2025-09-05T00:32:21.473750517Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:32:21.474028 containerd[1446]: time="2025-09-05T00:32:21.473761597Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:32:21.474028 containerd[1446]: time="2025-09-05T00:32:21.473840354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:32:21.500071 systemd[1]: Started cri-containerd-03e22a11c913ed9e2632d71523116f0e2d4d71a227394cd0dd7accc15d79fd53.scope - libcontainer container 03e22a11c913ed9e2632d71523116f0e2d4d71a227394cd0dd7accc15d79fd53. 
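The earlier whisker pod_startup_latency_tracker entry is worth decoding, since the durations it reports are mutually consistent (pod creation timestamps carry whole-second granularity, and the m=+... values are monotonic-clock offsets):

    podStartE2EDuration = watchObservedRunningTime - podCreationTimestamp
                        = 00:32:19.39204788 - 00:32:14
                        = 5.39204788s

    image pull window   = lastFinishedPulling - firstStartedPulling
                        = (m=+37.574482938) - (m=+33.876677531)
                        = 3.697805407s

    podStartSLOduration = podStartE2EDuration - image pull window
                        = 5.39204788 - 3.697805407
                        = 1.694242473s

In other words, the SLO figure excludes time spent pulling images. The two containerd messages "Pulled image ... in 1.268628494s" and "... in 2.327726395s" account for most of that window; the remainder is the gap between the whisker container starting and the whisker-backend pull beginning.
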
Sep 5 00:32:21.513716 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:32:21.538814 containerd[1446]: time="2025-09-05T00:32:21.538744786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b94b9658b-ngd47,Uid:90fd82e1-4b15-4224-8bf1-2c50a55938a1,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"03e22a11c913ed9e2632d71523116f0e2d4d71a227394cd0dd7accc15d79fd53\"" Sep 5 00:32:21.540704 containerd[1446]: time="2025-09-05T00:32:21.540660400Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 5 00:32:22.959995 systemd-networkd[1384]: cali0812e51c646: Gained IPv6LL Sep 5 00:32:23.122931 kubelet[2465]: I0905 00:32:23.122879 2465 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 00:32:23.166449 systemd[1]: run-containerd-runc-k8s.io-21f7dd7e613d1b084d06c50c532de32cdd371badd4245adc451e8d62d768ad4e-runc.XS2yqy.mount: Deactivated successfully. Sep 5 00:32:23.230933 containerd[1446]: time="2025-09-05T00:32:23.230793792Z" level=info msg="StopPodSandbox for \"1003f23274bc0b838f0f917a029aba5c6d871ebc11763e025681afdab086d9aa\"" Sep 5 00:32:23.231528 containerd[1446]: time="2025-09-05T00:32:23.231146700Z" level=info msg="StopPodSandbox for \"e714b25324219a1c25792314649e68dc99079fff5b9dd9159b644379c4079743\"" Sep 5 00:32:23.232935 containerd[1446]: time="2025-09-05T00:32:23.231281056Z" level=info msg="StopPodSandbox for \"1d3c80f06bb8177d58c20e8b8e139e3958646547a553cd24b8be6f82dd917a79\"" Sep 5 00:32:23.385601 containerd[1446]: 2025-09-05 00:32:23.306 [INFO][4314] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e714b25324219a1c25792314649e68dc99079fff5b9dd9159b644379c4079743" Sep 5 00:32:23.385601 containerd[1446]: 2025-09-05 00:32:23.306 [INFO][4314] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e714b25324219a1c25792314649e68dc99079fff5b9dd9159b644379c4079743" iface="eth0" netns="/var/run/netns/cni-0d9e8c44-ec7f-4794-fb66-670a9ec7e3ad" Sep 5 00:32:23.385601 containerd[1446]: 2025-09-05 00:32:23.313 [INFO][4314] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e714b25324219a1c25792314649e68dc99079fff5b9dd9159b644379c4079743" iface="eth0" netns="/var/run/netns/cni-0d9e8c44-ec7f-4794-fb66-670a9ec7e3ad" Sep 5 00:32:23.385601 containerd[1446]: 2025-09-05 00:32:23.317 [INFO][4314] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e714b25324219a1c25792314649e68dc99079fff5b9dd9159b644379c4079743" iface="eth0" netns="/var/run/netns/cni-0d9e8c44-ec7f-4794-fb66-670a9ec7e3ad" Sep 5 00:32:23.385601 containerd[1446]: 2025-09-05 00:32:23.318 [INFO][4314] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e714b25324219a1c25792314649e68dc99079fff5b9dd9159b644379c4079743" Sep 5 00:32:23.385601 containerd[1446]: 2025-09-05 00:32:23.318 [INFO][4314] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e714b25324219a1c25792314649e68dc99079fff5b9dd9159b644379c4079743" Sep 5 00:32:23.385601 containerd[1446]: 2025-09-05 00:32:23.357 [INFO][4367] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e714b25324219a1c25792314649e68dc99079fff5b9dd9159b644379c4079743" HandleID="k8s-pod-network.e714b25324219a1c25792314649e68dc99079fff5b9dd9159b644379c4079743" Workload="localhost-k8s-goldmane--7988f88666--s4h56-eth0" Sep 5 00:32:23.385601 containerd[1446]: 2025-09-05 00:32:23.357 [INFO][4367] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:32:23.385601 containerd[1446]: 2025-09-05 00:32:23.357 [INFO][4367] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:32:23.385601 containerd[1446]: 2025-09-05 00:32:23.370 [WARNING][4367] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e714b25324219a1c25792314649e68dc99079fff5b9dd9159b644379c4079743" HandleID="k8s-pod-network.e714b25324219a1c25792314649e68dc99079fff5b9dd9159b644379c4079743" Workload="localhost-k8s-goldmane--7988f88666--s4h56-eth0" Sep 5 00:32:23.385601 containerd[1446]: 2025-09-05 00:32:23.370 [INFO][4367] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e714b25324219a1c25792314649e68dc99079fff5b9dd9159b644379c4079743" HandleID="k8s-pod-network.e714b25324219a1c25792314649e68dc99079fff5b9dd9159b644379c4079743" Workload="localhost-k8s-goldmane--7988f88666--s4h56-eth0" Sep 5 00:32:23.385601 containerd[1446]: 2025-09-05 00:32:23.373 [INFO][4367] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:32:23.385601 containerd[1446]: 2025-09-05 00:32:23.382 [INFO][4314] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e714b25324219a1c25792314649e68dc99079fff5b9dd9159b644379c4079743" Sep 5 00:32:23.386390 containerd[1446]: time="2025-09-05T00:32:23.386337574Z" level=info msg="TearDown network for sandbox \"e714b25324219a1c25792314649e68dc99079fff5b9dd9159b644379c4079743\" successfully" Sep 5 00:32:23.386690 containerd[1446]: time="2025-09-05T00:32:23.386668003Z" level=info msg="StopPodSandbox for \"e714b25324219a1c25792314649e68dc99079fff5b9dd9159b644379c4079743\" returns successfully" Sep 5 00:32:23.388888 containerd[1446]: time="2025-09-05T00:32:23.387398379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-s4h56,Uid:d3723c38-1afc-40d9-9c9f-ed74aafda062,Namespace:calico-system,Attempt:1,}" Sep 5 00:32:23.389732 systemd[1]: run-netns-cni\x2d0d9e8c44\x2dec7f\x2d4794\x2dfb66\x2d670a9ec7e3ad.mount: Deactivated successfully. Sep 5 00:32:23.401024 containerd[1446]: 2025-09-05 00:32:23.328 [INFO][4322] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1d3c80f06bb8177d58c20e8b8e139e3958646547a553cd24b8be6f82dd917a79" Sep 5 00:32:23.401024 containerd[1446]: 2025-09-05 00:32:23.328 [INFO][4322] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="1d3c80f06bb8177d58c20e8b8e139e3958646547a553cd24b8be6f82dd917a79" iface="eth0" netns="/var/run/netns/cni-ef798dcf-a9b9-e263-cf4e-0c1df9f00236" Sep 5 00:32:23.401024 containerd[1446]: 2025-09-05 00:32:23.329 [INFO][4322] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1d3c80f06bb8177d58c20e8b8e139e3958646547a553cd24b8be6f82dd917a79" iface="eth0" netns="/var/run/netns/cni-ef798dcf-a9b9-e263-cf4e-0c1df9f00236" Sep 5 00:32:23.401024 containerd[1446]: 2025-09-05 00:32:23.329 [INFO][4322] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1d3c80f06bb8177d58c20e8b8e139e3958646547a553cd24b8be6f82dd917a79" iface="eth0" netns="/var/run/netns/cni-ef798dcf-a9b9-e263-cf4e-0c1df9f00236" Sep 5 00:32:23.401024 containerd[1446]: 2025-09-05 00:32:23.329 [INFO][4322] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1d3c80f06bb8177d58c20e8b8e139e3958646547a553cd24b8be6f82dd917a79" Sep 5 00:32:23.401024 containerd[1446]: 2025-09-05 00:32:23.329 [INFO][4322] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1d3c80f06bb8177d58c20e8b8e139e3958646547a553cd24b8be6f82dd917a79" Sep 5 00:32:23.401024 containerd[1446]: 2025-09-05 00:32:23.371 [INFO][4373] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1d3c80f06bb8177d58c20e8b8e139e3958646547a553cd24b8be6f82dd917a79" HandleID="k8s-pod-network.1d3c80f06bb8177d58c20e8b8e139e3958646547a553cd24b8be6f82dd917a79" Workload="localhost-k8s-csi--node--driver--7wt2m-eth0" Sep 5 00:32:23.401024 containerd[1446]: 2025-09-05 00:32:23.371 [INFO][4373] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:32:23.401024 containerd[1446]: 2025-09-05 00:32:23.373 [INFO][4373] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:32:23.401024 containerd[1446]: 2025-09-05 00:32:23.390 [WARNING][4373] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1d3c80f06bb8177d58c20e8b8e139e3958646547a553cd24b8be6f82dd917a79" HandleID="k8s-pod-network.1d3c80f06bb8177d58c20e8b8e139e3958646547a553cd24b8be6f82dd917a79" Workload="localhost-k8s-csi--node--driver--7wt2m-eth0" Sep 5 00:32:23.401024 containerd[1446]: 2025-09-05 00:32:23.391 [INFO][4373] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1d3c80f06bb8177d58c20e8b8e139e3958646547a553cd24b8be6f82dd917a79" HandleID="k8s-pod-network.1d3c80f06bb8177d58c20e8b8e139e3958646547a553cd24b8be6f82dd917a79" Workload="localhost-k8s-csi--node--driver--7wt2m-eth0" Sep 5 00:32:23.401024 containerd[1446]: 2025-09-05 00:32:23.394 [INFO][4373] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:32:23.401024 containerd[1446]: 2025-09-05 00:32:23.397 [INFO][4322] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1d3c80f06bb8177d58c20e8b8e139e3958646547a553cd24b8be6f82dd917a79" Sep 5 00:32:23.403317 systemd[1]: run-netns-cni\x2def798dcf\x2da9b9\x2de263\x2dcf4e\x2d0c1df9f00236.mount: Deactivated successfully. 
Sep 5 00:32:23.403614 containerd[1446]: time="2025-09-05T00:32:23.403576489Z" level=info msg="TearDown network for sandbox \"1d3c80f06bb8177d58c20e8b8e139e3958646547a553cd24b8be6f82dd917a79\" successfully" Sep 5 00:32:23.403676 containerd[1446]: time="2025-09-05T00:32:23.403616687Z" level=info msg="StopPodSandbox for \"1d3c80f06bb8177d58c20e8b8e139e3958646547a553cd24b8be6f82dd917a79\" returns successfully" Sep 5 00:32:23.404962 containerd[1446]: time="2025-09-05T00:32:23.404878006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7wt2m,Uid:7cca8ed9-cabd-4207-ad70-24ca48f24180,Namespace:calico-system,Attempt:1,}" Sep 5 00:32:23.415579 containerd[1446]: 2025-09-05 00:32:23.349 [INFO][4321] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1003f23274bc0b838f0f917a029aba5c6d871ebc11763e025681afdab086d9aa" Sep 5 00:32:23.415579 containerd[1446]: 2025-09-05 00:32:23.349 [INFO][4321] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1003f23274bc0b838f0f917a029aba5c6d871ebc11763e025681afdab086d9aa" iface="eth0" netns="/var/run/netns/cni-3cd08f00-5833-a2ef-214c-12a584e754aa" Sep 5 00:32:23.415579 containerd[1446]: 2025-09-05 00:32:23.349 [INFO][4321] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1003f23274bc0b838f0f917a029aba5c6d871ebc11763e025681afdab086d9aa" iface="eth0" netns="/var/run/netns/cni-3cd08f00-5833-a2ef-214c-12a584e754aa" Sep 5 00:32:23.415579 containerd[1446]: 2025-09-05 00:32:23.349 [INFO][4321] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1003f23274bc0b838f0f917a029aba5c6d871ebc11763e025681afdab086d9aa" iface="eth0" netns="/var/run/netns/cni-3cd08f00-5833-a2ef-214c-12a584e754aa" Sep 5 00:32:23.415579 containerd[1446]: 2025-09-05 00:32:23.349 [INFO][4321] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1003f23274bc0b838f0f917a029aba5c6d871ebc11763e025681afdab086d9aa" Sep 5 00:32:23.415579 containerd[1446]: 2025-09-05 00:32:23.349 [INFO][4321] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1003f23274bc0b838f0f917a029aba5c6d871ebc11763e025681afdab086d9aa" Sep 5 00:32:23.415579 containerd[1446]: 2025-09-05 00:32:23.395 [INFO][4382] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1003f23274bc0b838f0f917a029aba5c6d871ebc11763e025681afdab086d9aa" HandleID="k8s-pod-network.1003f23274bc0b838f0f917a029aba5c6d871ebc11763e025681afdab086d9aa" Workload="localhost-k8s-coredns--7c65d6cfc9--c8wqh-eth0" Sep 5 00:32:23.415579 containerd[1446]: 2025-09-05 00:32:23.396 [INFO][4382] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:32:23.415579 containerd[1446]: 2025-09-05 00:32:23.396 [INFO][4382] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:32:23.415579 containerd[1446]: 2025-09-05 00:32:23.408 [WARNING][4382] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1003f23274bc0b838f0f917a029aba5c6d871ebc11763e025681afdab086d9aa" HandleID="k8s-pod-network.1003f23274bc0b838f0f917a029aba5c6d871ebc11763e025681afdab086d9aa" Workload="localhost-k8s-coredns--7c65d6cfc9--c8wqh-eth0" Sep 5 00:32:23.415579 containerd[1446]: 2025-09-05 00:32:23.408 [INFO][4382] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1003f23274bc0b838f0f917a029aba5c6d871ebc11763e025681afdab086d9aa" HandleID="k8s-pod-network.1003f23274bc0b838f0f917a029aba5c6d871ebc11763e025681afdab086d9aa" Workload="localhost-k8s-coredns--7c65d6cfc9--c8wqh-eth0" Sep 5 00:32:23.415579 containerd[1446]: 2025-09-05 00:32:23.410 [INFO][4382] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:32:23.415579 containerd[1446]: 2025-09-05 00:32:23.413 [INFO][4321] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1003f23274bc0b838f0f917a029aba5c6d871ebc11763e025681afdab086d9aa" Sep 5 00:32:23.415579 containerd[1446]: time="2025-09-05T00:32:23.415491418Z" level=info msg="TearDown network for sandbox \"1003f23274bc0b838f0f917a029aba5c6d871ebc11763e025681afdab086d9aa\" successfully" Sep 5 00:32:23.415579 containerd[1446]: time="2025-09-05T00:32:23.415514177Z" level=info msg="StopPodSandbox for \"1003f23274bc0b838f0f917a029aba5c6d871ebc11763e025681afdab086d9aa\" returns successfully" Sep 5 00:32:23.416226 kubelet[2465]: E0905 00:32:23.415907 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:32:23.416462 containerd[1446]: time="2025-09-05T00:32:23.416434387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-c8wqh,Uid:192e7eea-9c31-4354-894f-59feed59071c,Namespace:kube-system,Attempt:1,}" Sep 5 00:32:23.536314 systemd-networkd[1384]: caliba5b7f6a825: Link UP Sep 5 00:32:23.536504 systemd-networkd[1384]: caliba5b7f6a825: Gained carrier Sep 5 00:32:23.554106 containerd[1446]: 2025-09-05 00:32:23.462 [INFO][4393] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7988f88666--s4h56-eth0 goldmane-7988f88666- calico-system d3723c38-1afc-40d9-9c9f-ed74aafda062 937 0 2025-09-05 00:31:59 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7988f88666-s4h56 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] caliba5b7f6a825 [] [] }} ContainerID="6152aff9e9f5741a7391f4609ed68237301c01da7d499e969971648f42293eda" Namespace="calico-system" Pod="goldmane-7988f88666-s4h56" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--s4h56-" Sep 5 00:32:23.554106 containerd[1446]: 2025-09-05 00:32:23.462 [INFO][4393] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6152aff9e9f5741a7391f4609ed68237301c01da7d499e969971648f42293eda" Namespace="calico-system" Pod="goldmane-7988f88666-s4h56" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--s4h56-eth0" Sep 5 00:32:23.554106 containerd[1446]: 2025-09-05 00:32:23.495 [INFO][4434] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6152aff9e9f5741a7391f4609ed68237301c01da7d499e969971648f42293eda" HandleID="k8s-pod-network.6152aff9e9f5741a7391f4609ed68237301c01da7d499e969971648f42293eda" 
Workload="localhost-k8s-goldmane--7988f88666--s4h56-eth0" Sep 5 00:32:23.554106 containerd[1446]: 2025-09-05 00:32:23.495 [INFO][4434] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6152aff9e9f5741a7391f4609ed68237301c01da7d499e969971648f42293eda" HandleID="k8s-pod-network.6152aff9e9f5741a7391f4609ed68237301c01da7d499e969971648f42293eda" Workload="localhost-k8s-goldmane--7988f88666--s4h56-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000136790), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7988f88666-s4h56", "timestamp":"2025-09-05 00:32:23.49535512 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:32:23.554106 containerd[1446]: 2025-09-05 00:32:23.495 [INFO][4434] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:32:23.554106 containerd[1446]: 2025-09-05 00:32:23.495 [INFO][4434] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:32:23.554106 containerd[1446]: 2025-09-05 00:32:23.495 [INFO][4434] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:32:23.554106 containerd[1446]: 2025-09-05 00:32:23.506 [INFO][4434] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6152aff9e9f5741a7391f4609ed68237301c01da7d499e969971648f42293eda" host="localhost" Sep 5 00:32:23.554106 containerd[1446]: 2025-09-05 00:32:23.512 [INFO][4434] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:32:23.554106 containerd[1446]: 2025-09-05 00:32:23.516 [INFO][4434] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:32:23.554106 containerd[1446]: 2025-09-05 00:32:23.518 [INFO][4434] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:32:23.554106 containerd[1446]: 2025-09-05 00:32:23.520 [INFO][4434] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:32:23.554106 containerd[1446]: 2025-09-05 00:32:23.520 [INFO][4434] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6152aff9e9f5741a7391f4609ed68237301c01da7d499e969971648f42293eda" host="localhost" Sep 5 00:32:23.554106 containerd[1446]: 2025-09-05 00:32:23.521 [INFO][4434] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6152aff9e9f5741a7391f4609ed68237301c01da7d499e969971648f42293eda Sep 5 00:32:23.554106 containerd[1446]: 2025-09-05 00:32:23.525 [INFO][4434] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6152aff9e9f5741a7391f4609ed68237301c01da7d499e969971648f42293eda" host="localhost" Sep 5 00:32:23.554106 containerd[1446]: 2025-09-05 00:32:23.531 [INFO][4434] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.6152aff9e9f5741a7391f4609ed68237301c01da7d499e969971648f42293eda" host="localhost" Sep 5 00:32:23.554106 containerd[1446]: 2025-09-05 00:32:23.531 [INFO][4434] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.6152aff9e9f5741a7391f4609ed68237301c01da7d499e969971648f42293eda" host="localhost" Sep 5 00:32:23.554106 containerd[1446]: 2025-09-05 00:32:23.531 [INFO][4434] ipam/ipam_plugin.go 374: Released 
host-wide IPAM lock. Sep 5 00:32:23.554106 containerd[1446]: 2025-09-05 00:32:23.531 [INFO][4434] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="6152aff9e9f5741a7391f4609ed68237301c01da7d499e969971648f42293eda" HandleID="k8s-pod-network.6152aff9e9f5741a7391f4609ed68237301c01da7d499e969971648f42293eda" Workload="localhost-k8s-goldmane--7988f88666--s4h56-eth0" Sep 5 00:32:23.554767 containerd[1446]: 2025-09-05 00:32:23.533 [INFO][4393] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6152aff9e9f5741a7391f4609ed68237301c01da7d499e969971648f42293eda" Namespace="calico-system" Pod="goldmane-7988f88666-s4h56" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--s4h56-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--s4h56-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"d3723c38-1afc-40d9-9c9f-ed74aafda062", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 31, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7988f88666-s4h56", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliba5b7f6a825", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:32:23.554767 containerd[1446]: 2025-09-05 00:32:23.533 [INFO][4393] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="6152aff9e9f5741a7391f4609ed68237301c01da7d499e969971648f42293eda" Namespace="calico-system" Pod="goldmane-7988f88666-s4h56" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--s4h56-eth0" Sep 5 00:32:23.554767 containerd[1446]: 2025-09-05 00:32:23.533 [INFO][4393] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliba5b7f6a825 ContainerID="6152aff9e9f5741a7391f4609ed68237301c01da7d499e969971648f42293eda" Namespace="calico-system" Pod="goldmane-7988f88666-s4h56" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--s4h56-eth0" Sep 5 00:32:23.554767 containerd[1446]: 2025-09-05 00:32:23.536 [INFO][4393] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6152aff9e9f5741a7391f4609ed68237301c01da7d499e969971648f42293eda" Namespace="calico-system" Pod="goldmane-7988f88666-s4h56" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--s4h56-eth0" Sep 5 00:32:23.554767 containerd[1446]: 2025-09-05 00:32:23.538 [INFO][4393] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6152aff9e9f5741a7391f4609ed68237301c01da7d499e969971648f42293eda" Namespace="calico-system" Pod="goldmane-7988f88666-s4h56" 
WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--s4h56-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--s4h56-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"d3723c38-1afc-40d9-9c9f-ed74aafda062", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 31, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6152aff9e9f5741a7391f4609ed68237301c01da7d499e969971648f42293eda", Pod:"goldmane-7988f88666-s4h56", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliba5b7f6a825", MAC:"8e:5b:e3:91:0f:05", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:32:23.554767 containerd[1446]: 2025-09-05 00:32:23.552 [INFO][4393] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6152aff9e9f5741a7391f4609ed68237301c01da7d499e969971648f42293eda" Namespace="calico-system" Pod="goldmane-7988f88666-s4h56" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--s4h56-eth0" Sep 5 00:32:23.569332 containerd[1446]: time="2025-09-05T00:32:23.569232339Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:32:23.569332 containerd[1446]: time="2025-09-05T00:32:23.569300057Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:32:23.569332 containerd[1446]: time="2025-09-05T00:32:23.569311816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:32:23.569535 containerd[1446]: time="2025-09-05T00:32:23.569385574Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:32:23.587086 systemd[1]: Started cri-containerd-6152aff9e9f5741a7391f4609ed68237301c01da7d499e969971648f42293eda.scope - libcontainer container 6152aff9e9f5741a7391f4609ed68237301c01da7d499e969971648f42293eda. 
Sep 5 00:32:23.598004 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:32:23.633369 containerd[1446]: time="2025-09-05T00:32:23.633306039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-s4h56,Uid:d3723c38-1afc-40d9-9c9f-ed74aafda062,Namespace:calico-system,Attempt:1,} returns sandbox id \"6152aff9e9f5741a7391f4609ed68237301c01da7d499e969971648f42293eda\"" Sep 5 00:32:23.660248 systemd-networkd[1384]: calic6e18199175: Link UP Sep 5 00:32:23.662386 systemd-networkd[1384]: calic6e18199175: Gained carrier Sep 5 00:32:23.692014 containerd[1446]: 2025-09-05 00:32:23.474 [INFO][4405] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--7wt2m-eth0 csi-node-driver- calico-system 7cca8ed9-cabd-4207-ad70-24ca48f24180 938 0 2025-09-05 00:32:00 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-7wt2m eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calic6e18199175 [] [] }} ContainerID="41af5b36212e9d4860fdfdc0e6fda4dee38da12ad59831d8ecc1b1ff179e129a" Namespace="calico-system" Pod="csi-node-driver-7wt2m" WorkloadEndpoint="localhost-k8s-csi--node--driver--7wt2m-" Sep 5 00:32:23.692014 containerd[1446]: 2025-09-05 00:32:23.475 [INFO][4405] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="41af5b36212e9d4860fdfdc0e6fda4dee38da12ad59831d8ecc1b1ff179e129a" Namespace="calico-system" Pod="csi-node-driver-7wt2m" WorkloadEndpoint="localhost-k8s-csi--node--driver--7wt2m-eth0" Sep 5 00:32:23.692014 containerd[1446]: 2025-09-05 00:32:23.511 [INFO][4442] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="41af5b36212e9d4860fdfdc0e6fda4dee38da12ad59831d8ecc1b1ff179e129a" HandleID="k8s-pod-network.41af5b36212e9d4860fdfdc0e6fda4dee38da12ad59831d8ecc1b1ff179e129a" Workload="localhost-k8s-csi--node--driver--7wt2m-eth0" Sep 5 00:32:23.692014 containerd[1446]: 2025-09-05 00:32:23.511 [INFO][4442] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="41af5b36212e9d4860fdfdc0e6fda4dee38da12ad59831d8ecc1b1ff179e129a" HandleID="k8s-pod-network.41af5b36212e9d4860fdfdc0e6fda4dee38da12ad59831d8ecc1b1ff179e129a" Workload="localhost-k8s-csi--node--driver--7wt2m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002552d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-7wt2m", "timestamp":"2025-09-05 00:32:23.511163282 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:32:23.692014 containerd[1446]: 2025-09-05 00:32:23.511 [INFO][4442] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:32:23.692014 containerd[1446]: 2025-09-05 00:32:23.531 [INFO][4442] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 00:32:23.692014 containerd[1446]: 2025-09-05 00:32:23.531 [INFO][4442] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:32:23.692014 containerd[1446]: 2025-09-05 00:32:23.607 [INFO][4442] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.41af5b36212e9d4860fdfdc0e6fda4dee38da12ad59831d8ecc1b1ff179e129a" host="localhost" Sep 5 00:32:23.692014 containerd[1446]: 2025-09-05 00:32:23.613 [INFO][4442] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:32:23.692014 containerd[1446]: 2025-09-05 00:32:23.617 [INFO][4442] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:32:23.692014 containerd[1446]: 2025-09-05 00:32:23.619 [INFO][4442] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:32:23.692014 containerd[1446]: 2025-09-05 00:32:23.622 [INFO][4442] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:32:23.692014 containerd[1446]: 2025-09-05 00:32:23.622 [INFO][4442] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.41af5b36212e9d4860fdfdc0e6fda4dee38da12ad59831d8ecc1b1ff179e129a" host="localhost" Sep 5 00:32:23.692014 containerd[1446]: 2025-09-05 00:32:23.625 [INFO][4442] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.41af5b36212e9d4860fdfdc0e6fda4dee38da12ad59831d8ecc1b1ff179e129a Sep 5 00:32:23.692014 containerd[1446]: 2025-09-05 00:32:23.630 [INFO][4442] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.41af5b36212e9d4860fdfdc0e6fda4dee38da12ad59831d8ecc1b1ff179e129a" host="localhost" Sep 5 00:32:23.692014 containerd[1446]: 2025-09-05 00:32:23.641 [INFO][4442] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.41af5b36212e9d4860fdfdc0e6fda4dee38da12ad59831d8ecc1b1ff179e129a" host="localhost" Sep 5 00:32:23.692014 containerd[1446]: 2025-09-05 00:32:23.641 [INFO][4442] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.41af5b36212e9d4860fdfdc0e6fda4dee38da12ad59831d8ecc1b1ff179e129a" host="localhost" Sep 5 00:32:23.692014 containerd[1446]: 2025-09-05 00:32:23.641 [INFO][4442] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
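The interleaving here shows the host-wide IPAM lock doing its job across three concurrent CNI ADDs: goldmane's allocator [4434] holds the lock from 00:32:23.495 to 00:32:23.531; csi-node-driver's [4442] asks at 23.511 but only acquires at 23.531, the moment [4434] releases; and coredns's [4452] queues at 23.518 and acquires at 23.641 below, once [4442] finishes. Serializing on one lock is what keeps 192.168.88.131 and .132 from ever being handed out twice. A minimal sketch of that discipline (a counter stands in for Calico's per-block allocation bitmap):

    import threading
    import ipaddress

    ipam_lock = threading.Lock()
    hosts = list(ipaddress.ip_network("192.168.88.128/26").hosts())
    next_free = 2  # .129 and .130 were claimed earlier in this log

    def auto_assign() -> str:
        global next_free
        with ipam_lock:            # "About to acquire" ... "Acquired"
            ip = hosts[next_free]  # .131 for [4434], then .132 for [4442]
            next_free += 1
            return str(ip)         # lock drops: "Released host-wide IPAM lock"
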
Sep 5 00:32:23.692014 containerd[1446]: 2025-09-05 00:32:23.641 [INFO][4442] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="41af5b36212e9d4860fdfdc0e6fda4dee38da12ad59831d8ecc1b1ff179e129a" HandleID="k8s-pod-network.41af5b36212e9d4860fdfdc0e6fda4dee38da12ad59831d8ecc1b1ff179e129a" Workload="localhost-k8s-csi--node--driver--7wt2m-eth0" Sep 5 00:32:23.692555 containerd[1446]: 2025-09-05 00:32:23.652 [INFO][4405] cni-plugin/k8s.go 418: Populated endpoint ContainerID="41af5b36212e9d4860fdfdc0e6fda4dee38da12ad59831d8ecc1b1ff179e129a" Namespace="calico-system" Pod="csi-node-driver-7wt2m" WorkloadEndpoint="localhost-k8s-csi--node--driver--7wt2m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7wt2m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7cca8ed9-cabd-4207-ad70-24ca48f24180", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 32, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-7wt2m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic6e18199175", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:32:23.692555 containerd[1446]: 2025-09-05 00:32:23.652 [INFO][4405] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="41af5b36212e9d4860fdfdc0e6fda4dee38da12ad59831d8ecc1b1ff179e129a" Namespace="calico-system" Pod="csi-node-driver-7wt2m" WorkloadEndpoint="localhost-k8s-csi--node--driver--7wt2m-eth0" Sep 5 00:32:23.692555 containerd[1446]: 2025-09-05 00:32:23.652 [INFO][4405] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic6e18199175 ContainerID="41af5b36212e9d4860fdfdc0e6fda4dee38da12ad59831d8ecc1b1ff179e129a" Namespace="calico-system" Pod="csi-node-driver-7wt2m" WorkloadEndpoint="localhost-k8s-csi--node--driver--7wt2m-eth0" Sep 5 00:32:23.692555 containerd[1446]: 2025-09-05 00:32:23.662 [INFO][4405] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="41af5b36212e9d4860fdfdc0e6fda4dee38da12ad59831d8ecc1b1ff179e129a" Namespace="calico-system" Pod="csi-node-driver-7wt2m" WorkloadEndpoint="localhost-k8s-csi--node--driver--7wt2m-eth0" Sep 5 00:32:23.692555 containerd[1446]: 2025-09-05 00:32:23.667 [INFO][4405] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="41af5b36212e9d4860fdfdc0e6fda4dee38da12ad59831d8ecc1b1ff179e129a" Namespace="calico-system" Pod="csi-node-driver-7wt2m" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--7wt2m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7wt2m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7cca8ed9-cabd-4207-ad70-24ca48f24180", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 32, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"41af5b36212e9d4860fdfdc0e6fda4dee38da12ad59831d8ecc1b1ff179e129a", Pod:"csi-node-driver-7wt2m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic6e18199175", MAC:"8e:2c:30:2e:85:5d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:32:23.692555 containerd[1446]: 2025-09-05 00:32:23.686 [INFO][4405] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="41af5b36212e9d4860fdfdc0e6fda4dee38da12ad59831d8ecc1b1ff179e129a" Namespace="calico-system" Pod="csi-node-driver-7wt2m" WorkloadEndpoint="localhost-k8s-csi--node--driver--7wt2m-eth0" Sep 5 00:32:23.712450 containerd[1446]: time="2025-09-05T00:32:23.712338848Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:32:23.712450 containerd[1446]: time="2025-09-05T00:32:23.712404846Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:32:23.712450 containerd[1446]: time="2025-09-05T00:32:23.712421966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:32:23.712681 containerd[1446]: time="2025-09-05T00:32:23.712631879Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:32:23.744199 systemd[1]: Started cri-containerd-41af5b36212e9d4860fdfdc0e6fda4dee38da12ad59831d8ecc1b1ff179e129a.scope - libcontainer container 41af5b36212e9d4860fdfdc0e6fda4dee38da12ad59831d8ecc1b1ff179e129a. 
Sep 5 00:32:23.765206 systemd-networkd[1384]: cali899fb435dd9: Link UP Sep 5 00:32:23.768028 systemd-networkd[1384]: cali899fb435dd9: Gained carrier Sep 5 00:32:23.776011 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:32:23.788057 containerd[1446]: 2025-09-05 00:32:23.488 [INFO][4417] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--c8wqh-eth0 coredns-7c65d6cfc9- kube-system 192e7eea-9c31-4354-894f-59feed59071c 939 0 2025-09-05 00:31:48 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-c8wqh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali899fb435dd9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="f5348d4e2605d36e5a2c69547d4619fb8ee00cdbd1b6f9a9906cb0e1265f1e4c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-c8wqh" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--c8wqh-" Sep 5 00:32:23.788057 containerd[1446]: 2025-09-05 00:32:23.488 [INFO][4417] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f5348d4e2605d36e5a2c69547d4619fb8ee00cdbd1b6f9a9906cb0e1265f1e4c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-c8wqh" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--c8wqh-eth0" Sep 5 00:32:23.788057 containerd[1446]: 2025-09-05 00:32:23.518 [INFO][4452] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f5348d4e2605d36e5a2c69547d4619fb8ee00cdbd1b6f9a9906cb0e1265f1e4c" HandleID="k8s-pod-network.f5348d4e2605d36e5a2c69547d4619fb8ee00cdbd1b6f9a9906cb0e1265f1e4c" Workload="localhost-k8s-coredns--7c65d6cfc9--c8wqh-eth0" Sep 5 00:32:23.788057 containerd[1446]: 2025-09-05 00:32:23.518 [INFO][4452] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f5348d4e2605d36e5a2c69547d4619fb8ee00cdbd1b6f9a9906cb0e1265f1e4c" HandleID="k8s-pod-network.f5348d4e2605d36e5a2c69547d4619fb8ee00cdbd1b6f9a9906cb0e1265f1e4c" Workload="localhost-k8s-coredns--7c65d6cfc9--c8wqh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c790), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-c8wqh", "timestamp":"2025-09-05 00:32:23.517995618 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:32:23.788057 containerd[1446]: 2025-09-05 00:32:23.518 [INFO][4452] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:32:23.788057 containerd[1446]: 2025-09-05 00:32:23.641 [INFO][4452] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
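[Editor's note] systemd-networkd reports the new veth in two steps at the top of this burst: "cali899fb435dd9: Link UP" (administrative state) and then "Gained carrier" (the link actually has carrier once the peer end is up inside the pod netns). Both facts can be read back from standard Linux sysfs paths; a small sketch, with the interface name taken from the log:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// linkState reads the kernel's view of an interface: operstate ("up",
// "down", ...) and carrier ("1" once carrier is present). Reading carrier
// fails with EINVAL while the interface is administratively down.
func linkState(ifname string) (oper, carrier string) {
	base := "/sys/class/net/" + ifname
	if b, err := os.ReadFile(base + "/operstate"); err == nil {
		oper = strings.TrimSpace(string(b))
	}
	if b, err := os.ReadFile(base + "/carrier"); err == nil {
		carrier = strings.TrimSpace(string(b))
	}
	return oper, carrier
}

func main() {
	oper, carrier := linkState("cali899fb435dd9")
	fmt.Printf("operstate=%s carrier=%s\n", oper, carrier)
}
```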
Sep 5 00:32:23.788057 containerd[1446]: 2025-09-05 00:32:23.641 [INFO][4452] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:32:23.788057 containerd[1446]: 2025-09-05 00:32:23.708 [INFO][4452] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f5348d4e2605d36e5a2c69547d4619fb8ee00cdbd1b6f9a9906cb0e1265f1e4c" host="localhost" Sep 5 00:32:23.788057 containerd[1446]: 2025-09-05 00:32:23.717 [INFO][4452] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:32:23.788057 containerd[1446]: 2025-09-05 00:32:23.723 [INFO][4452] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:32:23.788057 containerd[1446]: 2025-09-05 00:32:23.728 [INFO][4452] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:32:23.788057 containerd[1446]: 2025-09-05 00:32:23.737 [INFO][4452] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:32:23.788057 containerd[1446]: 2025-09-05 00:32:23.737 [INFO][4452] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f5348d4e2605d36e5a2c69547d4619fb8ee00cdbd1b6f9a9906cb0e1265f1e4c" host="localhost" Sep 5 00:32:23.788057 containerd[1446]: 2025-09-05 00:32:23.743 [INFO][4452] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f5348d4e2605d36e5a2c69547d4619fb8ee00cdbd1b6f9a9906cb0e1265f1e4c Sep 5 00:32:23.788057 containerd[1446]: 2025-09-05 00:32:23.751 [INFO][4452] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f5348d4e2605d36e5a2c69547d4619fb8ee00cdbd1b6f9a9906cb0e1265f1e4c" host="localhost" Sep 5 00:32:23.788057 containerd[1446]: 2025-09-05 00:32:23.759 [INFO][4452] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.f5348d4e2605d36e5a2c69547d4619fb8ee00cdbd1b6f9a9906cb0e1265f1e4c" host="localhost" Sep 5 00:32:23.788057 containerd[1446]: 2025-09-05 00:32:23.759 [INFO][4452] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.f5348d4e2605d36e5a2c69547d4619fb8ee00cdbd1b6f9a9906cb0e1265f1e4c" host="localhost" Sep 5 00:32:23.788057 containerd[1446]: 2025-09-05 00:32:23.759 [INFO][4452] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
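[Editor's note] The host-wide IPAM lock visibly serializes the two concurrent CNI ADDs here: handler [4452] (coredns) logged "About to acquire" at 00:32:23.518 but only acquired at 00:32:23.641 — the same instant handler [4442] (csi-node-driver) released — so the second allocation waited roughly 123 ms. The timestamps embedded in the AutoAssignArgs are in Go's default time format and can be diffed directly:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching timestamps like "2025-09-05 00:32:23.517995618 +0000 UTC"
	// as they appear inside the AutoAssignArgs dumps above.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	requested, _ := time.Parse(layout, "2025-09-05 00:32:23.517995618 +0000 UTC")
	// Acquisition time as logged (the CNI log prefix has millisecond resolution).
	acquired, _ := time.Parse(layout, "2025-09-05 00:32:23.641000000 +0000 UTC")
	fmt.Println(acquired.Sub(requested)) // ~123ms spent queued behind the first ADD
}
```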
Sep 5 00:32:23.788057 containerd[1446]: 2025-09-05 00:32:23.759 [INFO][4452] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="f5348d4e2605d36e5a2c69547d4619fb8ee00cdbd1b6f9a9906cb0e1265f1e4c" HandleID="k8s-pod-network.f5348d4e2605d36e5a2c69547d4619fb8ee00cdbd1b6f9a9906cb0e1265f1e4c" Workload="localhost-k8s-coredns--7c65d6cfc9--c8wqh-eth0" Sep 5 00:32:23.789195 containerd[1446]: 2025-09-05 00:32:23.762 [INFO][4417] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f5348d4e2605d36e5a2c69547d4619fb8ee00cdbd1b6f9a9906cb0e1265f1e4c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-c8wqh" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--c8wqh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--c8wqh-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"192e7eea-9c31-4354-894f-59feed59071c", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 31, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-c8wqh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali899fb435dd9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:32:23.789195 containerd[1446]: 2025-09-05 00:32:23.762 [INFO][4417] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="f5348d4e2605d36e5a2c69547d4619fb8ee00cdbd1b6f9a9906cb0e1265f1e4c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-c8wqh" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--c8wqh-eth0" Sep 5 00:32:23.789195 containerd[1446]: 2025-09-05 00:32:23.762 [INFO][4417] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali899fb435dd9 ContainerID="f5348d4e2605d36e5a2c69547d4619fb8ee00cdbd1b6f9a9906cb0e1265f1e4c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-c8wqh" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--c8wqh-eth0" Sep 5 00:32:23.789195 containerd[1446]: 2025-09-05 00:32:23.766 [INFO][4417] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f5348d4e2605d36e5a2c69547d4619fb8ee00cdbd1b6f9a9906cb0e1265f1e4c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-c8wqh" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--c8wqh-eth0" Sep 5 00:32:23.789195 
containerd[1446]: 2025-09-05 00:32:23.767 [INFO][4417] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f5348d4e2605d36e5a2c69547d4619fb8ee00cdbd1b6f9a9906cb0e1265f1e4c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-c8wqh" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--c8wqh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--c8wqh-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"192e7eea-9c31-4354-894f-59feed59071c", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 31, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f5348d4e2605d36e5a2c69547d4619fb8ee00cdbd1b6f9a9906cb0e1265f1e4c", Pod:"coredns-7c65d6cfc9-c8wqh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali899fb435dd9", MAC:"62:f4:0a:3a:72:64", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:32:23.789195 containerd[1446]: 2025-09-05 00:32:23.781 [INFO][4417] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f5348d4e2605d36e5a2c69547d4619fb8ee00cdbd1b6f9a9906cb0e1265f1e4c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-c8wqh" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--c8wqh-eth0" Sep 5 00:32:23.808084 containerd[1446]: time="2025-09-05T00:32:23.807925755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7wt2m,Uid:7cca8ed9-cabd-4207-ad70-24ca48f24180,Namespace:calico-system,Attempt:1,} returns sandbox id \"41af5b36212e9d4860fdfdc0e6fda4dee38da12ad59831d8ecc1b1ff179e129a\"" Sep 5 00:32:23.823104 containerd[1446]: time="2025-09-05T00:32:23.822606514Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:32:23.823789 containerd[1446]: time="2025-09-05T00:32:23.822676672Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:32:23.823789 containerd[1446]: time="2025-09-05T00:32:23.823638960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:32:23.823789 containerd[1446]: time="2025-09-05T00:32:23.823749557Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:32:23.851081 systemd[1]: Started cri-containerd-f5348d4e2605d36e5a2c69547d4619fb8ee00cdbd1b6f9a9906cb0e1265f1e4c.scope - libcontainer container f5348d4e2605d36e5a2c69547d4619fb8ee00cdbd1b6f9a9906cb0e1265f1e4c. Sep 5 00:32:23.864168 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:32:23.895031 containerd[1446]: time="2025-09-05T00:32:23.894974542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-c8wqh,Uid:192e7eea-9c31-4354-894f-59feed59071c,Namespace:kube-system,Attempt:1,} returns sandbox id \"f5348d4e2605d36e5a2c69547d4619fb8ee00cdbd1b6f9a9906cb0e1265f1e4c\"" Sep 5 00:32:23.895992 kubelet[2465]: E0905 00:32:23.895839 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:32:23.899905 containerd[1446]: time="2025-09-05T00:32:23.898903653Z" level=info msg="CreateContainer within sandbox \"f5348d4e2605d36e5a2c69547d4619fb8ee00cdbd1b6f9a9906cb0e1265f1e4c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 5 00:32:24.167907 systemd[1]: run-netns-cni\x2d3cd08f00\x2d5833\x2da2ef\x2d214c\x2d12a584e754aa.mount: Deactivated successfully. Sep 5 00:32:24.229820 containerd[1446]: time="2025-09-05T00:32:24.229570977Z" level=info msg="StopPodSandbox for \"845fc777a4a36d84129ce3646095dce8a8df461e48644a8ccb9f1110f4764d64\"" Sep 5 00:32:24.230267 containerd[1446]: time="2025-09-05T00:32:24.229786610Z" level=info msg="StopPodSandbox for \"0d857350e94f7419b72a77b0e2aa8cd7ce34079bf4c83b93076824bfeced2184\"" Sep 5 00:32:24.282786 containerd[1446]: time="2025-09-05T00:32:24.282735282Z" level=info msg="CreateContainer within sandbox \"f5348d4e2605d36e5a2c69547d4619fb8ee00cdbd1b6f9a9906cb0e1265f1e4c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"29d8a6b6a5439f4afd534be77e1b247e496b800ef890fe5910e7be912f6eb440\"" Sep 5 00:32:24.285580 containerd[1446]: time="2025-09-05T00:32:24.284359390Z" level=info msg="StartContainer for \"29d8a6b6a5439f4afd534be77e1b247e496b800ef890fe5910e7be912f6eb440\"" Sep 5 00:32:24.317043 systemd[1]: Started cri-containerd-29d8a6b6a5439f4afd534be77e1b247e496b800ef890fe5910e7be912f6eb440.scope - libcontainer container 29d8a6b6a5439f4afd534be77e1b247e496b800ef890fe5910e7be912f6eb440. Sep 5 00:32:24.426238 containerd[1446]: 2025-09-05 00:32:24.359 [INFO][4639] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="845fc777a4a36d84129ce3646095dce8a8df461e48644a8ccb9f1110f4764d64" Sep 5 00:32:24.426238 containerd[1446]: 2025-09-05 00:32:24.359 [INFO][4639] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="845fc777a4a36d84129ce3646095dce8a8df461e48644a8ccb9f1110f4764d64" iface="eth0" netns="/var/run/netns/cni-bd0de701-0766-ab3a-3580-c88c65e67306" Sep 5 00:32:24.426238 containerd[1446]: 2025-09-05 00:32:24.360 [INFO][4639] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="845fc777a4a36d84129ce3646095dce8a8df461e48644a8ccb9f1110f4764d64" iface="eth0" netns="/var/run/netns/cni-bd0de701-0766-ab3a-3580-c88c65e67306" Sep 5 00:32:24.426238 containerd[1446]: 2025-09-05 00:32:24.360 [INFO][4639] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="845fc777a4a36d84129ce3646095dce8a8df461e48644a8ccb9f1110f4764d64" iface="eth0" netns="/var/run/netns/cni-bd0de701-0766-ab3a-3580-c88c65e67306" Sep 5 00:32:24.426238 containerd[1446]: 2025-09-05 00:32:24.360 [INFO][4639] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="845fc777a4a36d84129ce3646095dce8a8df461e48644a8ccb9f1110f4764d64" Sep 5 00:32:24.426238 containerd[1446]: 2025-09-05 00:32:24.360 [INFO][4639] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="845fc777a4a36d84129ce3646095dce8a8df461e48644a8ccb9f1110f4764d64" Sep 5 00:32:24.426238 containerd[1446]: 2025-09-05 00:32:24.377 [INFO][4686] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="845fc777a4a36d84129ce3646095dce8a8df461e48644a8ccb9f1110f4764d64" HandleID="k8s-pod-network.845fc777a4a36d84129ce3646095dce8a8df461e48644a8ccb9f1110f4764d64" Workload="localhost-k8s-calico--apiserver--7b94b9658b--jdnpd-eth0" Sep 5 00:32:24.426238 containerd[1446]: 2025-09-05 00:32:24.377 [INFO][4686] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:32:24.426238 containerd[1446]: 2025-09-05 00:32:24.378 [INFO][4686] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:32:24.426238 containerd[1446]: 2025-09-05 00:32:24.420 [WARNING][4686] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="845fc777a4a36d84129ce3646095dce8a8df461e48644a8ccb9f1110f4764d64" HandleID="k8s-pod-network.845fc777a4a36d84129ce3646095dce8a8df461e48644a8ccb9f1110f4764d64" Workload="localhost-k8s-calico--apiserver--7b94b9658b--jdnpd-eth0" Sep 5 00:32:24.426238 containerd[1446]: 2025-09-05 00:32:24.420 [INFO][4686] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="845fc777a4a36d84129ce3646095dce8a8df461e48644a8ccb9f1110f4764d64" HandleID="k8s-pod-network.845fc777a4a36d84129ce3646095dce8a8df461e48644a8ccb9f1110f4764d64" Workload="localhost-k8s-calico--apiserver--7b94b9658b--jdnpd-eth0" Sep 5 00:32:24.426238 containerd[1446]: 2025-09-05 00:32:24.422 [INFO][4686] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:32:24.426238 containerd[1446]: 2025-09-05 00:32:24.424 [INFO][4639] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="845fc777a4a36d84129ce3646095dce8a8df461e48644a8ccb9f1110f4764d64" Sep 5 00:32:24.426808 containerd[1446]: time="2025-09-05T00:32:24.426770928Z" level=info msg="TearDown network for sandbox \"845fc777a4a36d84129ce3646095dce8a8df461e48644a8ccb9f1110f4764d64\" successfully" Sep 5 00:32:24.426808 containerd[1446]: time="2025-09-05T00:32:24.426800767Z" level=info msg="StopPodSandbox for \"845fc777a4a36d84129ce3646095dce8a8df461e48644a8ccb9f1110f4764d64\" returns successfully" Sep 5 00:32:24.428918 systemd[1]: run-netns-cni\x2dbd0de701\x2d0766\x2dab3a\x2d3580\x2dc88c65e67306.mount: Deactivated successfully. 
Sep 5 00:32:24.429179 containerd[1446]: time="2025-09-05T00:32:24.429127173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b94b9658b-jdnpd,Uid:b4104721-2685-4b1f-94b6-007263e183aa,Namespace:calico-apiserver,Attempt:1,}" Sep 5 00:32:24.453793 containerd[1446]: time="2025-09-05T00:32:24.453678870Z" level=info msg="StartContainer for \"29d8a6b6a5439f4afd534be77e1b247e496b800ef890fe5910e7be912f6eb440\" returns successfully" Sep 5 00:32:24.460794 kubelet[2465]: E0905 00:32:24.460221 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:32:24.465998 containerd[1446]: 2025-09-05 00:32:24.420 [INFO][4638] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0d857350e94f7419b72a77b0e2aa8cd7ce34079bf4c83b93076824bfeced2184" Sep 5 00:32:24.465998 containerd[1446]: 2025-09-05 00:32:24.420 [INFO][4638] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0d857350e94f7419b72a77b0e2aa8cd7ce34079bf4c83b93076824bfeced2184" iface="eth0" netns="/var/run/netns/cni-a64d271e-0655-adb7-5d5b-9e334bcac251" Sep 5 00:32:24.465998 containerd[1446]: 2025-09-05 00:32:24.420 [INFO][4638] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0d857350e94f7419b72a77b0e2aa8cd7ce34079bf4c83b93076824bfeced2184" iface="eth0" netns="/var/run/netns/cni-a64d271e-0655-adb7-5d5b-9e334bcac251" Sep 5 00:32:24.465998 containerd[1446]: 2025-09-05 00:32:24.420 [INFO][4638] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0d857350e94f7419b72a77b0e2aa8cd7ce34079bf4c83b93076824bfeced2184" iface="eth0" netns="/var/run/netns/cni-a64d271e-0655-adb7-5d5b-9e334bcac251" Sep 5 00:32:24.465998 containerd[1446]: 2025-09-05 00:32:24.420 [INFO][4638] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0d857350e94f7419b72a77b0e2aa8cd7ce34079bf4c83b93076824bfeced2184" Sep 5 00:32:24.465998 containerd[1446]: 2025-09-05 00:32:24.421 [INFO][4638] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0d857350e94f7419b72a77b0e2aa8cd7ce34079bf4c83b93076824bfeced2184" Sep 5 00:32:24.465998 containerd[1446]: 2025-09-05 00:32:24.443 [INFO][4699] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0d857350e94f7419b72a77b0e2aa8cd7ce34079bf4c83b93076824bfeced2184" HandleID="k8s-pod-network.0d857350e94f7419b72a77b0e2aa8cd7ce34079bf4c83b93076824bfeced2184" Workload="localhost-k8s-coredns--7c65d6cfc9--vx8zc-eth0" Sep 5 00:32:24.465998 containerd[1446]: 2025-09-05 00:32:24.444 [INFO][4699] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:32:24.465998 containerd[1446]: 2025-09-05 00:32:24.444 [INFO][4699] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:32:24.465998 containerd[1446]: 2025-09-05 00:32:24.453 [WARNING][4699] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0d857350e94f7419b72a77b0e2aa8cd7ce34079bf4c83b93076824bfeced2184" HandleID="k8s-pod-network.0d857350e94f7419b72a77b0e2aa8cd7ce34079bf4c83b93076824bfeced2184" Workload="localhost-k8s-coredns--7c65d6cfc9--vx8zc-eth0" Sep 5 00:32:24.465998 containerd[1446]: 2025-09-05 00:32:24.453 [INFO][4699] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0d857350e94f7419b72a77b0e2aa8cd7ce34079bf4c83b93076824bfeced2184" HandleID="k8s-pod-network.0d857350e94f7419b72a77b0e2aa8cd7ce34079bf4c83b93076824bfeced2184" Workload="localhost-k8s-coredns--7c65d6cfc9--vx8zc-eth0" Sep 5 00:32:24.465998 containerd[1446]: 2025-09-05 00:32:24.455 [INFO][4699] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:32:24.465998 containerd[1446]: 2025-09-05 00:32:24.459 [INFO][4638] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0d857350e94f7419b72a77b0e2aa8cd7ce34079bf4c83b93076824bfeced2184" Sep 5 00:32:24.469112 containerd[1446]: time="2025-09-05T00:32:24.466194831Z" level=info msg="TearDown network for sandbox \"0d857350e94f7419b72a77b0e2aa8cd7ce34079bf4c83b93076824bfeced2184\" successfully" Sep 5 00:32:24.469112 containerd[1446]: time="2025-09-05T00:32:24.466222830Z" level=info msg="StopPodSandbox for \"0d857350e94f7419b72a77b0e2aa8cd7ce34079bf4c83b93076824bfeced2184\" returns successfully" Sep 5 00:32:24.468762 systemd[1]: run-netns-cni\x2da64d271e\x2d0655\x2dadb7\x2d5d5b\x2d9e334bcac251.mount: Deactivated successfully. Sep 5 00:32:24.469225 kubelet[2465]: E0905 00:32:24.466453 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:32:24.469649 containerd[1446]: time="2025-09-05T00:32:24.469423328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-vx8zc,Uid:35c515fe-5ffe-4ef1-99c2-70d6c9f6b20d,Namespace:kube-system,Attempt:1,}" Sep 5 00:32:24.479418 kubelet[2465]: I0905 00:32:24.478972 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-c8wqh" podStartSLOduration=36.478936385 podStartE2EDuration="36.478936385s" podCreationTimestamp="2025-09-05 00:31:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:32:24.478496319 +0000 UTC m=+43.349546573" watchObservedRunningTime="2025-09-05 00:32:24.478936385 +0000 UTC m=+43.349986639" Sep 5 00:32:24.651464 systemd-networkd[1384]: calicef73ae12ce: Link UP Sep 5 00:32:24.651910 systemd-networkd[1384]: calicef73ae12ce: Gained carrier Sep 5 00:32:24.669648 containerd[1446]: 2025-09-05 00:32:24.557 [INFO][4716] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--vx8zc-eth0 coredns-7c65d6cfc9- kube-system 35c515fe-5ffe-4ef1-99c2-70d6c9f6b20d 961 0 2025-09-05 00:31:48 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-vx8zc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calicef73ae12ce [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="ddcb2b692021f1a3c086c0f15a8a438ad2b3b76452630c2f9b5ce77337ce9ad4" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vx8zc" 
WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--vx8zc-" Sep 5 00:32:24.669648 containerd[1446]: 2025-09-05 00:32:24.557 [INFO][4716] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ddcb2b692021f1a3c086c0f15a8a438ad2b3b76452630c2f9b5ce77337ce9ad4" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vx8zc" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--vx8zc-eth0" Sep 5 00:32:24.669648 containerd[1446]: 2025-09-05 00:32:24.591 [INFO][4744] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ddcb2b692021f1a3c086c0f15a8a438ad2b3b76452630c2f9b5ce77337ce9ad4" HandleID="k8s-pod-network.ddcb2b692021f1a3c086c0f15a8a438ad2b3b76452630c2f9b5ce77337ce9ad4" Workload="localhost-k8s-coredns--7c65d6cfc9--vx8zc-eth0" Sep 5 00:32:24.669648 containerd[1446]: 2025-09-05 00:32:24.591 [INFO][4744] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ddcb2b692021f1a3c086c0f15a8a438ad2b3b76452630c2f9b5ce77337ce9ad4" HandleID="k8s-pod-network.ddcb2b692021f1a3c086c0f15a8a438ad2b3b76452630c2f9b5ce77337ce9ad4" Workload="localhost-k8s-coredns--7c65d6cfc9--vx8zc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001164b0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-vx8zc", "timestamp":"2025-09-05 00:32:24.591182965 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:32:24.669648 containerd[1446]: 2025-09-05 00:32:24.591 [INFO][4744] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:32:24.669648 containerd[1446]: 2025-09-05 00:32:24.591 [INFO][4744] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 00:32:24.669648 containerd[1446]: 2025-09-05 00:32:24.591 [INFO][4744] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:32:24.669648 containerd[1446]: 2025-09-05 00:32:24.601 [INFO][4744] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ddcb2b692021f1a3c086c0f15a8a438ad2b3b76452630c2f9b5ce77337ce9ad4" host="localhost" Sep 5 00:32:24.669648 containerd[1446]: 2025-09-05 00:32:24.612 [INFO][4744] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:32:24.669648 containerd[1446]: 2025-09-05 00:32:24.618 [INFO][4744] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:32:24.669648 containerd[1446]: 2025-09-05 00:32:24.621 [INFO][4744] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:32:24.669648 containerd[1446]: 2025-09-05 00:32:24.624 [INFO][4744] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:32:24.669648 containerd[1446]: 2025-09-05 00:32:24.624 [INFO][4744] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ddcb2b692021f1a3c086c0f15a8a438ad2b3b76452630c2f9b5ce77337ce9ad4" host="localhost" Sep 5 00:32:24.669648 containerd[1446]: 2025-09-05 00:32:24.626 [INFO][4744] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ddcb2b692021f1a3c086c0f15a8a438ad2b3b76452630c2f9b5ce77337ce9ad4 Sep 5 00:32:24.669648 containerd[1446]: 2025-09-05 00:32:24.630 [INFO][4744] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ddcb2b692021f1a3c086c0f15a8a438ad2b3b76452630c2f9b5ce77337ce9ad4" host="localhost" Sep 5 00:32:24.669648 containerd[1446]: 2025-09-05 00:32:24.637 [INFO][4744] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.ddcb2b692021f1a3c086c0f15a8a438ad2b3b76452630c2f9b5ce77337ce9ad4" host="localhost" Sep 5 00:32:24.669648 containerd[1446]: 2025-09-05 00:32:24.637 [INFO][4744] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.ddcb2b692021f1a3c086c0f15a8a438ad2b3b76452630c2f9b5ce77337ce9ad4" host="localhost" Sep 5 00:32:24.669648 containerd[1446]: 2025-09-05 00:32:24.637 [INFO][4744] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
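[Editor's note] Every allocation in this section comes sequentially out of the one affine block 192.168.88.128/26: .132, .133, .134 here and .135 just below. A /26 spans 2^(32−26) = 64 addresses (192.168.88.128 through .191), so this node can place on the order of 64 workloads (less any addresses the IPAM implementation reserves) before a second block would have to be claimed:

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	p := netip.MustParsePrefix("192.168.88.128/26")
	size := 1 << (32 - p.Bits()) // 2^(32-26) = 64 addresses in the block
	last := p.Addr()
	for i := 0; i < size-1; i++ {
		last = last.Next()
	}
	// block 192.168.88.128/26 holds 64 addrs: 192.168.88.128-192.168.88.191
	fmt.Printf("block %s holds %d addrs: %s-%s\n", p, size, p.Addr(), last)
}
```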
Sep 5 00:32:24.669648 containerd[1446]: 2025-09-05 00:32:24.637 [INFO][4744] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="ddcb2b692021f1a3c086c0f15a8a438ad2b3b76452630c2f9b5ce77337ce9ad4" HandleID="k8s-pod-network.ddcb2b692021f1a3c086c0f15a8a438ad2b3b76452630c2f9b5ce77337ce9ad4" Workload="localhost-k8s-coredns--7c65d6cfc9--vx8zc-eth0" Sep 5 00:32:24.670649 containerd[1446]: 2025-09-05 00:32:24.644 [INFO][4716] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ddcb2b692021f1a3c086c0f15a8a438ad2b3b76452630c2f9b5ce77337ce9ad4" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vx8zc" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--vx8zc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--vx8zc-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"35c515fe-5ffe-4ef1-99c2-70d6c9f6b20d", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 31, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-vx8zc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicef73ae12ce", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:32:24.670649 containerd[1446]: 2025-09-05 00:32:24.644 [INFO][4716] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="ddcb2b692021f1a3c086c0f15a8a438ad2b3b76452630c2f9b5ce77337ce9ad4" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vx8zc" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--vx8zc-eth0" Sep 5 00:32:24.670649 containerd[1446]: 2025-09-05 00:32:24.644 [INFO][4716] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicef73ae12ce ContainerID="ddcb2b692021f1a3c086c0f15a8a438ad2b3b76452630c2f9b5ce77337ce9ad4" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vx8zc" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--vx8zc-eth0" Sep 5 00:32:24.670649 containerd[1446]: 2025-09-05 00:32:24.652 [INFO][4716] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ddcb2b692021f1a3c086c0f15a8a438ad2b3b76452630c2f9b5ce77337ce9ad4" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vx8zc" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--vx8zc-eth0" Sep 5 00:32:24.670649 
containerd[1446]: 2025-09-05 00:32:24.652 [INFO][4716] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ddcb2b692021f1a3c086c0f15a8a438ad2b3b76452630c2f9b5ce77337ce9ad4" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vx8zc" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--vx8zc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--vx8zc-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"35c515fe-5ffe-4ef1-99c2-70d6c9f6b20d", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 31, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ddcb2b692021f1a3c086c0f15a8a438ad2b3b76452630c2f9b5ce77337ce9ad4", Pod:"coredns-7c65d6cfc9-vx8zc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicef73ae12ce", MAC:"ae:0d:35:d3:95:da", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:32:24.670649 containerd[1446]: 2025-09-05 00:32:24.665 [INFO][4716] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ddcb2b692021f1a3c086c0f15a8a438ad2b3b76452630c2f9b5ce77337ce9ad4" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vx8zc" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--vx8zc-eth0" Sep 5 00:32:24.735122 containerd[1446]: time="2025-09-05T00:32:24.733554265Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:32:24.735122 containerd[1446]: time="2025-09-05T00:32:24.733610903Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:32:24.735122 containerd[1446]: time="2025-09-05T00:32:24.733621743Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:32:24.735122 containerd[1446]: time="2025-09-05T00:32:24.733701900Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:32:24.746122 systemd-networkd[1384]: cali29b32730be7: Link UP Sep 5 00:32:24.748791 systemd-networkd[1384]: cali29b32730be7: Gained carrier Sep 5 00:32:24.764985 systemd[1]: Started cri-containerd-ddcb2b692021f1a3c086c0f15a8a438ad2b3b76452630c2f9b5ce77337ce9ad4.scope - libcontainer container ddcb2b692021f1a3c086c0f15a8a438ad2b3b76452630c2f9b5ce77337ce9ad4. Sep 5 00:32:24.769517 containerd[1446]: 2025-09-05 00:32:24.561 [INFO][4728] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7b94b9658b--jdnpd-eth0 calico-apiserver-7b94b9658b- calico-apiserver b4104721-2685-4b1f-94b6-007263e183aa 960 0 2025-09-05 00:31:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7b94b9658b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7b94b9658b-jdnpd eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali29b32730be7 [] [] }} ContainerID="683350299dfce9222ad2e0621de04844a78eef45748eb23b6f481c60c42e3829" Namespace="calico-apiserver" Pod="calico-apiserver-7b94b9658b-jdnpd" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b94b9658b--jdnpd-" Sep 5 00:32:24.769517 containerd[1446]: 2025-09-05 00:32:24.561 [INFO][4728] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="683350299dfce9222ad2e0621de04844a78eef45748eb23b6f481c60c42e3829" Namespace="calico-apiserver" Pod="calico-apiserver-7b94b9658b-jdnpd" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b94b9658b--jdnpd-eth0" Sep 5 00:32:24.769517 containerd[1446]: 2025-09-05 00:32:24.598 [INFO][4750] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="683350299dfce9222ad2e0621de04844a78eef45748eb23b6f481c60c42e3829" HandleID="k8s-pod-network.683350299dfce9222ad2e0621de04844a78eef45748eb23b6f481c60c42e3829" Workload="localhost-k8s-calico--apiserver--7b94b9658b--jdnpd-eth0" Sep 5 00:32:24.769517 containerd[1446]: 2025-09-05 00:32:24.598 [INFO][4750] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="683350299dfce9222ad2e0621de04844a78eef45748eb23b6f481c60c42e3829" HandleID="k8s-pod-network.683350299dfce9222ad2e0621de04844a78eef45748eb23b6f481c60c42e3829" Workload="localhost-k8s-calico--apiserver--7b94b9658b--jdnpd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3b40), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7b94b9658b-jdnpd", "timestamp":"2025-09-05 00:32:24.598372976 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:32:24.769517 containerd[1446]: 2025-09-05 00:32:24.598 [INFO][4750] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:32:24.769517 containerd[1446]: 2025-09-05 00:32:24.637 [INFO][4750] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
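[Editor's note] Every Calico CNI entry in this log shares one prefix shape: millisecond timestamp, [LEVEL][handler id], source file, line number, message. When sifting a burst like this one it helps to split those fields out; a small parser written against exactly the format shown above (the field names are my own):

```go
package main

import (
	"fmt"
	"regexp"
)

// cniLine matches entries like:
//   2025-09-05 00:32:24.598 [INFO][4750] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
var cniLine = regexp.MustCompile(
	`^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{3}) \[(\w+)\]\[(\d+)\] (\S+) (\d+): (.*)$`)

func main() {
	m := cniLine.FindStringSubmatch(
		"2025-09-05 00:32:24.598 [INFO][4750] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.")
	if m != nil {
		fmt.Printf("ts=%s level=%s handler=%s src=%s:%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6])
	}
}
```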
Sep 5 00:32:24.769517 containerd[1446]: 2025-09-05 00:32:24.637 [INFO][4750] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:32:24.769517 containerd[1446]: 2025-09-05 00:32:24.703 [INFO][4750] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.683350299dfce9222ad2e0621de04844a78eef45748eb23b6f481c60c42e3829" host="localhost" Sep 5 00:32:24.769517 containerd[1446]: 2025-09-05 00:32:24.711 [INFO][4750] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:32:24.769517 containerd[1446]: 2025-09-05 00:32:24.718 [INFO][4750] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:32:24.769517 containerd[1446]: 2025-09-05 00:32:24.719 [INFO][4750] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:32:24.769517 containerd[1446]: 2025-09-05 00:32:24.722 [INFO][4750] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:32:24.769517 containerd[1446]: 2025-09-05 00:32:24.722 [INFO][4750] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.683350299dfce9222ad2e0621de04844a78eef45748eb23b6f481c60c42e3829" host="localhost" Sep 5 00:32:24.769517 containerd[1446]: 2025-09-05 00:32:24.723 [INFO][4750] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.683350299dfce9222ad2e0621de04844a78eef45748eb23b6f481c60c42e3829 Sep 5 00:32:24.769517 containerd[1446]: 2025-09-05 00:32:24.727 [INFO][4750] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.683350299dfce9222ad2e0621de04844a78eef45748eb23b6f481c60c42e3829" host="localhost" Sep 5 00:32:24.769517 containerd[1446]: 2025-09-05 00:32:24.736 [INFO][4750] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.683350299dfce9222ad2e0621de04844a78eef45748eb23b6f481c60c42e3829" host="localhost" Sep 5 00:32:24.769517 containerd[1446]: 2025-09-05 00:32:24.737 [INFO][4750] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.683350299dfce9222ad2e0621de04844a78eef45748eb23b6f481c60c42e3829" host="localhost" Sep 5 00:32:24.769517 containerd[1446]: 2025-09-05 00:32:24.737 [INFO][4750] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
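[Editor's note] The WorkloadEndpointPort dumps in the coredns endpoints above print ports in hex: Port:0x35 and Port:0x23c1. Decoded, 0x35 = 53 (the dns and dns-tcp named ports) and 0x23c1 = 2·4096 + 3·256 + 12·16 + 1 = 9153 (the CoreDNS Prometheus metrics port), so the struct dumps agree with the {dns UDP 53} {dns-tcp TCP 53} {metrics TCP 9153} summary printed earlier:

```go
package main

import "fmt"

func main() {
	for _, p := range []struct {
		name string
		port uint16
	}{{"dns", 0x35}, {"dns-tcp", 0x35}, {"metrics", 0x23c1}} {
		fmt.Printf("%-8s 0x%x = %d\n", p.name, p.port, p.port) // 0x35=53, 0x23c1=9153
	}
}
```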
Sep 5 00:32:24.769517 containerd[1446]: 2025-09-05 00:32:24.737 [INFO][4750] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="683350299dfce9222ad2e0621de04844a78eef45748eb23b6f481c60c42e3829" HandleID="k8s-pod-network.683350299dfce9222ad2e0621de04844a78eef45748eb23b6f481c60c42e3829" Workload="localhost-k8s-calico--apiserver--7b94b9658b--jdnpd-eth0" Sep 5 00:32:24.770744 containerd[1446]: 2025-09-05 00:32:24.741 [INFO][4728] cni-plugin/k8s.go 418: Populated endpoint ContainerID="683350299dfce9222ad2e0621de04844a78eef45748eb23b6f481c60c42e3829" Namespace="calico-apiserver" Pod="calico-apiserver-7b94b9658b-jdnpd" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b94b9658b--jdnpd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7b94b9658b--jdnpd-eth0", GenerateName:"calico-apiserver-7b94b9658b-", Namespace:"calico-apiserver", SelfLink:"", UID:"b4104721-2685-4b1f-94b6-007263e183aa", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 31, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b94b9658b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7b94b9658b-jdnpd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali29b32730be7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:32:24.770744 containerd[1446]: 2025-09-05 00:32:24.741 [INFO][4728] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="683350299dfce9222ad2e0621de04844a78eef45748eb23b6f481c60c42e3829" Namespace="calico-apiserver" Pod="calico-apiserver-7b94b9658b-jdnpd" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b94b9658b--jdnpd-eth0" Sep 5 00:32:24.770744 containerd[1446]: 2025-09-05 00:32:24.741 [INFO][4728] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali29b32730be7 ContainerID="683350299dfce9222ad2e0621de04844a78eef45748eb23b6f481c60c42e3829" Namespace="calico-apiserver" Pod="calico-apiserver-7b94b9658b-jdnpd" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b94b9658b--jdnpd-eth0" Sep 5 00:32:24.770744 containerd[1446]: 2025-09-05 00:32:24.750 [INFO][4728] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="683350299dfce9222ad2e0621de04844a78eef45748eb23b6f481c60c42e3829" Namespace="calico-apiserver" Pod="calico-apiserver-7b94b9658b-jdnpd" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b94b9658b--jdnpd-eth0" Sep 5 00:32:24.770744 containerd[1446]: 2025-09-05 00:32:24.751 [INFO][4728] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="683350299dfce9222ad2e0621de04844a78eef45748eb23b6f481c60c42e3829" Namespace="calico-apiserver" Pod="calico-apiserver-7b94b9658b-jdnpd" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b94b9658b--jdnpd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7b94b9658b--jdnpd-eth0", GenerateName:"calico-apiserver-7b94b9658b-", Namespace:"calico-apiserver", SelfLink:"", UID:"b4104721-2685-4b1f-94b6-007263e183aa", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 31, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b94b9658b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"683350299dfce9222ad2e0621de04844a78eef45748eb23b6f481c60c42e3829", Pod:"calico-apiserver-7b94b9658b-jdnpd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali29b32730be7", MAC:"6e:b1:fb:bd:4d:0b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:32:24.770744 containerd[1446]: 2025-09-05 00:32:24.762 [INFO][4728] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="683350299dfce9222ad2e0621de04844a78eef45748eb23b6f481c60c42e3829" Namespace="calico-apiserver" Pod="calico-apiserver-7b94b9658b-jdnpd" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b94b9658b--jdnpd-eth0" Sep 5 00:32:24.811521 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:32:24.816289 containerd[1446]: time="2025-09-05T00:32:24.816213909Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:32:24.816289 containerd[1446]: time="2025-09-05T00:32:24.816260827Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:32:24.816289 containerd[1446]: time="2025-09-05T00:32:24.816275787Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:32:24.816490 containerd[1446]: time="2025-09-05T00:32:24.816357944Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:32:24.834449 containerd[1446]: time="2025-09-05T00:32:24.834318491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-vx8zc,Uid:35c515fe-5ffe-4ef1-99c2-70d6c9f6b20d,Namespace:kube-system,Attempt:1,} returns sandbox id \"ddcb2b692021f1a3c086c0f15a8a438ad2b3b76452630c2f9b5ce77337ce9ad4\"" Sep 5 00:32:24.835814 kubelet[2465]: E0905 00:32:24.835211 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:32:24.838591 containerd[1446]: time="2025-09-05T00:32:24.837804740Z" level=info msg="CreateContainer within sandbox \"ddcb2b692021f1a3c086c0f15a8a438ad2b3b76452630c2f9b5ce77337ce9ad4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 5 00:32:24.840067 systemd[1]: Started cri-containerd-683350299dfce9222ad2e0621de04844a78eef45748eb23b6f481c60c42e3829.scope - libcontainer container 683350299dfce9222ad2e0621de04844a78eef45748eb23b6f481c60c42e3829. Sep 5 00:32:24.853192 containerd[1446]: time="2025-09-05T00:32:24.853149331Z" level=info msg="CreateContainer within sandbox \"ddcb2b692021f1a3c086c0f15a8a438ad2b3b76452630c2f9b5ce77337ce9ad4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a170b36f7db024c8e030934f137e24360eaf68b0276c7867c7a655c88e6d25a6\"" Sep 5 00:32:24.853844 containerd[1446]: time="2025-09-05T00:32:24.853814830Z" level=info msg="StartContainer for \"a170b36f7db024c8e030934f137e24360eaf68b0276c7867c7a655c88e6d25a6\"" Sep 5 00:32:24.871003 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:32:24.887059 systemd[1]: Started cri-containerd-a170b36f7db024c8e030934f137e24360eaf68b0276c7867c7a655c88e6d25a6.scope - libcontainer container a170b36f7db024c8e030934f137e24360eaf68b0276c7867c7a655c88e6d25a6. 
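[Editor's note] The recurring kubelet error "Nameserver limits exceeded ... the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" is the classic resolv.conf limit at work: the traditional resolver honours at most three nameservers, so when the host configuration presumably lists more, kubelet applies the first three and warns about the rest. A sketch of that clamping; the fourth entry below is hypothetical, since the log only shows the three survivors:

```go
package main

import "fmt"

// maxNameservers mirrors the traditional resolv.conf/glibc limit that the
// kubelet warning above is enforcing.
const maxNameservers = 3

// clampNameservers returns the servers that will be applied and the ones
// that will be omitted (and warned about).
func clampNameservers(ns []string) (kept, omitted []string) {
	if len(ns) <= maxNameservers {
		return ns, nil
	}
	return ns[:maxNameservers], ns[maxNameservers:]
}

func main() {
	// "8.8.4.4" is an invented stand-in for whatever extra server triggered the warning.
	kept, omitted := clampNameservers([]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"})
	fmt.Println("applied:", kept, "omitted:", omitted)
}
```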
Sep 5 00:32:24.891360 containerd[1446]: time="2025-09-05T00:32:24.891157039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b94b9658b-jdnpd,Uid:b4104721-2685-4b1f-94b6-007263e183aa,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"683350299dfce9222ad2e0621de04844a78eef45748eb23b6f481c60c42e3829\""
Sep 5 00:32:24.921140 containerd[1446]: time="2025-09-05T00:32:24.920668938Z" level=info msg="StartContainer for \"a170b36f7db024c8e030934f137e24360eaf68b0276c7867c7a655c88e6d25a6\" returns successfully"
Sep 5 00:32:24.945174 systemd-networkd[1384]: cali899fb435dd9: Gained IPv6LL
Sep 5 00:32:25.139183 systemd-networkd[1384]: caliba5b7f6a825: Gained IPv6LL
Sep 5 00:32:25.229795 containerd[1446]: time="2025-09-05T00:32:25.229684359Z" level=info msg="StopPodSandbox for \"8313429ede94278ad5c5a1ab799089fc13106b9f22101dace780893eeab73a35\""
Sep 5 00:32:25.257380 containerd[1446]: time="2025-09-05T00:32:25.257336421Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:32:25.258117 containerd[1446]: time="2025-09-05T00:32:25.258084237Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807"
Sep 5 00:32:25.259262 containerd[1446]: time="2025-09-05T00:32:25.259224562Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:32:25.265605 containerd[1446]: time="2025-09-05T00:32:25.265562845Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:32:25.267429 containerd[1446]: time="2025-09-05T00:32:25.267397468Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 3.726616073s"
Sep 5 00:32:25.267593 containerd[1446]: time="2025-09-05T00:32:25.267432467Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\""
Sep 5 00:32:25.270113 containerd[1446]: time="2025-09-05T00:32:25.270086745Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\""
Sep 5 00:32:25.271619 containerd[1446]: time="2025-09-05T00:32:25.271514061Z" level=info msg="CreateContainer within sandbox \"03e22a11c913ed9e2632d71523116f0e2d4d71a227394cd0dd7accc15d79fd53\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 5 00:32:25.288728 containerd[1446]: time="2025-09-05T00:32:25.288679968Z" level=info msg="CreateContainer within sandbox \"03e22a11c913ed9e2632d71523116f0e2d4d71a227394cd0dd7accc15d79fd53\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6d79da69ccf669faca19f97afd13205f5690cddb458eaa4bac0442760a8af689\""
Sep 5 00:32:25.289918 containerd[1446]: time="2025-09-05T00:32:25.289462624Z" level=info msg="StartContainer for \"6d79da69ccf669faca19f97afd13205f5690cddb458eaa4bac0442760a8af689\""
Sep 5 00:32:25.327055 systemd[1]: Started cri-containerd-6d79da69ccf669faca19f97afd13205f5690cddb458eaa4bac0442760a8af689.scope - libcontainer container 6d79da69ccf669faca19f97afd13205f5690cddb458eaa4bac0442760a8af689.
Sep 5 00:32:25.330720 containerd[1446]: 2025-09-05 00:32:25.286 [INFO][4915] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8313429ede94278ad5c5a1ab799089fc13106b9f22101dace780893eeab73a35"
Sep 5 00:32:25.330720 containerd[1446]: 2025-09-05 00:32:25.286 [INFO][4915] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8313429ede94278ad5c5a1ab799089fc13106b9f22101dace780893eeab73a35" iface="eth0" netns="/var/run/netns/cni-2df20f28-6f0c-227e-babc-c36482a672ee"
Sep 5 00:32:25.330720 containerd[1446]: 2025-09-05 00:32:25.286 [INFO][4915] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8313429ede94278ad5c5a1ab799089fc13106b9f22101dace780893eeab73a35" iface="eth0" netns="/var/run/netns/cni-2df20f28-6f0c-227e-babc-c36482a672ee"
Sep 5 00:32:25.330720 containerd[1446]: 2025-09-05 00:32:25.286 [INFO][4915] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8313429ede94278ad5c5a1ab799089fc13106b9f22101dace780893eeab73a35" iface="eth0" netns="/var/run/netns/cni-2df20f28-6f0c-227e-babc-c36482a672ee"
Sep 5 00:32:25.330720 containerd[1446]: 2025-09-05 00:32:25.286 [INFO][4915] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8313429ede94278ad5c5a1ab799089fc13106b9f22101dace780893eeab73a35"
Sep 5 00:32:25.330720 containerd[1446]: 2025-09-05 00:32:25.286 [INFO][4915] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8313429ede94278ad5c5a1ab799089fc13106b9f22101dace780893eeab73a35"
Sep 5 00:32:25.330720 containerd[1446]: 2025-09-05 00:32:25.308 [INFO][4928] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8313429ede94278ad5c5a1ab799089fc13106b9f22101dace780893eeab73a35" HandleID="k8s-pod-network.8313429ede94278ad5c5a1ab799089fc13106b9f22101dace780893eeab73a35" Workload="localhost-k8s-calico--kube--controllers--869c68467b--v22j2-eth0"
Sep 5 00:32:25.330720 containerd[1446]: 2025-09-05 00:32:25.308 [INFO][4928] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 5 00:32:25.330720 containerd[1446]: 2025-09-05 00:32:25.309 [INFO][4928] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 5 00:32:25.330720 containerd[1446]: 2025-09-05 00:32:25.317 [WARNING][4928] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8313429ede94278ad5c5a1ab799089fc13106b9f22101dace780893eeab73a35" HandleID="k8s-pod-network.8313429ede94278ad5c5a1ab799089fc13106b9f22101dace780893eeab73a35" Workload="localhost-k8s-calico--kube--controllers--869c68467b--v22j2-eth0"
Sep 5 00:32:25.330720 containerd[1446]: 2025-09-05 00:32:25.317 [INFO][4928] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8313429ede94278ad5c5a1ab799089fc13106b9f22101dace780893eeab73a35" HandleID="k8s-pod-network.8313429ede94278ad5c5a1ab799089fc13106b9f22101dace780893eeab73a35" Workload="localhost-k8s-calico--kube--controllers--869c68467b--v22j2-eth0"
Sep 5 00:32:25.330720 containerd[1446]: 2025-09-05 00:32:25.320 [INFO][4928] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 5 00:32:25.330720 containerd[1446]: 2025-09-05 00:32:25.323 [INFO][4915] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8313429ede94278ad5c5a1ab799089fc13106b9f22101dace780893eeab73a35"
Sep 5 00:32:25.332523 containerd[1446]: time="2025-09-05T00:32:25.332405811Z" level=info msg="TearDown network for sandbox \"8313429ede94278ad5c5a1ab799089fc13106b9f22101dace780893eeab73a35\" successfully"
Sep 5 00:32:25.332523 containerd[1446]: time="2025-09-05T00:32:25.332439650Z" level=info msg="StopPodSandbox for \"8313429ede94278ad5c5a1ab799089fc13106b9f22101dace780893eeab73a35\" returns successfully"
Sep 5 00:32:25.333228 containerd[1446]: time="2025-09-05T00:32:25.333197626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-869c68467b-v22j2,Uid:19de0c52-9a52-436c-b9e4-3190b3fe0247,Namespace:calico-system,Attempt:1,}"
Sep 5 00:32:25.360427 containerd[1446]: time="2025-09-05T00:32:25.360393822Z" level=info msg="StartContainer for \"6d79da69ccf669faca19f97afd13205f5690cddb458eaa4bac0442760a8af689\" returns successfully"
Sep 5 00:32:25.475372 kubelet[2465]: E0905 00:32:25.474842 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:32:25.475372 kubelet[2465]: E0905 00:32:25.475061 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:32:25.485520 systemd-networkd[1384]: cali0b0aa941d75: Link UP
Sep 5 00:32:25.485690 systemd-networkd[1384]: cali0b0aa941d75: Gained carrier
Sep 5 00:32:25.487789 kubelet[2465]: I0905 00:32:25.487729 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7b94b9658b-ngd47" podStartSLOduration=25.758370806 podStartE2EDuration="29.487713031s" podCreationTimestamp="2025-09-05 00:31:56 +0000 UTC" firstStartedPulling="2025-09-05 00:32:21.54008754 +0000 UTC m=+40.411137794" lastFinishedPulling="2025-09-05 00:32:25.269429765 +0000 UTC m=+44.140480019" observedRunningTime="2025-09-05 00:32:25.487441 +0000 UTC m=+44.358491254" watchObservedRunningTime="2025-09-05 00:32:25.487713031 +0000 UTC m=+44.358763285"
Sep 5 00:32:25.500743 kubelet[2465]: I0905 00:32:25.500583 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-vx8zc" podStartSLOduration=37.500563993 podStartE2EDuration="37.500563993s" podCreationTimestamp="2025-09-05 00:31:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:32:25.499112518 +0000 UTC m=+44.370162772" watchObservedRunningTime="2025-09-05 00:32:25.500563993 +0000 UTC m=+44.371614247"
Sep 5 00:32:25.505275 containerd[1446]: 2025-09-05 00:32:25.379 [INFO][4959] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--869c68467b--v22j2-eth0 calico-kube-controllers-869c68467b- calico-system 19de0c52-9a52-436c-b9e4-3190b3fe0247 991 0 2025-09-05 00:32:00 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:869c68467b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-869c68467b-v22j2 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali0b0aa941d75 [] [] }} ContainerID="6a16c4cb13ab3cc0b92432402b5d3bb45d3a9b72d4a67b000a375cea7965b6e3" Namespace="calico-system" Pod="calico-kube-controllers-869c68467b-v22j2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--869c68467b--v22j2-"
Sep 5 00:32:25.505275 containerd[1446]: 2025-09-05 00:32:25.379 [INFO][4959] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6a16c4cb13ab3cc0b92432402b5d3bb45d3a9b72d4a67b000a375cea7965b6e3" Namespace="calico-system" Pod="calico-kube-controllers-869c68467b-v22j2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--869c68467b--v22j2-eth0"
Sep 5 00:32:25.505275 containerd[1446]: 2025-09-05 00:32:25.438 [INFO][4986] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6a16c4cb13ab3cc0b92432402b5d3bb45d3a9b72d4a67b000a375cea7965b6e3" HandleID="k8s-pod-network.6a16c4cb13ab3cc0b92432402b5d3bb45d3a9b72d4a67b000a375cea7965b6e3" Workload="localhost-k8s-calico--kube--controllers--869c68467b--v22j2-eth0"
Sep 5 00:32:25.505275 containerd[1446]: 2025-09-05 00:32:25.439 [INFO][4986] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6a16c4cb13ab3cc0b92432402b5d3bb45d3a9b72d4a67b000a375cea7965b6e3" HandleID="k8s-pod-network.6a16c4cb13ab3cc0b92432402b5d3bb45d3a9b72d4a67b000a375cea7965b6e3" Workload="localhost-k8s-calico--kube--controllers--869c68467b--v22j2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004dbd0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-869c68467b-v22j2", "timestamp":"2025-09-05 00:32:25.438895786 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 5 00:32:25.505275 containerd[1446]: 2025-09-05 00:32:25.439 [INFO][4986] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 5 00:32:25.505275 containerd[1446]: 2025-09-05 00:32:25.439 [INFO][4986] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 5 00:32:25.505275 containerd[1446]: 2025-09-05 00:32:25.439 [INFO][4986] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep 5 00:32:25.505275 containerd[1446]: 2025-09-05 00:32:25.450 [INFO][4986] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6a16c4cb13ab3cc0b92432402b5d3bb45d3a9b72d4a67b000a375cea7965b6e3" host="localhost"
Sep 5 00:32:25.505275 containerd[1446]: 2025-09-05 00:32:25.454 [INFO][4986] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Sep 5 00:32:25.505275 containerd[1446]: 2025-09-05 00:32:25.460 [INFO][4986] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Sep 5 00:32:25.505275 containerd[1446]: 2025-09-05 00:32:25.462 [INFO][4986] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep 5 00:32:25.505275 containerd[1446]: 2025-09-05 00:32:25.464 [INFO][4986] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep 5 00:32:25.505275 containerd[1446]: 2025-09-05 00:32:25.464 [INFO][4986] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6a16c4cb13ab3cc0b92432402b5d3bb45d3a9b72d4a67b000a375cea7965b6e3" host="localhost"
Sep 5 00:32:25.505275 containerd[1446]: 2025-09-05 00:32:25.465 [INFO][4986] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6a16c4cb13ab3cc0b92432402b5d3bb45d3a9b72d4a67b000a375cea7965b6e3
Sep 5 00:32:25.505275 containerd[1446]: 2025-09-05 00:32:25.469 [INFO][4986] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6a16c4cb13ab3cc0b92432402b5d3bb45d3a9b72d4a67b000a375cea7965b6e3" host="localhost"
Sep 5 00:32:25.505275 containerd[1446]: 2025-09-05 00:32:25.478 [INFO][4986] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.6a16c4cb13ab3cc0b92432402b5d3bb45d3a9b72d4a67b000a375cea7965b6e3" host="localhost"
Sep 5 00:32:25.505275 containerd[1446]: 2025-09-05 00:32:25.478 [INFO][4986] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.6a16c4cb13ab3cc0b92432402b5d3bb45d3a9b72d4a67b000a375cea7965b6e3" host="localhost"
Sep 5 00:32:25.505275 containerd[1446]: 2025-09-05 00:32:25.478 [INFO][4986] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 5 00:32:25.505275 containerd[1446]: 2025-09-05 00:32:25.478 [INFO][4986] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="6a16c4cb13ab3cc0b92432402b5d3bb45d3a9b72d4a67b000a375cea7965b6e3" HandleID="k8s-pod-network.6a16c4cb13ab3cc0b92432402b5d3bb45d3a9b72d4a67b000a375cea7965b6e3" Workload="localhost-k8s-calico--kube--controllers--869c68467b--v22j2-eth0"
Sep 5 00:32:25.506101 containerd[1446]: 2025-09-05 00:32:25.480 [INFO][4959] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6a16c4cb13ab3cc0b92432402b5d3bb45d3a9b72d4a67b000a375cea7965b6e3" Namespace="calico-system" Pod="calico-kube-controllers-869c68467b-v22j2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--869c68467b--v22j2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--869c68467b--v22j2-eth0", GenerateName:"calico-kube-controllers-869c68467b-", Namespace:"calico-system", SelfLink:"", UID:"19de0c52-9a52-436c-b9e4-3190b3fe0247", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 32, 0, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"869c68467b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-869c68467b-v22j2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0b0aa941d75", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 5 00:32:25.506101 containerd[1446]: 2025-09-05 00:32:25.480 [INFO][4959] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="6a16c4cb13ab3cc0b92432402b5d3bb45d3a9b72d4a67b000a375cea7965b6e3" Namespace="calico-system" Pod="calico-kube-controllers-869c68467b-v22j2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--869c68467b--v22j2-eth0"
Sep 5 00:32:25.506101 containerd[1446]: 2025-09-05 00:32:25.480 [INFO][4959] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0b0aa941d75 ContainerID="6a16c4cb13ab3cc0b92432402b5d3bb45d3a9b72d4a67b000a375cea7965b6e3" Namespace="calico-system" Pod="calico-kube-controllers-869c68467b-v22j2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--869c68467b--v22j2-eth0"
Sep 5 00:32:25.506101 containerd[1446]: 2025-09-05 00:32:25.484 [INFO][4959] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6a16c4cb13ab3cc0b92432402b5d3bb45d3a9b72d4a67b000a375cea7965b6e3" Namespace="calico-system" Pod="calico-kube-controllers-869c68467b-v22j2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--869c68467b--v22j2-eth0"
Sep 5 00:32:25.506101 containerd[1446]: 2025-09-05 00:32:25.485 [INFO][4959] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6a16c4cb13ab3cc0b92432402b5d3bb45d3a9b72d4a67b000a375cea7965b6e3" Namespace="calico-system" Pod="calico-kube-controllers-869c68467b-v22j2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--869c68467b--v22j2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--869c68467b--v22j2-eth0", GenerateName:"calico-kube-controllers-869c68467b-", Namespace:"calico-system", SelfLink:"", UID:"19de0c52-9a52-436c-b9e4-3190b3fe0247", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 32, 0, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"869c68467b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6a16c4cb13ab3cc0b92432402b5d3bb45d3a9b72d4a67b000a375cea7965b6e3", Pod:"calico-kube-controllers-869c68467b-v22j2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0b0aa941d75", MAC:"62:fd:41:bd:d2:c3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 5 00:32:25.506101 containerd[1446]: 2025-09-05 00:32:25.502 [INFO][4959] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6a16c4cb13ab3cc0b92432402b5d3bb45d3a9b72d4a67b000a375cea7965b6e3" Namespace="calico-system" Pod="calico-kube-controllers-869c68467b-v22j2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--869c68467b--v22j2-eth0"
Sep 5 00:32:25.520201 systemd-networkd[1384]: calic6e18199175: Gained IPv6LL
Sep 5 00:32:25.554469 containerd[1446]: time="2025-09-05T00:32:25.553967015Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 5 00:32:25.554601 containerd[1446]: time="2025-09-05T00:32:25.554482639Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 5 00:32:25.554601 containerd[1446]: time="2025-09-05T00:32:25.554506359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 00:32:25.554689 containerd[1446]: time="2025-09-05T00:32:25.554598836Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 00:32:25.574044 systemd[1]: Started cri-containerd-6a16c4cb13ab3cc0b92432402b5d3bb45d3a9b72d4a67b000a375cea7965b6e3.scope - libcontainer container 6a16c4cb13ab3cc0b92432402b5d3bb45d3a9b72d4a67b000a375cea7965b6e3.
Sep 5 00:32:25.593066 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 5 00:32:25.617283 containerd[1446]: time="2025-09-05T00:32:25.617244852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-869c68467b-v22j2,Uid:19de0c52-9a52-436c-b9e4-3190b3fe0247,Namespace:calico-system,Attempt:1,} returns sandbox id \"6a16c4cb13ab3cc0b92432402b5d3bb45d3a9b72d4a67b000a375cea7965b6e3\""
Sep 5 00:32:25.668084 systemd[1]: Started sshd@7-10.0.0.114:22-10.0.0.1:34944.service - OpenSSH per-connection server daemon (10.0.0.1:34944).
Sep 5 00:32:25.716203 sshd[5067]: Accepted publickey for core from 10.0.0.1 port 34944 ssh2: RSA SHA256:AGG14PALIK3Z6LO0MtFBMGyAQes3xIvrcXGuLmkN81k
Sep 5 00:32:25.717668 sshd[5067]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:32:25.722939 systemd-logind[1422]: New session 8 of user core.
Sep 5 00:32:25.732060 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 5 00:32:25.906019 systemd-networkd[1384]: cali29b32730be7: Gained IPv6LL
Sep 5 00:32:25.967374 sshd[5067]: pam_unix(sshd:session): session closed for user core
Sep 5 00:32:25.968062 systemd-networkd[1384]: calicef73ae12ce: Gained IPv6LL
Sep 5 00:32:25.971031 systemd[1]: sshd@7-10.0.0.114:22-10.0.0.1:34944.service: Deactivated successfully.
Sep 5 00:32:25.973266 systemd[1]: session-8.scope: Deactivated successfully.
Sep 5 00:32:25.974650 systemd-logind[1422]: Session 8 logged out. Waiting for processes to exit.
Sep 5 00:32:25.975384 systemd-logind[1422]: Removed session 8.
Sep 5 00:32:26.168731 systemd[1]: run-containerd-runc-k8s.io-6d79da69ccf669faca19f97afd13205f5690cddb458eaa4bac0442760a8af689-runc.dWL9mw.mount: Deactivated successfully.
Sep 5 00:32:26.168834 systemd[1]: run-netns-cni\x2d2df20f28\x2d6f0c\x2d227e\x2dbabc\x2dc36482a672ee.mount: Deactivated successfully.
Sep 5 00:32:26.478264 kubelet[2465]: I0905 00:32:26.478012 2465 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 5 00:32:26.478577 kubelet[2465]: E0905 00:32:26.478548 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:32:26.478668 kubelet[2465]: E0905 00:32:26.478647 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:32:26.543980 systemd-networkd[1384]: cali0b0aa941d75: Gained IPv6LL
Sep 5 00:32:27.481422 kubelet[2465]: E0905 00:32:27.481213 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:32:27.481845 kubelet[2465]: E0905 00:32:27.481643 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:32:27.649142 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1029416588.mount: Deactivated successfully.
Sep 5 00:32:30.980800 systemd[1]: Started sshd@8-10.0.0.114:22-10.0.0.1:49792.service - OpenSSH per-connection server daemon (10.0.0.1:49792).
Sep 5 00:32:31.031052 sshd[5092]: Accepted publickey for core from 10.0.0.1 port 49792 ssh2: RSA SHA256:AGG14PALIK3Z6LO0MtFBMGyAQes3xIvrcXGuLmkN81k
Sep 5 00:32:31.032714 sshd[5092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:32:31.036730 systemd-logind[1422]: New session 9 of user core.
Sep 5 00:32:31.047005 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 5 00:32:31.283097 sshd[5092]: pam_unix(sshd:session): session closed for user core
Sep 5 00:32:31.286950 systemd[1]: sshd@8-10.0.0.114:22-10.0.0.1:49792.service: Deactivated successfully.
Sep 5 00:32:31.289785 systemd[1]: session-9.scope: Deactivated successfully.
Sep 5 00:32:31.290608 systemd-logind[1422]: Session 9 logged out. Waiting for processes to exit.
Sep 5 00:32:31.291650 systemd-logind[1422]: Removed session 9.
Sep 5 00:32:36.059280 containerd[1446]: time="2025-09-05T00:32:36.058839257Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:32:36.059948 containerd[1446]: time="2025-09-05T00:32:36.059919152Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332"
Sep 5 00:32:36.060635 containerd[1446]: time="2025-09-05T00:32:36.060592016Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:32:36.063131 containerd[1446]: time="2025-09-05T00:32:36.063089598Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:32:36.064192 containerd[1446]: time="2025-09-05T00:32:36.064153853Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 10.79403279s"
Sep 5 00:32:36.064192 containerd[1446]: time="2025-09-05T00:32:36.064190732Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\""
Sep 5 00:32:36.065812 containerd[1446]: time="2025-09-05T00:32:36.065787775Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\""
Sep 5 00:32:36.067670 containerd[1446]: time="2025-09-05T00:32:36.067539935Z" level=info msg="CreateContainer within sandbox \"6152aff9e9f5741a7391f4609ed68237301c01da7d499e969971648f42293eda\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Sep 5 00:32:36.081608 containerd[1446]: time="2025-09-05T00:32:36.081493291Z" level=info msg="CreateContainer within sandbox \"6152aff9e9f5741a7391f4609ed68237301c01da7d499e969971648f42293eda\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"9615685fd22a98b13702ba1c08c55a1eb4de9796fd1ade34a08b0e45e1f49b55\""
Sep 5 00:32:36.082366 containerd[1446]: time="2025-09-05T00:32:36.082205715Z" level=info msg="StartContainer for \"9615685fd22a98b13702ba1c08c55a1eb4de9796fd1ade34a08b0e45e1f49b55\""
Sep 5 00:32:36.111014 systemd[1]: Started cri-containerd-9615685fd22a98b13702ba1c08c55a1eb4de9796fd1ade34a08b0e45e1f49b55.scope - libcontainer container 9615685fd22a98b13702ba1c08c55a1eb4de9796fd1ade34a08b0e45e1f49b55.
Sep 5 00:32:36.148703 containerd[1446]: time="2025-09-05T00:32:36.148664454Z" level=info msg="StartContainer for \"9615685fd22a98b13702ba1c08c55a1eb4de9796fd1ade34a08b0e45e1f49b55\" returns successfully"
Sep 5 00:32:36.295503 systemd[1]: Started sshd@9-10.0.0.114:22-10.0.0.1:49800.service - OpenSSH per-connection server daemon (10.0.0.1:49800).
Sep 5 00:32:36.347349 sshd[5171]: Accepted publickey for core from 10.0.0.1 port 49800 ssh2: RSA SHA256:AGG14PALIK3Z6LO0MtFBMGyAQes3xIvrcXGuLmkN81k
Sep 5 00:32:36.348674 sshd[5171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:32:36.352098 systemd-logind[1422]: New session 10 of user core.
Sep 5 00:32:36.356994 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 5 00:32:36.518294 kubelet[2465]: I0905 00:32:36.518154 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-s4h56" podStartSLOduration=25.092103545 podStartE2EDuration="37.518066407s" podCreationTimestamp="2025-09-05 00:31:59 +0000 UTC" firstStartedPulling="2025-09-05 00:32:23.638980893 +0000 UTC m=+42.510031147" lastFinishedPulling="2025-09-05 00:32:36.064943755 +0000 UTC m=+54.935994009" observedRunningTime="2025-09-05 00:32:36.517319425 +0000 UTC m=+55.388369679" watchObservedRunningTime="2025-09-05 00:32:36.518066407 +0000 UTC m=+55.389116661"
Sep 5 00:32:36.596653 sshd[5171]: pam_unix(sshd:session): session closed for user core
Sep 5 00:32:36.606041 systemd[1]: sshd@9-10.0.0.114:22-10.0.0.1:49800.service: Deactivated successfully.
Sep 5 00:32:36.608391 systemd[1]: session-10.scope: Deactivated successfully.
Sep 5 00:32:36.610651 systemd-logind[1422]: Session 10 logged out. Waiting for processes to exit.
Sep 5 00:32:36.616456 systemd[1]: Started sshd@10-10.0.0.114:22-10.0.0.1:49806.service - OpenSSH per-connection server daemon (10.0.0.1:49806).
Sep 5 00:32:36.619237 systemd-logind[1422]: Removed session 10.
Sep 5 00:32:36.652194 sshd[5211]: Accepted publickey for core from 10.0.0.1 port 49806 ssh2: RSA SHA256:AGG14PALIK3Z6LO0MtFBMGyAQes3xIvrcXGuLmkN81k
Sep 5 00:32:36.653457 sshd[5211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:32:36.657356 systemd-logind[1422]: New session 11 of user core.
Sep 5 00:32:36.663982 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 5 00:32:36.910787 sshd[5211]: pam_unix(sshd:session): session closed for user core
Sep 5 00:32:36.924142 systemd[1]: sshd@10-10.0.0.114:22-10.0.0.1:49806.service: Deactivated successfully.
Sep 5 00:32:36.927232 systemd[1]: session-11.scope: Deactivated successfully.
Sep 5 00:32:36.930039 systemd-logind[1422]: Session 11 logged out. Waiting for processes to exit.
Sep 5 00:32:36.937137 systemd[1]: Started sshd@11-10.0.0.114:22-10.0.0.1:49810.service - OpenSSH per-connection server daemon (10.0.0.1:49810).
Sep 5 00:32:36.938103 systemd-logind[1422]: Removed session 11.
Sep 5 00:32:36.974958 sshd[5224]: Accepted publickey for core from 10.0.0.1 port 49810 ssh2: RSA SHA256:AGG14PALIK3Z6LO0MtFBMGyAQes3xIvrcXGuLmkN81k
Sep 5 00:32:36.976168 sshd[5224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:32:36.979698 systemd-logind[1422]: New session 12 of user core.
Sep 5 00:32:36.989047 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 5 00:32:37.116310 sshd[5224]: pam_unix(sshd:session): session closed for user core
Sep 5 00:32:37.119743 systemd[1]: sshd@11-10.0.0.114:22-10.0.0.1:49810.service: Deactivated successfully.
Sep 5 00:32:37.121401 systemd[1]: session-12.scope: Deactivated successfully.
Sep 5 00:32:37.122134 systemd-logind[1422]: Session 12 logged out. Waiting for processes to exit.
Sep 5 00:32:37.122882 systemd-logind[1422]: Removed session 12.
Sep 5 00:32:37.580374 containerd[1446]: time="2025-09-05T00:32:37.580278355Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:32:37.580967 containerd[1446]: time="2025-09-05T00:32:37.580935781Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489"
Sep 5 00:32:37.582793 containerd[1446]: time="2025-09-05T00:32:37.582760979Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:32:37.586445 containerd[1446]: time="2025-09-05T00:32:37.585918548Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:32:37.586723 containerd[1446]: time="2025-09-05T00:32:37.586694450Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 1.520870876s"
Sep 5 00:32:37.586786 containerd[1446]: time="2025-09-05T00:32:37.586729370Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\""
Sep 5 00:32:37.588320 containerd[1446]: time="2025-09-05T00:32:37.588256255Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\""
Sep 5 00:32:37.589121 containerd[1446]: time="2025-09-05T00:32:37.589047957Z" level=info msg="CreateContainer within sandbox \"41af5b36212e9d4860fdfdc0e6fda4dee38da12ad59831d8ecc1b1ff179e129a\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Sep 5 00:32:37.603635 containerd[1446]: time="2025-09-05T00:32:37.603493351Z" level=info msg="CreateContainer within sandbox \"41af5b36212e9d4860fdfdc0e6fda4dee38da12ad59831d8ecc1b1ff179e129a\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"dab35e831d2eac281f849a38869dcc229309fb16b5ddf9555f3094df81a509bd\""
Sep 5 00:32:37.603967 containerd[1446]: time="2025-09-05T00:32:37.603933261Z" level=info msg="StartContainer for \"dab35e831d2eac281f849a38869dcc229309fb16b5ddf9555f3094df81a509bd\""
Sep 5 00:32:37.641018 systemd[1]: Started cri-containerd-dab35e831d2eac281f849a38869dcc229309fb16b5ddf9555f3094df81a509bd.scope - libcontainer container dab35e831d2eac281f849a38869dcc229309fb16b5ddf9555f3094df81a509bd.
Sep 5 00:32:37.664621 containerd[1446]: time="2025-09-05T00:32:37.664582570Z" level=info msg="StartContainer for \"dab35e831d2eac281f849a38869dcc229309fb16b5ddf9555f3094df81a509bd\" returns successfully"
Sep 5 00:32:38.004008 containerd[1446]: time="2025-09-05T00:32:38.003587149Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:32:38.004261 containerd[1446]: time="2025-09-05T00:32:38.004221935Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77"
Sep 5 00:32:38.006463 containerd[1446]: time="2025-09-05T00:32:38.006430046Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 418.140192ms"
Sep 5 00:32:38.006553 containerd[1446]: time="2025-09-05T00:32:38.006466726Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\""
Sep 5 00:32:38.007990 containerd[1446]: time="2025-09-05T00:32:38.007768577Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\""
Sep 5 00:32:38.008604 containerd[1446]: time="2025-09-05T00:32:38.008561399Z" level=info msg="CreateContainer within sandbox \"683350299dfce9222ad2e0621de04844a78eef45748eb23b6f481c60c42e3829\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 5 00:32:38.017975 containerd[1446]: time="2025-09-05T00:32:38.017938353Z" level=info msg="CreateContainer within sandbox \"683350299dfce9222ad2e0621de04844a78eef45748eb23b6f481c60c42e3829\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"03a5d1451c6bebfdf583c616479627b019583eaf4fa06513e4b0d8820bbf2f77\""
Sep 5 00:32:38.018344 containerd[1446]: time="2025-09-05T00:32:38.018322584Z" level=info msg="StartContainer for \"03a5d1451c6bebfdf583c616479627b019583eaf4fa06513e4b0d8820bbf2f77\""
Sep 5 00:32:38.041006 systemd[1]: Started cri-containerd-03a5d1451c6bebfdf583c616479627b019583eaf4fa06513e4b0d8820bbf2f77.scope - libcontainer container 03a5d1451c6bebfdf583c616479627b019583eaf4fa06513e4b0d8820bbf2f77.
Sep 5 00:32:38.069281 containerd[1446]: time="2025-09-05T00:32:38.069234903Z" level=info msg="StartContainer for \"03a5d1451c6bebfdf583c616479627b019583eaf4fa06513e4b0d8820bbf2f77\" returns successfully"
Sep 5 00:32:38.586885 kubelet[2465]: I0905 00:32:38.585906 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7b94b9658b-jdnpd" podStartSLOduration=29.471844824 podStartE2EDuration="42.585888438s" podCreationTimestamp="2025-09-05 00:31:56 +0000 UTC" firstStartedPulling="2025-09-05 00:32:24.893122576 +0000 UTC m=+43.764172790" lastFinishedPulling="2025-09-05 00:32:38.00716615 +0000 UTC m=+56.878216404" observedRunningTime="2025-09-05 00:32:38.5848885 +0000 UTC m=+57.455938754" watchObservedRunningTime="2025-09-05 00:32:38.585888438 +0000 UTC m=+57.456938732"
Sep 5 00:32:39.535075 kubelet[2465]: I0905 00:32:39.535026 2465 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 5 00:32:40.649956 containerd[1446]: time="2025-09-05T00:32:40.649842733Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:32:40.650950 containerd[1446]: time="2025-09-05T00:32:40.650908671Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957"
Sep 5 00:32:40.651839 containerd[1446]: time="2025-09-05T00:32:40.651796932Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:32:40.653647 containerd[1446]: time="2025-09-05T00:32:40.653614774Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:32:40.654471 containerd[1446]: time="2025-09-05T00:32:40.654437797Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 2.646630341s"
Sep 5 00:32:40.654525 containerd[1446]: time="2025-09-05T00:32:40.654479636Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\""
Sep 5 00:32:40.655439 containerd[1446]: time="2025-09-05T00:32:40.655416736Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\""
Sep 5 00:32:40.662132 containerd[1446]: time="2025-09-05T00:32:40.662099516Z" level=info msg="CreateContainer within sandbox \"6a16c4cb13ab3cc0b92432402b5d3bb45d3a9b72d4a67b000a375cea7965b6e3\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Sep 5 00:32:40.676892 containerd[1446]: time="2025-09-05T00:32:40.676800008Z" level=info msg="CreateContainer within sandbox \"6a16c4cb13ab3cc0b92432402b5d3bb45d3a9b72d4a67b000a375cea7965b6e3\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"47ade15fc7db2651e76bfed4a025c885e71a0dc4417f98e931c80679712de08d\""
Sep 5 00:32:40.677414 containerd[1446]: time="2025-09-05T00:32:40.677387596Z" level=info msg="StartContainer for \"47ade15fc7db2651e76bfed4a025c885e71a0dc4417f98e931c80679712de08d\""
Sep 5 00:32:40.703016 systemd[1]: Started cri-containerd-47ade15fc7db2651e76bfed4a025c885e71a0dc4417f98e931c80679712de08d.scope - libcontainer container 47ade15fc7db2651e76bfed4a025c885e71a0dc4417f98e931c80679712de08d.
Sep 5 00:32:40.736844 containerd[1446]: time="2025-09-05T00:32:40.736807151Z" level=info msg="StartContainer for \"47ade15fc7db2651e76bfed4a025c885e71a0dc4417f98e931c80679712de08d\" returns successfully"
Sep 5 00:32:41.227764 containerd[1446]: time="2025-09-05T00:32:41.227722822Z" level=info msg="StopPodSandbox for \"e714b25324219a1c25792314649e68dc99079fff5b9dd9159b644379c4079743\""
Sep 5 00:32:41.284686 kubelet[2465]: I0905 00:32:41.284068 2465 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 5 00:32:41.370505 containerd[1446]: 2025-09-05 00:32:41.303 [WARNING][5403] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e714b25324219a1c25792314649e68dc99079fff5b9dd9159b644379c4079743" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--s4h56-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"d3723c38-1afc-40d9-9c9f-ed74aafda062", ResourceVersion:"1116", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 31, 59, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6152aff9e9f5741a7391f4609ed68237301c01da7d499e969971648f42293eda", Pod:"goldmane-7988f88666-s4h56", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliba5b7f6a825", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 5 00:32:41.370505 containerd[1446]: 2025-09-05 00:32:41.304 [INFO][5403] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e714b25324219a1c25792314649e68dc99079fff5b9dd9159b644379c4079743"
Sep 5 00:32:41.370505 containerd[1446]: 2025-09-05 00:32:41.304 [INFO][5403] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e714b25324219a1c25792314649e68dc99079fff5b9dd9159b644379c4079743" iface="eth0" netns=""
Sep 5 00:32:41.370505 containerd[1446]: 2025-09-05 00:32:41.304 [INFO][5403] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e714b25324219a1c25792314649e68dc99079fff5b9dd9159b644379c4079743"
Sep 5 00:32:41.370505 containerd[1446]: 2025-09-05 00:32:41.304 [INFO][5403] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e714b25324219a1c25792314649e68dc99079fff5b9dd9159b644379c4079743"
Sep 5 00:32:41.370505 containerd[1446]: 2025-09-05 00:32:41.349 [INFO][5412] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e714b25324219a1c25792314649e68dc99079fff5b9dd9159b644379c4079743" HandleID="k8s-pod-network.e714b25324219a1c25792314649e68dc99079fff5b9dd9159b644379c4079743" Workload="localhost-k8s-goldmane--7988f88666--s4h56-eth0"
Sep 5 00:32:41.370505 containerd[1446]: 2025-09-05 00:32:41.349 [INFO][5412] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 5 00:32:41.370505 containerd[1446]: 2025-09-05 00:32:41.349 [INFO][5412] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 5 00:32:41.370505 containerd[1446]: 2025-09-05 00:32:41.361 [WARNING][5412] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e714b25324219a1c25792314649e68dc99079fff5b9dd9159b644379c4079743" HandleID="k8s-pod-network.e714b25324219a1c25792314649e68dc99079fff5b9dd9159b644379c4079743" Workload="localhost-k8s-goldmane--7988f88666--s4h56-eth0"
Sep 5 00:32:41.370505 containerd[1446]: 2025-09-05 00:32:41.361 [INFO][5412] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e714b25324219a1c25792314649e68dc99079fff5b9dd9159b644379c4079743" HandleID="k8s-pod-network.e714b25324219a1c25792314649e68dc99079fff5b9dd9159b644379c4079743" Workload="localhost-k8s-goldmane--7988f88666--s4h56-eth0"
Sep 5 00:32:41.370505 containerd[1446]: 2025-09-05 00:32:41.363 [INFO][5412] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 5 00:32:41.370505 containerd[1446]: 2025-09-05 00:32:41.366 [INFO][5403] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e714b25324219a1c25792314649e68dc99079fff5b9dd9159b644379c4079743"
Sep 5 00:32:41.371055 containerd[1446]: time="2025-09-05T00:32:41.370571023Z" level=info msg="TearDown network for sandbox \"e714b25324219a1c25792314649e68dc99079fff5b9dd9159b644379c4079743\" successfully"
Sep 5 00:32:41.371055 containerd[1446]: time="2025-09-05T00:32:41.370595783Z" level=info msg="StopPodSandbox for \"e714b25324219a1c25792314649e68dc99079fff5b9dd9159b644379c4079743\" returns successfully"
Sep 5 00:32:41.371158 containerd[1446]: time="2025-09-05T00:32:41.371115212Z" level=info msg="RemovePodSandbox for \"e714b25324219a1c25792314649e68dc99079fff5b9dd9159b644379c4079743\""
Sep 5 00:32:41.406409 containerd[1446]: time="2025-09-05T00:32:41.406348092Z" level=info msg="Forcibly stopping sandbox \"e714b25324219a1c25792314649e68dc99079fff5b9dd9159b644379c4079743\""
Sep 5 00:32:41.481079 containerd[1446]: 2025-09-05 00:32:41.448 [WARNING][5432] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e714b25324219a1c25792314649e68dc99079fff5b9dd9159b644379c4079743" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--s4h56-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"d3723c38-1afc-40d9-9c9f-ed74aafda062", ResourceVersion:"1116", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 31, 59, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6152aff9e9f5741a7391f4609ed68237301c01da7d499e969971648f42293eda", Pod:"goldmane-7988f88666-s4h56", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliba5b7f6a825", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 5 00:32:41.481079 containerd[1446]: 2025-09-05 00:32:41.449 [INFO][5432] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e714b25324219a1c25792314649e68dc99079fff5b9dd9159b644379c4079743"
Sep 5 00:32:41.481079 containerd[1446]: 2025-09-05 00:32:41.449 [INFO][5432] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e714b25324219a1c25792314649e68dc99079fff5b9dd9159b644379c4079743" iface="eth0" netns=""
Sep 5 00:32:41.481079 containerd[1446]: 2025-09-05 00:32:41.449 [INFO][5432] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e714b25324219a1c25792314649e68dc99079fff5b9dd9159b644379c4079743"
Sep 5 00:32:41.481079 containerd[1446]: 2025-09-05 00:32:41.449 [INFO][5432] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e714b25324219a1c25792314649e68dc99079fff5b9dd9159b644379c4079743"
Sep 5 00:32:41.481079 containerd[1446]: 2025-09-05 00:32:41.467 [INFO][5443] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e714b25324219a1c25792314649e68dc99079fff5b9dd9159b644379c4079743" HandleID="k8s-pod-network.e714b25324219a1c25792314649e68dc99079fff5b9dd9159b644379c4079743" Workload="localhost-k8s-goldmane--7988f88666--s4h56-eth0"
Sep 5 00:32:41.481079 containerd[1446]: 2025-09-05 00:32:41.468 [INFO][5443] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 5 00:32:41.481079 containerd[1446]: 2025-09-05 00:32:41.468 [INFO][5443] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 5 00:32:41.481079 containerd[1446]: 2025-09-05 00:32:41.476 [WARNING][5443] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e714b25324219a1c25792314649e68dc99079fff5b9dd9159b644379c4079743" HandleID="k8s-pod-network.e714b25324219a1c25792314649e68dc99079fff5b9dd9159b644379c4079743" Workload="localhost-k8s-goldmane--7988f88666--s4h56-eth0"
Sep 5 00:32:41.481079 containerd[1446]: 2025-09-05 00:32:41.476 [INFO][5443] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e714b25324219a1c25792314649e68dc99079fff5b9dd9159b644379c4079743" HandleID="k8s-pod-network.e714b25324219a1c25792314649e68dc99079fff5b9dd9159b644379c4079743" Workload="localhost-k8s-goldmane--7988f88666--s4h56-eth0"
Sep 5 00:32:41.481079 containerd[1446]: 2025-09-05 00:32:41.477 [INFO][5443] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 5 00:32:41.481079 containerd[1446]: 2025-09-05 00:32:41.479 [INFO][5432] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e714b25324219a1c25792314649e68dc99079fff5b9dd9159b644379c4079743"
Sep 5 00:32:41.481079 containerd[1446]: time="2025-09-05T00:32:41.481053645Z" level=info msg="TearDown network for sandbox \"e714b25324219a1c25792314649e68dc99079fff5b9dd9159b644379c4079743\" successfully"
Sep 5 00:32:41.499375 containerd[1446]: time="2025-09-05T00:32:41.499319152Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e714b25324219a1c25792314649e68dc99079fff5b9dd9159b644379c4079743\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 5 00:32:41.499479 containerd[1446]: time="2025-09-05T00:32:41.499412750Z" level=info msg="RemovePodSandbox \"e714b25324219a1c25792314649e68dc99079fff5b9dd9159b644379c4079743\" returns successfully"
Sep 5 00:32:41.500043 containerd[1446]: time="2025-09-05T00:32:41.499967499Z" level=info msg="StopPodSandbox for \"4736016e632a47b15b4e30f05f70413ded89cd08d32f3e62a6aec4284e1fa6dd\""
Sep 5 00:32:41.577961 containerd[1446]: 2025-09-05 00:32:41.529 [WARNING][5460] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="4736016e632a47b15b4e30f05f70413ded89cd08d32f3e62a6aec4284e1fa6dd" WorkloadEndpoint="localhost-k8s-whisker--6cd887f554--kvqvk-eth0"
Sep 5 00:32:41.577961 containerd[1446]: 2025-09-05 00:32:41.529 [INFO][5460] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4736016e632a47b15b4e30f05f70413ded89cd08d32f3e62a6aec4284e1fa6dd"
Sep 5 00:32:41.577961 containerd[1446]: 2025-09-05 00:32:41.529 [INFO][5460] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4736016e632a47b15b4e30f05f70413ded89cd08d32f3e62a6aec4284e1fa6dd" iface="eth0" netns=""
Sep 5 00:32:41.577961 containerd[1446]: 2025-09-05 00:32:41.529 [INFO][5460] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4736016e632a47b15b4e30f05f70413ded89cd08d32f3e62a6aec4284e1fa6dd"
Sep 5 00:32:41.577961 containerd[1446]: 2025-09-05 00:32:41.529 [INFO][5460] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4736016e632a47b15b4e30f05f70413ded89cd08d32f3e62a6aec4284e1fa6dd"
Sep 5 00:32:41.577961 containerd[1446]: 2025-09-05 00:32:41.562 [INFO][5469] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4736016e632a47b15b4e30f05f70413ded89cd08d32f3e62a6aec4284e1fa6dd" HandleID="k8s-pod-network.4736016e632a47b15b4e30f05f70413ded89cd08d32f3e62a6aec4284e1fa6dd" Workload="localhost-k8s-whisker--6cd887f554--kvqvk-eth0"
Sep 5 00:32:41.577961 containerd[1446]: 2025-09-05 00:32:41.562 [INFO][5469] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 5 00:32:41.577961 containerd[1446]: 2025-09-05 00:32:41.562 [INFO][5469] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 5 00:32:41.577961 containerd[1446]: 2025-09-05 00:32:41.571 [WARNING][5469] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4736016e632a47b15b4e30f05f70413ded89cd08d32f3e62a6aec4284e1fa6dd" HandleID="k8s-pod-network.4736016e632a47b15b4e30f05f70413ded89cd08d32f3e62a6aec4284e1fa6dd" Workload="localhost-k8s-whisker--6cd887f554--kvqvk-eth0"
Sep 5 00:32:41.577961 containerd[1446]: 2025-09-05 00:32:41.571 [INFO][5469] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4736016e632a47b15b4e30f05f70413ded89cd08d32f3e62a6aec4284e1fa6dd" HandleID="k8s-pod-network.4736016e632a47b15b4e30f05f70413ded89cd08d32f3e62a6aec4284e1fa6dd" Workload="localhost-k8s-whisker--6cd887f554--kvqvk-eth0"
Sep 5 00:32:41.577961 containerd[1446]: 2025-09-05 00:32:41.572 [INFO][5469] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 5 00:32:41.577961 containerd[1446]: 2025-09-05 00:32:41.575 [INFO][5460] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4736016e632a47b15b4e30f05f70413ded89cd08d32f3e62a6aec4284e1fa6dd"
Sep 5 00:32:41.578896 containerd[1446]: time="2025-09-05T00:32:41.577984625Z" level=info msg="TearDown network for sandbox \"4736016e632a47b15b4e30f05f70413ded89cd08d32f3e62a6aec4284e1fa6dd\" successfully"
Sep 5 00:32:41.578896 containerd[1446]: time="2025-09-05T00:32:41.578006624Z" level=info msg="StopPodSandbox for \"4736016e632a47b15b4e30f05f70413ded89cd08d32f3e62a6aec4284e1fa6dd\" returns successfully"
Sep 5 00:32:41.578896 containerd[1446]: time="2025-09-05T00:32:41.578439815Z" level=info msg="RemovePodSandbox for \"4736016e632a47b15b4e30f05f70413ded89cd08d32f3e62a6aec4284e1fa6dd\""
Sep 5 00:32:41.578896 containerd[1446]: time="2025-09-05T00:32:41.578468695Z" level=info msg="Forcibly stopping sandbox \"4736016e632a47b15b4e30f05f70413ded89cd08d32f3e62a6aec4284e1fa6dd\""
Sep 5 00:32:41.590519 kubelet[2465]: I0905 00:32:41.590446 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-869c68467b-v22j2" podStartSLOduration=26.55343449 podStartE2EDuration="41.590413611s" podCreationTimestamp="2025-09-05 00:32:00 +0000 UTC" firstStartedPulling="2025-09-05 00:32:25.618319538 +0000 UTC m=+44.489369792" lastFinishedPulling="2025-09-05 00:32:40.655298659 +0000 UTC m=+59.526348913" observedRunningTime="2025-09-05 00:32:41.552963336 +0000 UTC m=+60.424013590" watchObservedRunningTime="2025-09-05 00:32:41.590413611 +0000 UTC m=+60.461463865"
Sep 5 00:32:41.648035 containerd[1446]: 2025-09-05 00:32:41.613 [WARNING][5510] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="4736016e632a47b15b4e30f05f70413ded89cd08d32f3e62a6aec4284e1fa6dd" WorkloadEndpoint="localhost-k8s-whisker--6cd887f554--kvqvk-eth0"
Sep 5 00:32:41.648035 containerd[1446]: 2025-09-05 00:32:41.613 [INFO][5510] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4736016e632a47b15b4e30f05f70413ded89cd08d32f3e62a6aec4284e1fa6dd"
Sep 5 00:32:41.648035 containerd[1446]: 2025-09-05 00:32:41.613 [INFO][5510] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4736016e632a47b15b4e30f05f70413ded89cd08d32f3e62a6aec4284e1fa6dd" iface="eth0" netns=""
Sep 5 00:32:41.648035 containerd[1446]: 2025-09-05 00:32:41.613 [INFO][5510] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4736016e632a47b15b4e30f05f70413ded89cd08d32f3e62a6aec4284e1fa6dd"
Sep 5 00:32:41.648035 containerd[1446]: 2025-09-05 00:32:41.613 [INFO][5510] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4736016e632a47b15b4e30f05f70413ded89cd08d32f3e62a6aec4284e1fa6dd"
Sep 5 00:32:41.648035 containerd[1446]: 2025-09-05 00:32:41.632 [INFO][5518] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4736016e632a47b15b4e30f05f70413ded89cd08d32f3e62a6aec4284e1fa6dd" HandleID="k8s-pod-network.4736016e632a47b15b4e30f05f70413ded89cd08d32f3e62a6aec4284e1fa6dd" Workload="localhost-k8s-whisker--6cd887f554--kvqvk-eth0"
Sep 5 00:32:41.648035 containerd[1446]: 2025-09-05 00:32:41.632 [INFO][5518] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 5 00:32:41.648035 containerd[1446]: 2025-09-05 00:32:41.632 [INFO][5518] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 5 00:32:41.648035 containerd[1446]: 2025-09-05 00:32:41.643 [WARNING][5518] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4736016e632a47b15b4e30f05f70413ded89cd08d32f3e62a6aec4284e1fa6dd" HandleID="k8s-pod-network.4736016e632a47b15b4e30f05f70413ded89cd08d32f3e62a6aec4284e1fa6dd" Workload="localhost-k8s-whisker--6cd887f554--kvqvk-eth0"
Sep 5 00:32:41.648035 containerd[1446]: 2025-09-05 00:32:41.643 [INFO][5518] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4736016e632a47b15b4e30f05f70413ded89cd08d32f3e62a6aec4284e1fa6dd" HandleID="k8s-pod-network.4736016e632a47b15b4e30f05f70413ded89cd08d32f3e62a6aec4284e1fa6dd" Workload="localhost-k8s-whisker--6cd887f554--kvqvk-eth0"
Sep 5 00:32:41.648035 containerd[1446]: 2025-09-05 00:32:41.644 [INFO][5518] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 5 00:32:41.648035 containerd[1446]: 2025-09-05 00:32:41.646 [INFO][5510] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4736016e632a47b15b4e30f05f70413ded89cd08d32f3e62a6aec4284e1fa6dd"
Sep 5 00:32:41.648372 containerd[1446]: time="2025-09-05T00:32:41.648069832Z" level=info msg="TearDown network for sandbox \"4736016e632a47b15b4e30f05f70413ded89cd08d32f3e62a6aec4284e1fa6dd\" successfully"
Sep 5 00:32:41.650843 containerd[1446]: time="2025-09-05T00:32:41.650811496Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4736016e632a47b15b4e30f05f70413ded89cd08d32f3e62a6aec4284e1fa6dd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 5 00:32:41.651117 containerd[1446]: time="2025-09-05T00:32:41.650884975Z" level=info msg="RemovePodSandbox \"4736016e632a47b15b4e30f05f70413ded89cd08d32f3e62a6aec4284e1fa6dd\" returns successfully"
Sep 5 00:32:41.651815 containerd[1446]: time="2025-09-05T00:32:41.651538441Z" level=info msg="StopPodSandbox for \"0d857350e94f7419b72a77b0e2aa8cd7ce34079bf4c83b93076824bfeced2184\""
Sep 5 00:32:41.728537 containerd[1446]: 2025-09-05 00:32:41.692 [WARNING][5536] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0d857350e94f7419b72a77b0e2aa8cd7ce34079bf4c83b93076824bfeced2184" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--vx8zc-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"35c515fe-5ffe-4ef1-99c2-70d6c9f6b20d", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 31, 48, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ddcb2b692021f1a3c086c0f15a8a438ad2b3b76452630c2f9b5ce77337ce9ad4", Pod:"coredns-7c65d6cfc9-vx8zc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicef73ae12ce", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 5 00:32:41.728537 containerd[1446]: 2025-09-05 00:32:41.692 [INFO][5536] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0d857350e94f7419b72a77b0e2aa8cd7ce34079bf4c83b93076824bfeced2184"
Sep 5 00:32:41.728537 containerd[1446]: 2025-09-05 00:32:41.692 [INFO][5536] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0d857350e94f7419b72a77b0e2aa8cd7ce34079bf4c83b93076824bfeced2184" iface="eth0" netns=""
Sep 5 00:32:41.728537 containerd[1446]: 2025-09-05 00:32:41.692 [INFO][5536] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0d857350e94f7419b72a77b0e2aa8cd7ce34079bf4c83b93076824bfeced2184"
Sep 5 00:32:41.728537 containerd[1446]: 2025-09-05 00:32:41.692 [INFO][5536] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0d857350e94f7419b72a77b0e2aa8cd7ce34079bf4c83b93076824bfeced2184"
Sep 5 00:32:41.728537 containerd[1446]: 2025-09-05 00:32:41.711 [INFO][5545] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0d857350e94f7419b72a77b0e2aa8cd7ce34079bf4c83b93076824bfeced2184" HandleID="k8s-pod-network.0d857350e94f7419b72a77b0e2aa8cd7ce34079bf4c83b93076824bfeced2184" Workload="localhost-k8s-coredns--7c65d6cfc9--vx8zc-eth0"
Sep 5 00:32:41.728537 containerd[1446]: 2025-09-05 00:32:41.712 [INFO][5545] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 5 00:32:41.728537 containerd[1446]: 2025-09-05 00:32:41.712 [INFO][5545] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 5 00:32:41.728537 containerd[1446]: 2025-09-05 00:32:41.723 [WARNING][5545] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0d857350e94f7419b72a77b0e2aa8cd7ce34079bf4c83b93076824bfeced2184" HandleID="k8s-pod-network.0d857350e94f7419b72a77b0e2aa8cd7ce34079bf4c83b93076824bfeced2184" Workload="localhost-k8s-coredns--7c65d6cfc9--vx8zc-eth0" Sep 5 00:32:41.728537 containerd[1446]: 2025-09-05 00:32:41.723 [INFO][5545] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0d857350e94f7419b72a77b0e2aa8cd7ce34079bf4c83b93076824bfeced2184" HandleID="k8s-pod-network.0d857350e94f7419b72a77b0e2aa8cd7ce34079bf4c83b93076824bfeced2184" Workload="localhost-k8s-coredns--7c65d6cfc9--vx8zc-eth0" Sep 5 00:32:41.728537 containerd[1446]: 2025-09-05 00:32:41.725 [INFO][5545] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:32:41.728537 containerd[1446]: 2025-09-05 00:32:41.726 [INFO][5536] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0d857350e94f7419b72a77b0e2aa8cd7ce34079bf4c83b93076824bfeced2184" Sep 5 00:32:41.729214 containerd[1446]: time="2025-09-05T00:32:41.728584667Z" level=info msg="TearDown network for sandbox \"0d857350e94f7419b72a77b0e2aa8cd7ce34079bf4c83b93076824bfeced2184\" successfully" Sep 5 00:32:41.729214 containerd[1446]: time="2025-09-05T00:32:41.728608067Z" level=info msg="StopPodSandbox for \"0d857350e94f7419b72a77b0e2aa8cd7ce34079bf4c83b93076824bfeced2184\" returns successfully" Sep 5 00:32:41.730556 containerd[1446]: time="2025-09-05T00:32:41.730360591Z" level=info msg="RemovePodSandbox for \"0d857350e94f7419b72a77b0e2aa8cd7ce34079bf4c83b93076824bfeced2184\"" Sep 5 00:32:41.730556 containerd[1446]: time="2025-09-05T00:32:41.730398670Z" level=info msg="Forcibly stopping sandbox \"0d857350e94f7419b72a77b0e2aa8cd7ce34079bf4c83b93076824bfeced2184\"" Sep 5 00:32:41.810079 containerd[1446]: 2025-09-05 00:32:41.770 [WARNING][5562] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0d857350e94f7419b72a77b0e2aa8cd7ce34079bf4c83b93076824bfeced2184" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--vx8zc-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"35c515fe-5ffe-4ef1-99c2-70d6c9f6b20d", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 31, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ddcb2b692021f1a3c086c0f15a8a438ad2b3b76452630c2f9b5ce77337ce9ad4", Pod:"coredns-7c65d6cfc9-vx8zc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicef73ae12ce", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:32:41.810079 containerd[1446]: 2025-09-05 00:32:41.770 [INFO][5562] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0d857350e94f7419b72a77b0e2aa8cd7ce34079bf4c83b93076824bfeced2184" Sep 5 00:32:41.810079 containerd[1446]: 2025-09-05 00:32:41.770 [INFO][5562] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0d857350e94f7419b72a77b0e2aa8cd7ce34079bf4c83b93076824bfeced2184" iface="eth0" netns="" Sep 5 00:32:41.810079 containerd[1446]: 2025-09-05 00:32:41.770 [INFO][5562] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0d857350e94f7419b72a77b0e2aa8cd7ce34079bf4c83b93076824bfeced2184" Sep 5 00:32:41.810079 containerd[1446]: 2025-09-05 00:32:41.770 [INFO][5562] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0d857350e94f7419b72a77b0e2aa8cd7ce34079bf4c83b93076824bfeced2184" Sep 5 00:32:41.810079 containerd[1446]: 2025-09-05 00:32:41.794 [INFO][5571] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0d857350e94f7419b72a77b0e2aa8cd7ce34079bf4c83b93076824bfeced2184" HandleID="k8s-pod-network.0d857350e94f7419b72a77b0e2aa8cd7ce34079bf4c83b93076824bfeced2184" Workload="localhost-k8s-coredns--7c65d6cfc9--vx8zc-eth0" Sep 5 00:32:41.810079 containerd[1446]: 2025-09-05 00:32:41.794 [INFO][5571] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:32:41.810079 containerd[1446]: 2025-09-05 00:32:41.794 [INFO][5571] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 00:32:41.810079 containerd[1446]: 2025-09-05 00:32:41.803 [WARNING][5571] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0d857350e94f7419b72a77b0e2aa8cd7ce34079bf4c83b93076824bfeced2184" HandleID="k8s-pod-network.0d857350e94f7419b72a77b0e2aa8cd7ce34079bf4c83b93076824bfeced2184" Workload="localhost-k8s-coredns--7c65d6cfc9--vx8zc-eth0" Sep 5 00:32:41.810079 containerd[1446]: 2025-09-05 00:32:41.803 [INFO][5571] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0d857350e94f7419b72a77b0e2aa8cd7ce34079bf4c83b93076824bfeced2184" HandleID="k8s-pod-network.0d857350e94f7419b72a77b0e2aa8cd7ce34079bf4c83b93076824bfeced2184" Workload="localhost-k8s-coredns--7c65d6cfc9--vx8zc-eth0" Sep 5 00:32:41.810079 containerd[1446]: 2025-09-05 00:32:41.806 [INFO][5571] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:32:41.810079 containerd[1446]: 2025-09-05 00:32:41.808 [INFO][5562] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0d857350e94f7419b72a77b0e2aa8cd7ce34079bf4c83b93076824bfeced2184" Sep 5 00:32:41.810494 containerd[1446]: time="2025-09-05T00:32:41.810119401Z" level=info msg="TearDown network for sandbox \"0d857350e94f7419b72a77b0e2aa8cd7ce34079bf4c83b93076824bfeced2184\" successfully" Sep 5 00:32:41.813109 containerd[1446]: time="2025-09-05T00:32:41.812924944Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0d857350e94f7419b72a77b0e2aa8cd7ce34079bf4c83b93076824bfeced2184\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 00:32:41.813109 containerd[1446]: time="2025-09-05T00:32:41.812993662Z" level=info msg="RemovePodSandbox \"0d857350e94f7419b72a77b0e2aa8cd7ce34079bf4c83b93076824bfeced2184\" returns successfully" Sep 5 00:32:41.814265 containerd[1446]: time="2025-09-05T00:32:41.814087080Z" level=info msg="StopPodSandbox for \"845fc777a4a36d84129ce3646095dce8a8df461e48644a8ccb9f1110f4764d64\"" Sep 5 00:32:41.898059 containerd[1446]: 2025-09-05 00:32:41.856 [WARNING][5594] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="845fc777a4a36d84129ce3646095dce8a8df461e48644a8ccb9f1110f4764d64" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7b94b9658b--jdnpd-eth0", GenerateName:"calico-apiserver-7b94b9658b-", Namespace:"calico-apiserver", SelfLink:"", UID:"b4104721-2685-4b1f-94b6-007263e183aa", ResourceVersion:"1168", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 31, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b94b9658b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"683350299dfce9222ad2e0621de04844a78eef45748eb23b6f481c60c42e3829", Pod:"calico-apiserver-7b94b9658b-jdnpd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali29b32730be7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:32:41.898059 containerd[1446]: 2025-09-05 00:32:41.857 [INFO][5594] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="845fc777a4a36d84129ce3646095dce8a8df461e48644a8ccb9f1110f4764d64" Sep 5 00:32:41.898059 containerd[1446]: 2025-09-05 00:32:41.857 [INFO][5594] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="845fc777a4a36d84129ce3646095dce8a8df461e48644a8ccb9f1110f4764d64" iface="eth0" netns="" Sep 5 00:32:41.898059 containerd[1446]: 2025-09-05 00:32:41.857 [INFO][5594] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="845fc777a4a36d84129ce3646095dce8a8df461e48644a8ccb9f1110f4764d64" Sep 5 00:32:41.898059 containerd[1446]: 2025-09-05 00:32:41.857 [INFO][5594] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="845fc777a4a36d84129ce3646095dce8a8df461e48644a8ccb9f1110f4764d64" Sep 5 00:32:41.898059 containerd[1446]: 2025-09-05 00:32:41.878 [INFO][5603] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="845fc777a4a36d84129ce3646095dce8a8df461e48644a8ccb9f1110f4764d64" HandleID="k8s-pod-network.845fc777a4a36d84129ce3646095dce8a8df461e48644a8ccb9f1110f4764d64" Workload="localhost-k8s-calico--apiserver--7b94b9658b--jdnpd-eth0" Sep 5 00:32:41.898059 containerd[1446]: 2025-09-05 00:32:41.878 [INFO][5603] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:32:41.898059 containerd[1446]: 2025-09-05 00:32:41.879 [INFO][5603] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:32:41.898059 containerd[1446]: 2025-09-05 00:32:41.890 [WARNING][5603] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="845fc777a4a36d84129ce3646095dce8a8df461e48644a8ccb9f1110f4764d64" HandleID="k8s-pod-network.845fc777a4a36d84129ce3646095dce8a8df461e48644a8ccb9f1110f4764d64" Workload="localhost-k8s-calico--apiserver--7b94b9658b--jdnpd-eth0" Sep 5 00:32:41.898059 containerd[1446]: 2025-09-05 00:32:41.890 [INFO][5603] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="845fc777a4a36d84129ce3646095dce8a8df461e48644a8ccb9f1110f4764d64" HandleID="k8s-pod-network.845fc777a4a36d84129ce3646095dce8a8df461e48644a8ccb9f1110f4764d64" Workload="localhost-k8s-calico--apiserver--7b94b9658b--jdnpd-eth0" Sep 5 00:32:41.898059 containerd[1446]: 2025-09-05 00:32:41.892 [INFO][5603] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:32:41.898059 containerd[1446]: 2025-09-05 00:32:41.894 [INFO][5594] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="845fc777a4a36d84129ce3646095dce8a8df461e48644a8ccb9f1110f4764d64" Sep 5 00:32:41.898059 containerd[1446]: time="2025-09-05T00:32:41.897937566Z" level=info msg="TearDown network for sandbox \"845fc777a4a36d84129ce3646095dce8a8df461e48644a8ccb9f1110f4764d64\" successfully" Sep 5 00:32:41.898059 containerd[1446]: time="2025-09-05T00:32:41.897964126Z" level=info msg="StopPodSandbox for \"845fc777a4a36d84129ce3646095dce8a8df461e48644a8ccb9f1110f4764d64\" returns successfully" Sep 5 00:32:41.899482 containerd[1446]: time="2025-09-05T00:32:41.899043904Z" level=info msg="RemovePodSandbox for \"845fc777a4a36d84129ce3646095dce8a8df461e48644a8ccb9f1110f4764d64\"" Sep 5 00:32:41.899482 containerd[1446]: time="2025-09-05T00:32:41.899085343Z" level=info msg="Forcibly stopping sandbox \"845fc777a4a36d84129ce3646095dce8a8df461e48644a8ccb9f1110f4764d64\"" Sep 5 00:32:41.982735 containerd[1446]: time="2025-09-05T00:32:41.981473179Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:32:41.982735 containerd[1446]: time="2025-09-05T00:32:41.982272563Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208" Sep 5 00:32:41.983117 containerd[1446]: time="2025-09-05T00:32:41.983076067Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:32:41.985606 containerd[1446]: time="2025-09-05T00:32:41.985569616Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:32:41.986310 containerd[1446]: time="2025-09-05T00:32:41.986267401Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 1.330820306s" Sep 5 00:32:41.986385 containerd[1446]: time="2025-09-05T00:32:41.986316760Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\"" Sep 5 00:32:41.989867 containerd[1446]: 
time="2025-09-05T00:32:41.989825609Z" level=info msg="CreateContainer within sandbox \"41af5b36212e9d4860fdfdc0e6fda4dee38da12ad59831d8ecc1b1ff179e129a\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 5 00:32:42.002355 containerd[1446]: time="2025-09-05T00:32:42.002274355Z" level=info msg="CreateContainer within sandbox \"41af5b36212e9d4860fdfdc0e6fda4dee38da12ad59831d8ecc1b1ff179e129a\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"6050eb3d3c2d1506fc07e3cc89d4c7559e96a2fbd020680442507ac6167b1589\"" Sep 5 00:32:42.002943 containerd[1446]: time="2025-09-05T00:32:42.002910662Z" level=info msg="StartContainer for \"6050eb3d3c2d1506fc07e3cc89d4c7559e96a2fbd020680442507ac6167b1589\"" Sep 5 00:32:42.005092 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount914874027.mount: Deactivated successfully. Sep 5 00:32:42.034909 containerd[1446]: 2025-09-05 00:32:41.977 [WARNING][5621] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="845fc777a4a36d84129ce3646095dce8a8df461e48644a8ccb9f1110f4764d64" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7b94b9658b--jdnpd-eth0", GenerateName:"calico-apiserver-7b94b9658b-", Namespace:"calico-apiserver", SelfLink:"", UID:"b4104721-2685-4b1f-94b6-007263e183aa", ResourceVersion:"1168", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 31, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b94b9658b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"683350299dfce9222ad2e0621de04844a78eef45748eb23b6f481c60c42e3829", Pod:"calico-apiserver-7b94b9658b-jdnpd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali29b32730be7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:32:42.034909 containerd[1446]: 2025-09-05 00:32:41.977 [INFO][5621] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="845fc777a4a36d84129ce3646095dce8a8df461e48644a8ccb9f1110f4764d64" Sep 5 00:32:42.034909 containerd[1446]: 2025-09-05 00:32:41.977 [INFO][5621] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="845fc777a4a36d84129ce3646095dce8a8df461e48644a8ccb9f1110f4764d64" iface="eth0" netns="" Sep 5 00:32:42.034909 containerd[1446]: 2025-09-05 00:32:41.978 [INFO][5621] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="845fc777a4a36d84129ce3646095dce8a8df461e48644a8ccb9f1110f4764d64" Sep 5 00:32:42.034909 containerd[1446]: 2025-09-05 00:32:41.978 [INFO][5621] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="845fc777a4a36d84129ce3646095dce8a8df461e48644a8ccb9f1110f4764d64" Sep 5 00:32:42.034909 containerd[1446]: 2025-09-05 00:32:42.016 [INFO][5632] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="845fc777a4a36d84129ce3646095dce8a8df461e48644a8ccb9f1110f4764d64" HandleID="k8s-pod-network.845fc777a4a36d84129ce3646095dce8a8df461e48644a8ccb9f1110f4764d64" Workload="localhost-k8s-calico--apiserver--7b94b9658b--jdnpd-eth0" Sep 5 00:32:42.034909 containerd[1446]: 2025-09-05 00:32:42.016 [INFO][5632] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:32:42.034909 containerd[1446]: 2025-09-05 00:32:42.016 [INFO][5632] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:32:42.034909 containerd[1446]: 2025-09-05 00:32:42.028 [WARNING][5632] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="845fc777a4a36d84129ce3646095dce8a8df461e48644a8ccb9f1110f4764d64" HandleID="k8s-pod-network.845fc777a4a36d84129ce3646095dce8a8df461e48644a8ccb9f1110f4764d64" Workload="localhost-k8s-calico--apiserver--7b94b9658b--jdnpd-eth0" Sep 5 00:32:42.034909 containerd[1446]: 2025-09-05 00:32:42.028 [INFO][5632] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="845fc777a4a36d84129ce3646095dce8a8df461e48644a8ccb9f1110f4764d64" HandleID="k8s-pod-network.845fc777a4a36d84129ce3646095dce8a8df461e48644a8ccb9f1110f4764d64" Workload="localhost-k8s-calico--apiserver--7b94b9658b--jdnpd-eth0" Sep 5 00:32:42.034909 containerd[1446]: 2025-09-05 00:32:42.030 [INFO][5632] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:32:42.034909 containerd[1446]: 2025-09-05 00:32:42.032 [INFO][5621] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="845fc777a4a36d84129ce3646095dce8a8df461e48644a8ccb9f1110f4764d64" Sep 5 00:32:42.034909 containerd[1446]: time="2025-09-05T00:32:42.034376075Z" level=info msg="TearDown network for sandbox \"845fc777a4a36d84129ce3646095dce8a8df461e48644a8ccb9f1110f4764d64\" successfully" Sep 5 00:32:42.036064 systemd[1]: Started cri-containerd-6050eb3d3c2d1506fc07e3cc89d4c7559e96a2fbd020680442507ac6167b1589.scope - libcontainer container 6050eb3d3c2d1506fc07e3cc89d4c7559e96a2fbd020680442507ac6167b1589. Sep 5 00:32:42.048992 containerd[1446]: time="2025-09-05T00:32:42.048947305Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"845fc777a4a36d84129ce3646095dce8a8df461e48644a8ccb9f1110f4764d64\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 5 00:32:42.049114 containerd[1446]: time="2025-09-05T00:32:42.049021663Z" level=info msg="RemovePodSandbox \"845fc777a4a36d84129ce3646095dce8a8df461e48644a8ccb9f1110f4764d64\" returns successfully" Sep 5 00:32:42.049835 containerd[1446]: time="2025-09-05T00:32:42.049506773Z" level=info msg="StopPodSandbox for \"1d3c80f06bb8177d58c20e8b8e139e3958646547a553cd24b8be6f82dd917a79\"" Sep 5 00:32:42.061013 containerd[1446]: time="2025-09-05T00:32:42.060926546Z" level=info msg="StartContainer for \"6050eb3d3c2d1506fc07e3cc89d4c7559e96a2fbd020680442507ac6167b1589\" returns successfully" Sep 5 00:32:42.125610 containerd[1446]: 2025-09-05 00:32:42.088 [WARNING][5674] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1d3c80f06bb8177d58c20e8b8e139e3958646547a553cd24b8be6f82dd917a79" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7wt2m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7cca8ed9-cabd-4207-ad70-24ca48f24180", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 32, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"41af5b36212e9d4860fdfdc0e6fda4dee38da12ad59831d8ecc1b1ff179e129a", Pod:"csi-node-driver-7wt2m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic6e18199175", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:32:42.125610 containerd[1446]: 2025-09-05 00:32:42.088 [INFO][5674] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1d3c80f06bb8177d58c20e8b8e139e3958646547a553cd24b8be6f82dd917a79" Sep 5 00:32:42.125610 containerd[1446]: 2025-09-05 00:32:42.088 [INFO][5674] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="1d3c80f06bb8177d58c20e8b8e139e3958646547a553cd24b8be6f82dd917a79" iface="eth0" netns="" Sep 5 00:32:42.125610 containerd[1446]: 2025-09-05 00:32:42.088 [INFO][5674] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1d3c80f06bb8177d58c20e8b8e139e3958646547a553cd24b8be6f82dd917a79" Sep 5 00:32:42.125610 containerd[1446]: 2025-09-05 00:32:42.088 [INFO][5674] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1d3c80f06bb8177d58c20e8b8e139e3958646547a553cd24b8be6f82dd917a79" Sep 5 00:32:42.125610 containerd[1446]: 2025-09-05 00:32:42.108 [INFO][5693] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1d3c80f06bb8177d58c20e8b8e139e3958646547a553cd24b8be6f82dd917a79" HandleID="k8s-pod-network.1d3c80f06bb8177d58c20e8b8e139e3958646547a553cd24b8be6f82dd917a79" Workload="localhost-k8s-csi--node--driver--7wt2m-eth0" Sep 5 00:32:42.125610 containerd[1446]: 2025-09-05 00:32:42.108 [INFO][5693] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:32:42.125610 containerd[1446]: 2025-09-05 00:32:42.108 [INFO][5693] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:32:42.125610 containerd[1446]: 2025-09-05 00:32:42.116 [WARNING][5693] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1d3c80f06bb8177d58c20e8b8e139e3958646547a553cd24b8be6f82dd917a79" HandleID="k8s-pod-network.1d3c80f06bb8177d58c20e8b8e139e3958646547a553cd24b8be6f82dd917a79" Workload="localhost-k8s-csi--node--driver--7wt2m-eth0" Sep 5 00:32:42.125610 containerd[1446]: 2025-09-05 00:32:42.117 [INFO][5693] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1d3c80f06bb8177d58c20e8b8e139e3958646547a553cd24b8be6f82dd917a79" HandleID="k8s-pod-network.1d3c80f06bb8177d58c20e8b8e139e3958646547a553cd24b8be6f82dd917a79" Workload="localhost-k8s-csi--node--driver--7wt2m-eth0" Sep 5 00:32:42.125610 containerd[1446]: 2025-09-05 00:32:42.121 [INFO][5693] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:32:42.125610 containerd[1446]: 2025-09-05 00:32:42.123 [INFO][5674] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1d3c80f06bb8177d58c20e8b8e139e3958646547a553cd24b8be6f82dd917a79" Sep 5 00:32:42.125610 containerd[1446]: time="2025-09-05T00:32:42.125478339Z" level=info msg="TearDown network for sandbox \"1d3c80f06bb8177d58c20e8b8e139e3958646547a553cd24b8be6f82dd917a79\" successfully" Sep 5 00:32:42.125610 containerd[1446]: time="2025-09-05T00:32:42.125503858Z" level=info msg="StopPodSandbox for \"1d3c80f06bb8177d58c20e8b8e139e3958646547a553cd24b8be6f82dd917a79\" returns successfully" Sep 5 00:32:42.127997 containerd[1446]: time="2025-09-05T00:32:42.127576537Z" level=info msg="RemovePodSandbox for \"1d3c80f06bb8177d58c20e8b8e139e3958646547a553cd24b8be6f82dd917a79\"" Sep 5 00:32:42.127997 containerd[1446]: time="2025-09-05T00:32:42.127606976Z" level=info msg="Forcibly stopping sandbox \"1d3c80f06bb8177d58c20e8b8e139e3958646547a553cd24b8be6f82dd917a79\"" Sep 5 00:32:42.135220 systemd[1]: Started sshd@12-10.0.0.114:22-10.0.0.1:52570.service - OpenSSH per-connection server daemon (10.0.0.1:52570). Sep 5 00:32:42.184700 sshd[5709]: Accepted publickey for core from 10.0.0.1 port 52570 ssh2: RSA SHA256:AGG14PALIK3Z6LO0MtFBMGyAQes3xIvrcXGuLmkN81k Sep 5 00:32:42.188485 sshd[5709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:32:42.193258 systemd-logind[1422]: New session 13 of user core. 
Sep 5 00:32:42.197695 containerd[1446]: 2025-09-05 00:32:42.163 [WARNING][5714] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1d3c80f06bb8177d58c20e8b8e139e3958646547a553cd24b8be6f82dd917a79" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7wt2m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7cca8ed9-cabd-4207-ad70-24ca48f24180", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 32, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"41af5b36212e9d4860fdfdc0e6fda4dee38da12ad59831d8ecc1b1ff179e129a", Pod:"csi-node-driver-7wt2m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic6e18199175", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:32:42.197695 containerd[1446]: 2025-09-05 00:32:42.163 [INFO][5714] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1d3c80f06bb8177d58c20e8b8e139e3958646547a553cd24b8be6f82dd917a79" Sep 5 00:32:42.197695 containerd[1446]: 2025-09-05 00:32:42.163 [INFO][5714] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1d3c80f06bb8177d58c20e8b8e139e3958646547a553cd24b8be6f82dd917a79" iface="eth0" netns="" Sep 5 00:32:42.197695 containerd[1446]: 2025-09-05 00:32:42.163 [INFO][5714] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1d3c80f06bb8177d58c20e8b8e139e3958646547a553cd24b8be6f82dd917a79" Sep 5 00:32:42.197695 containerd[1446]: 2025-09-05 00:32:42.163 [INFO][5714] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1d3c80f06bb8177d58c20e8b8e139e3958646547a553cd24b8be6f82dd917a79" Sep 5 00:32:42.197695 containerd[1446]: 2025-09-05 00:32:42.181 [INFO][5723] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1d3c80f06bb8177d58c20e8b8e139e3958646547a553cd24b8be6f82dd917a79" HandleID="k8s-pod-network.1d3c80f06bb8177d58c20e8b8e139e3958646547a553cd24b8be6f82dd917a79" Workload="localhost-k8s-csi--node--driver--7wt2m-eth0" Sep 5 00:32:42.197695 containerd[1446]: 2025-09-05 00:32:42.181 [INFO][5723] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:32:42.197695 containerd[1446]: 2025-09-05 00:32:42.181 [INFO][5723] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:32:42.197695 containerd[1446]: 2025-09-05 00:32:42.191 [WARNING][5723] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1d3c80f06bb8177d58c20e8b8e139e3958646547a553cd24b8be6f82dd917a79" HandleID="k8s-pod-network.1d3c80f06bb8177d58c20e8b8e139e3958646547a553cd24b8be6f82dd917a79" Workload="localhost-k8s-csi--node--driver--7wt2m-eth0" Sep 5 00:32:42.197695 containerd[1446]: 2025-09-05 00:32:42.191 [INFO][5723] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1d3c80f06bb8177d58c20e8b8e139e3958646547a553cd24b8be6f82dd917a79" HandleID="k8s-pod-network.1d3c80f06bb8177d58c20e8b8e139e3958646547a553cd24b8be6f82dd917a79" Workload="localhost-k8s-csi--node--driver--7wt2m-eth0" Sep 5 00:32:42.197695 containerd[1446]: 2025-09-05 00:32:42.193 [INFO][5723] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:32:42.197695 containerd[1446]: 2025-09-05 00:32:42.195 [INFO][5714] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1d3c80f06bb8177d58c20e8b8e139e3958646547a553cd24b8be6f82dd917a79" Sep 5 00:32:42.198064 containerd[1446]: time="2025-09-05T00:32:42.197751898Z" level=info msg="TearDown network for sandbox \"1d3c80f06bb8177d58c20e8b8e139e3958646547a553cd24b8be6f82dd917a79\" successfully" Sep 5 00:32:42.199034 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 5 00:32:42.201924 containerd[1446]: time="2025-09-05T00:32:42.201833457Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1d3c80f06bb8177d58c20e8b8e139e3958646547a553cd24b8be6f82dd917a79\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 00:32:42.202012 containerd[1446]: time="2025-09-05T00:32:42.201990814Z" level=info msg="RemovePodSandbox \"1d3c80f06bb8177d58c20e8b8e139e3958646547a553cd24b8be6f82dd917a79\" returns successfully" Sep 5 00:32:42.202645 containerd[1446]: time="2025-09-05T00:32:42.202607401Z" level=info msg="StopPodSandbox for \"0893cac7dbee3bfd15934b019a67a4b3a9434d67ef96211579ade423a1e34526\"" Sep 5 00:32:42.279273 containerd[1446]: 2025-09-05 00:32:42.236 [WARNING][5741] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0893cac7dbee3bfd15934b019a67a4b3a9434d67ef96211579ade423a1e34526" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7b94b9658b--ngd47-eth0", GenerateName:"calico-apiserver-7b94b9658b-", Namespace:"calico-apiserver", SelfLink:"", UID:"90fd82e1-4b15-4224-8bf1-2c50a55938a1", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 31, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b94b9658b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"03e22a11c913ed9e2632d71523116f0e2d4d71a227394cd0dd7accc15d79fd53", Pod:"calico-apiserver-7b94b9658b-ngd47", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0812e51c646", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:32:42.279273 containerd[1446]: 2025-09-05 00:32:42.237 [INFO][5741] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0893cac7dbee3bfd15934b019a67a4b3a9434d67ef96211579ade423a1e34526" Sep 5 00:32:42.279273 containerd[1446]: 2025-09-05 00:32:42.237 [INFO][5741] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0893cac7dbee3bfd15934b019a67a4b3a9434d67ef96211579ade423a1e34526" iface="eth0" netns="" Sep 5 00:32:42.279273 containerd[1446]: 2025-09-05 00:32:42.237 [INFO][5741] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0893cac7dbee3bfd15934b019a67a4b3a9434d67ef96211579ade423a1e34526" Sep 5 00:32:42.279273 containerd[1446]: 2025-09-05 00:32:42.237 [INFO][5741] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0893cac7dbee3bfd15934b019a67a4b3a9434d67ef96211579ade423a1e34526" Sep 5 00:32:42.279273 containerd[1446]: 2025-09-05 00:32:42.258 [INFO][5750] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0893cac7dbee3bfd15934b019a67a4b3a9434d67ef96211579ade423a1e34526" HandleID="k8s-pod-network.0893cac7dbee3bfd15934b019a67a4b3a9434d67ef96211579ade423a1e34526" Workload="localhost-k8s-calico--apiserver--7b94b9658b--ngd47-eth0" Sep 5 00:32:42.279273 containerd[1446]: 2025-09-05 00:32:42.259 [INFO][5750] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:32:42.279273 containerd[1446]: 2025-09-05 00:32:42.259 [INFO][5750] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:32:42.279273 containerd[1446]: 2025-09-05 00:32:42.269 [WARNING][5750] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0893cac7dbee3bfd15934b019a67a4b3a9434d67ef96211579ade423a1e34526" HandleID="k8s-pod-network.0893cac7dbee3bfd15934b019a67a4b3a9434d67ef96211579ade423a1e34526" Workload="localhost-k8s-calico--apiserver--7b94b9658b--ngd47-eth0" Sep 5 00:32:42.279273 containerd[1446]: 2025-09-05 00:32:42.269 [INFO][5750] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0893cac7dbee3bfd15934b019a67a4b3a9434d67ef96211579ade423a1e34526" HandleID="k8s-pod-network.0893cac7dbee3bfd15934b019a67a4b3a9434d67ef96211579ade423a1e34526" Workload="localhost-k8s-calico--apiserver--7b94b9658b--ngd47-eth0" Sep 5 00:32:42.279273 containerd[1446]: 2025-09-05 00:32:42.271 [INFO][5750] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:32:42.279273 containerd[1446]: 2025-09-05 00:32:42.273 [INFO][5741] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0893cac7dbee3bfd15934b019a67a4b3a9434d67ef96211579ade423a1e34526" Sep 5 00:32:42.279733 containerd[1446]: time="2025-09-05T00:32:42.279312512Z" level=info msg="TearDown network for sandbox \"0893cac7dbee3bfd15934b019a67a4b3a9434d67ef96211579ade423a1e34526\" successfully" Sep 5 00:32:42.279733 containerd[1446]: time="2025-09-05T00:32:42.279336192Z" level=info msg="StopPodSandbox for \"0893cac7dbee3bfd15934b019a67a4b3a9434d67ef96211579ade423a1e34526\" returns successfully" Sep 5 00:32:42.279864 containerd[1446]: time="2025-09-05T00:32:42.279834782Z" level=info msg="RemovePodSandbox for \"0893cac7dbee3bfd15934b019a67a4b3a9434d67ef96211579ade423a1e34526\"" Sep 5 00:32:42.279944 containerd[1446]: time="2025-09-05T00:32:42.279885941Z" level=info msg="Forcibly stopping sandbox \"0893cac7dbee3bfd15934b019a67a4b3a9434d67ef96211579ade423a1e34526\"" Sep 5 00:32:42.327322 kubelet[2465]: I0905 00:32:42.327087 2465 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 5 00:32:42.337851 kubelet[2465]: I0905 00:32:42.337394 2465 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 5 00:32:42.355562 containerd[1446]: 2025-09-05 00:32:42.319 [WARNING][5771] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0893cac7dbee3bfd15934b019a67a4b3a9434d67ef96211579ade423a1e34526" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7b94b9658b--ngd47-eth0", GenerateName:"calico-apiserver-7b94b9658b-", Namespace:"calico-apiserver", SelfLink:"", UID:"90fd82e1-4b15-4224-8bf1-2c50a55938a1", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 31, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b94b9658b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"03e22a11c913ed9e2632d71523116f0e2d4d71a227394cd0dd7accc15d79fd53", Pod:"calico-apiserver-7b94b9658b-ngd47", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0812e51c646", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:32:42.355562 containerd[1446]: 2025-09-05 00:32:42.320 [INFO][5771] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0893cac7dbee3bfd15934b019a67a4b3a9434d67ef96211579ade423a1e34526" Sep 5 00:32:42.355562 containerd[1446]: 2025-09-05 00:32:42.320 [INFO][5771] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0893cac7dbee3bfd15934b019a67a4b3a9434d67ef96211579ade423a1e34526" iface="eth0" netns="" Sep 5 00:32:42.355562 containerd[1446]: 2025-09-05 00:32:42.320 [INFO][5771] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0893cac7dbee3bfd15934b019a67a4b3a9434d67ef96211579ade423a1e34526" Sep 5 00:32:42.355562 containerd[1446]: 2025-09-05 00:32:42.320 [INFO][5771] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0893cac7dbee3bfd15934b019a67a4b3a9434d67ef96211579ade423a1e34526" Sep 5 00:32:42.355562 containerd[1446]: 2025-09-05 00:32:42.339 [INFO][5783] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0893cac7dbee3bfd15934b019a67a4b3a9434d67ef96211579ade423a1e34526" HandleID="k8s-pod-network.0893cac7dbee3bfd15934b019a67a4b3a9434d67ef96211579ade423a1e34526" Workload="localhost-k8s-calico--apiserver--7b94b9658b--ngd47-eth0" Sep 5 00:32:42.355562 containerd[1446]: 2025-09-05 00:32:42.339 [INFO][5783] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:32:42.355562 containerd[1446]: 2025-09-05 00:32:42.340 [INFO][5783] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:32:42.355562 containerd[1446]: 2025-09-05 00:32:42.348 [WARNING][5783] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0893cac7dbee3bfd15934b019a67a4b3a9434d67ef96211579ade423a1e34526" HandleID="k8s-pod-network.0893cac7dbee3bfd15934b019a67a4b3a9434d67ef96211579ade423a1e34526" Workload="localhost-k8s-calico--apiserver--7b94b9658b--ngd47-eth0" Sep 5 00:32:42.355562 containerd[1446]: 2025-09-05 00:32:42.348 [INFO][5783] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0893cac7dbee3bfd15934b019a67a4b3a9434d67ef96211579ade423a1e34526" HandleID="k8s-pod-network.0893cac7dbee3bfd15934b019a67a4b3a9434d67ef96211579ade423a1e34526" Workload="localhost-k8s-calico--apiserver--7b94b9658b--ngd47-eth0" Sep 5 00:32:42.355562 containerd[1446]: 2025-09-05 00:32:42.350 [INFO][5783] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:32:42.355562 containerd[1446]: 2025-09-05 00:32:42.352 [INFO][5771] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0893cac7dbee3bfd15934b019a67a4b3a9434d67ef96211579ade423a1e34526" Sep 5 00:32:42.356078 containerd[1446]: time="2025-09-05T00:32:42.355607871Z" level=info msg="TearDown network for sandbox \"0893cac7dbee3bfd15934b019a67a4b3a9434d67ef96211579ade423a1e34526\" successfully" Sep 5 00:32:42.375114 containerd[1446]: time="2025-09-05T00:32:42.375009045Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0893cac7dbee3bfd15934b019a67a4b3a9434d67ef96211579ade423a1e34526\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 00:32:42.375114 containerd[1446]: time="2025-09-05T00:32:42.375111283Z" level=info msg="RemovePodSandbox \"0893cac7dbee3bfd15934b019a67a4b3a9434d67ef96211579ade423a1e34526\" returns successfully" Sep 5 00:32:42.376706 containerd[1446]: time="2025-09-05T00:32:42.376615133Z" level=info msg="StopPodSandbox for \"8313429ede94278ad5c5a1ab799089fc13106b9f22101dace780893eeab73a35\"" Sep 5 00:32:42.462486 containerd[1446]: 2025-09-05 00:32:42.421 [WARNING][5801] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8313429ede94278ad5c5a1ab799089fc13106b9f22101dace780893eeab73a35" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--869c68467b--v22j2-eth0", GenerateName:"calico-kube-controllers-869c68467b-", Namespace:"calico-system", SelfLink:"", UID:"19de0c52-9a52-436c-b9e4-3190b3fe0247", ResourceVersion:"1179", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 32, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"869c68467b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6a16c4cb13ab3cc0b92432402b5d3bb45d3a9b72d4a67b000a375cea7965b6e3", Pod:"calico-kube-controllers-869c68467b-v22j2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0b0aa941d75", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:32:42.462486 containerd[1446]: 2025-09-05 00:32:42.422 [INFO][5801] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8313429ede94278ad5c5a1ab799089fc13106b9f22101dace780893eeab73a35" Sep 5 00:32:42.462486 containerd[1446]: 2025-09-05 00:32:42.422 [INFO][5801] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8313429ede94278ad5c5a1ab799089fc13106b9f22101dace780893eeab73a35" iface="eth0" netns="" Sep 5 00:32:42.462486 containerd[1446]: 2025-09-05 00:32:42.422 [INFO][5801] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8313429ede94278ad5c5a1ab799089fc13106b9f22101dace780893eeab73a35" Sep 5 00:32:42.462486 containerd[1446]: 2025-09-05 00:32:42.422 [INFO][5801] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8313429ede94278ad5c5a1ab799089fc13106b9f22101dace780893eeab73a35" Sep 5 00:32:42.462486 containerd[1446]: 2025-09-05 00:32:42.447 [INFO][5810] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8313429ede94278ad5c5a1ab799089fc13106b9f22101dace780893eeab73a35" HandleID="k8s-pod-network.8313429ede94278ad5c5a1ab799089fc13106b9f22101dace780893eeab73a35" Workload="localhost-k8s-calico--kube--controllers--869c68467b--v22j2-eth0" Sep 5 00:32:42.462486 containerd[1446]: 2025-09-05 00:32:42.448 [INFO][5810] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:32:42.462486 containerd[1446]: 2025-09-05 00:32:42.448 [INFO][5810] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:32:42.462486 containerd[1446]: 2025-09-05 00:32:42.457 [WARNING][5810] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8313429ede94278ad5c5a1ab799089fc13106b9f22101dace780893eeab73a35" HandleID="k8s-pod-network.8313429ede94278ad5c5a1ab799089fc13106b9f22101dace780893eeab73a35" Workload="localhost-k8s-calico--kube--controllers--869c68467b--v22j2-eth0" Sep 5 00:32:42.462486 containerd[1446]: 2025-09-05 00:32:42.457 [INFO][5810] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8313429ede94278ad5c5a1ab799089fc13106b9f22101dace780893eeab73a35" HandleID="k8s-pod-network.8313429ede94278ad5c5a1ab799089fc13106b9f22101dace780893eeab73a35" Workload="localhost-k8s-calico--kube--controllers--869c68467b--v22j2-eth0" Sep 5 00:32:42.462486 containerd[1446]: 2025-09-05 00:32:42.459 [INFO][5810] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:32:42.462486 containerd[1446]: 2025-09-05 00:32:42.460 [INFO][5801] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8313429ede94278ad5c5a1ab799089fc13106b9f22101dace780893eeab73a35" Sep 5 00:32:42.462903 containerd[1446]: time="2025-09-05T00:32:42.462528140Z" level=info msg="TearDown network for sandbox \"8313429ede94278ad5c5a1ab799089fc13106b9f22101dace780893eeab73a35\" successfully" Sep 5 00:32:42.462903 containerd[1446]: time="2025-09-05T00:32:42.462553939Z" level=info msg="StopPodSandbox for \"8313429ede94278ad5c5a1ab799089fc13106b9f22101dace780893eeab73a35\" returns successfully" Sep 5 00:32:42.463085 containerd[1446]: time="2025-09-05T00:32:42.463042770Z" level=info msg="RemovePodSandbox for \"8313429ede94278ad5c5a1ab799089fc13106b9f22101dace780893eeab73a35\"" Sep 5 00:32:42.463085 containerd[1446]: time="2025-09-05T00:32:42.463081689Z" level=info msg="Forcibly stopping sandbox \"8313429ede94278ad5c5a1ab799089fc13106b9f22101dace780893eeab73a35\"" Sep 5 00:32:42.539392 containerd[1446]: 2025-09-05 00:32:42.501 [WARNING][5826] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8313429ede94278ad5c5a1ab799089fc13106b9f22101dace780893eeab73a35" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--869c68467b--v22j2-eth0", GenerateName:"calico-kube-controllers-869c68467b-", Namespace:"calico-system", SelfLink:"", UID:"19de0c52-9a52-436c-b9e4-3190b3fe0247", ResourceVersion:"1179", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 32, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"869c68467b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6a16c4cb13ab3cc0b92432402b5d3bb45d3a9b72d4a67b000a375cea7965b6e3", Pod:"calico-kube-controllers-869c68467b-v22j2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0b0aa941d75", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:32:42.539392 containerd[1446]: 2025-09-05 00:32:42.503 [INFO][5826] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8313429ede94278ad5c5a1ab799089fc13106b9f22101dace780893eeab73a35" Sep 5 00:32:42.539392 containerd[1446]: 2025-09-05 00:32:42.503 [INFO][5826] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8313429ede94278ad5c5a1ab799089fc13106b9f22101dace780893eeab73a35" iface="eth0" netns="" Sep 5 00:32:42.539392 containerd[1446]: 2025-09-05 00:32:42.503 [INFO][5826] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8313429ede94278ad5c5a1ab799089fc13106b9f22101dace780893eeab73a35" Sep 5 00:32:42.539392 containerd[1446]: 2025-09-05 00:32:42.503 [INFO][5826] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8313429ede94278ad5c5a1ab799089fc13106b9f22101dace780893eeab73a35" Sep 5 00:32:42.539392 containerd[1446]: 2025-09-05 00:32:42.523 [INFO][5835] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8313429ede94278ad5c5a1ab799089fc13106b9f22101dace780893eeab73a35" HandleID="k8s-pod-network.8313429ede94278ad5c5a1ab799089fc13106b9f22101dace780893eeab73a35" Workload="localhost-k8s-calico--kube--controllers--869c68467b--v22j2-eth0" Sep 5 00:32:42.539392 containerd[1446]: 2025-09-05 00:32:42.523 [INFO][5835] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:32:42.539392 containerd[1446]: 2025-09-05 00:32:42.523 [INFO][5835] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:32:42.539392 containerd[1446]: 2025-09-05 00:32:42.532 [WARNING][5835] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8313429ede94278ad5c5a1ab799089fc13106b9f22101dace780893eeab73a35" HandleID="k8s-pod-network.8313429ede94278ad5c5a1ab799089fc13106b9f22101dace780893eeab73a35" Workload="localhost-k8s-calico--kube--controllers--869c68467b--v22j2-eth0" Sep 5 00:32:42.539392 containerd[1446]: 2025-09-05 00:32:42.532 [INFO][5835] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8313429ede94278ad5c5a1ab799089fc13106b9f22101dace780893eeab73a35" HandleID="k8s-pod-network.8313429ede94278ad5c5a1ab799089fc13106b9f22101dace780893eeab73a35" Workload="localhost-k8s-calico--kube--controllers--869c68467b--v22j2-eth0" Sep 5 00:32:42.539392 containerd[1446]: 2025-09-05 00:32:42.534 [INFO][5835] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:32:42.539392 containerd[1446]: 2025-09-05 00:32:42.537 [INFO][5826] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8313429ede94278ad5c5a1ab799089fc13106b9f22101dace780893eeab73a35" Sep 5 00:32:42.539848 containerd[1446]: time="2025-09-05T00:32:42.539454126Z" level=info msg="TearDown network for sandbox \"8313429ede94278ad5c5a1ab799089fc13106b9f22101dace780893eeab73a35\" successfully" Sep 5 00:32:42.542809 containerd[1446]: time="2025-09-05T00:32:42.542771980Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8313429ede94278ad5c5a1ab799089fc13106b9f22101dace780893eeab73a35\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 00:32:42.542942 containerd[1446]: time="2025-09-05T00:32:42.542845979Z" level=info msg="RemovePodSandbox \"8313429ede94278ad5c5a1ab799089fc13106b9f22101dace780893eeab73a35\" returns successfully" Sep 5 00:32:42.543198 containerd[1446]: time="2025-09-05T00:32:42.543176252Z" level=info msg="StopPodSandbox for \"1003f23274bc0b838f0f917a029aba5c6d871ebc11763e025681afdab086d9aa\"" Sep 5 00:32:42.550230 sshd[5709]: pam_unix(sshd:session): session closed for user core Sep 5 00:32:42.559380 systemd[1]: sshd@12-10.0.0.114:22-10.0.0.1:52570.service: Deactivated successfully. Sep 5 00:32:42.561125 systemd[1]: session-13.scope: Deactivated successfully. Sep 5 00:32:42.564059 systemd-logind[1422]: Session 13 logged out. Waiting for processes to exit. Sep 5 00:32:42.575210 systemd[1]: Started sshd@13-10.0.0.114:22-10.0.0.1:52584.service - OpenSSH per-connection server daemon (10.0.0.1:52584). Sep 5 00:32:42.585166 systemd-logind[1422]: Removed session 13. Sep 5 00:32:42.613148 sshd[5863]: Accepted publickey for core from 10.0.0.1 port 52584 ssh2: RSA SHA256:AGG14PALIK3Z6LO0MtFBMGyAQes3xIvrcXGuLmkN81k Sep 5 00:32:42.615363 sshd[5863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:32:42.621440 systemd-logind[1422]: New session 14 of user core. Sep 5 00:32:42.628037 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 5 00:32:42.645002 containerd[1446]: 2025-09-05 00:32:42.604 [WARNING][5854] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1003f23274bc0b838f0f917a029aba5c6d871ebc11763e025681afdab086d9aa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--c8wqh-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"192e7eea-9c31-4354-894f-59feed59071c", ResourceVersion:"1028", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 31, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f5348d4e2605d36e5a2c69547d4619fb8ee00cdbd1b6f9a9906cb0e1265f1e4c", Pod:"coredns-7c65d6cfc9-c8wqh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali899fb435dd9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:32:42.645002 containerd[1446]: 2025-09-05 00:32:42.604 [INFO][5854] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1003f23274bc0b838f0f917a029aba5c6d871ebc11763e025681afdab086d9aa" Sep 5 00:32:42.645002 containerd[1446]: 2025-09-05 00:32:42.604 [INFO][5854] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1003f23274bc0b838f0f917a029aba5c6d871ebc11763e025681afdab086d9aa" iface="eth0" netns="" Sep 5 00:32:42.645002 containerd[1446]: 2025-09-05 00:32:42.604 [INFO][5854] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1003f23274bc0b838f0f917a029aba5c6d871ebc11763e025681afdab086d9aa" Sep 5 00:32:42.645002 containerd[1446]: 2025-09-05 00:32:42.604 [INFO][5854] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1003f23274bc0b838f0f917a029aba5c6d871ebc11763e025681afdab086d9aa" Sep 5 00:32:42.645002 containerd[1446]: 2025-09-05 00:32:42.627 [INFO][5867] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1003f23274bc0b838f0f917a029aba5c6d871ebc11763e025681afdab086d9aa" HandleID="k8s-pod-network.1003f23274bc0b838f0f917a029aba5c6d871ebc11763e025681afdab086d9aa" Workload="localhost-k8s-coredns--7c65d6cfc9--c8wqh-eth0" Sep 5 00:32:42.645002 containerd[1446]: 2025-09-05 00:32:42.627 [INFO][5867] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:32:42.645002 containerd[1446]: 2025-09-05 00:32:42.627 [INFO][5867] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 00:32:42.645002 containerd[1446]: 2025-09-05 00:32:42.639 [WARNING][5867] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1003f23274bc0b838f0f917a029aba5c6d871ebc11763e025681afdab086d9aa" HandleID="k8s-pod-network.1003f23274bc0b838f0f917a029aba5c6d871ebc11763e025681afdab086d9aa" Workload="localhost-k8s-coredns--7c65d6cfc9--c8wqh-eth0"
Sep 5 00:32:42.645002 containerd[1446]: 2025-09-05 00:32:42.639 [INFO][5867] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1003f23274bc0b838f0f917a029aba5c6d871ebc11763e025681afdab086d9aa" HandleID="k8s-pod-network.1003f23274bc0b838f0f917a029aba5c6d871ebc11763e025681afdab086d9aa" Workload="localhost-k8s-coredns--7c65d6cfc9--c8wqh-eth0"
Sep 5 00:32:42.645002 containerd[1446]: 2025-09-05 00:32:42.641 [INFO][5867] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 5 00:32:42.645002 containerd[1446]: 2025-09-05 00:32:42.643 [INFO][5854] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1003f23274bc0b838f0f917a029aba5c6d871ebc11763e025681afdab086d9aa"
Sep 5 00:32:42.645417 containerd[1446]: time="2025-09-05T00:32:42.645044662Z" level=info msg="TearDown network for sandbox \"1003f23274bc0b838f0f917a029aba5c6d871ebc11763e025681afdab086d9aa\" successfully"
Sep 5 00:32:42.645417 containerd[1446]: time="2025-09-05T00:32:42.645069221Z" level=info msg="StopPodSandbox for \"1003f23274bc0b838f0f917a029aba5c6d871ebc11763e025681afdab086d9aa\" returns successfully"
Sep 5 00:32:42.645683 containerd[1446]: time="2025-09-05T00:32:42.645658209Z" level=info msg="RemovePodSandbox for \"1003f23274bc0b838f0f917a029aba5c6d871ebc11763e025681afdab086d9aa\""
Sep 5 00:32:42.645739 containerd[1446]: time="2025-09-05T00:32:42.645691929Z" level=info msg="Forcibly stopping sandbox \"1003f23274bc0b838f0f917a029aba5c6d871ebc11763e025681afdab086d9aa\""
Sep 5 00:32:42.754217 containerd[1446]: 2025-09-05 00:32:42.688 [WARNING][5888] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1003f23274bc0b838f0f917a029aba5c6d871ebc11763e025681afdab086d9aa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--c8wqh-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"192e7eea-9c31-4354-894f-59feed59071c", ResourceVersion:"1028", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 31, 48, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f5348d4e2605d36e5a2c69547d4619fb8ee00cdbd1b6f9a9906cb0e1265f1e4c", Pod:"coredns-7c65d6cfc9-c8wqh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali899fb435dd9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 5 00:32:42.754217 containerd[1446]: 2025-09-05 00:32:42.688 [INFO][5888] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1003f23274bc0b838f0f917a029aba5c6d871ebc11763e025681afdab086d9aa"
Sep 5 00:32:42.754217 containerd[1446]: 2025-09-05 00:32:42.688 [INFO][5888] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1003f23274bc0b838f0f917a029aba5c6d871ebc11763e025681afdab086d9aa" iface="eth0" netns=""
Sep 5 00:32:42.754217 containerd[1446]: 2025-09-05 00:32:42.688 [INFO][5888] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1003f23274bc0b838f0f917a029aba5c6d871ebc11763e025681afdab086d9aa"
Sep 5 00:32:42.754217 containerd[1446]: 2025-09-05 00:32:42.688 [INFO][5888] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1003f23274bc0b838f0f917a029aba5c6d871ebc11763e025681afdab086d9aa"
Sep 5 00:32:42.754217 containerd[1446]: 2025-09-05 00:32:42.712 [INFO][5919] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1003f23274bc0b838f0f917a029aba5c6d871ebc11763e025681afdab086d9aa" HandleID="k8s-pod-network.1003f23274bc0b838f0f917a029aba5c6d871ebc11763e025681afdab086d9aa" Workload="localhost-k8s-coredns--7c65d6cfc9--c8wqh-eth0"
Sep 5 00:32:42.754217 containerd[1446]: 2025-09-05 00:32:42.714 [INFO][5919] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 5 00:32:42.754217 containerd[1446]: 2025-09-05 00:32:42.714 [INFO][5919] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 5 00:32:42.754217 containerd[1446]: 2025-09-05 00:32:42.725 [WARNING][5919] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1003f23274bc0b838f0f917a029aba5c6d871ebc11763e025681afdab086d9aa" HandleID="k8s-pod-network.1003f23274bc0b838f0f917a029aba5c6d871ebc11763e025681afdab086d9aa" Workload="localhost-k8s-coredns--7c65d6cfc9--c8wqh-eth0"
Sep 5 00:32:42.754217 containerd[1446]: 2025-09-05 00:32:42.725 [INFO][5919] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1003f23274bc0b838f0f917a029aba5c6d871ebc11763e025681afdab086d9aa" HandleID="k8s-pod-network.1003f23274bc0b838f0f917a029aba5c6d871ebc11763e025681afdab086d9aa" Workload="localhost-k8s-coredns--7c65d6cfc9--c8wqh-eth0"
Sep 5 00:32:42.754217 containerd[1446]: 2025-09-05 00:32:42.728 [INFO][5919] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 5 00:32:42.754217 containerd[1446]: 2025-09-05 00:32:42.730 [INFO][5888] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1003f23274bc0b838f0f917a029aba5c6d871ebc11763e025681afdab086d9aa"
Sep 5 00:32:42.754905 containerd[1446]: time="2025-09-05T00:32:42.754277804Z" level=info msg="TearDown network for sandbox \"1003f23274bc0b838f0f917a029aba5c6d871ebc11763e025681afdab086d9aa\" successfully"
Sep 5 00:32:42.761441 containerd[1446]: time="2025-09-05T00:32:42.761387502Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1003f23274bc0b838f0f917a029aba5c6d871ebc11763e025681afdab086d9aa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 5 00:32:42.761550 containerd[1446]: time="2025-09-05T00:32:42.761473541Z" level=info msg="RemovePodSandbox \"1003f23274bc0b838f0f917a029aba5c6d871ebc11763e025681afdab086d9aa\" returns successfully"
Sep 5 00:32:42.923187 sshd[5863]: pam_unix(sshd:session): session closed for user core
Sep 5 00:32:42.933451 systemd[1]: sshd@13-10.0.0.114:22-10.0.0.1:52584.service: Deactivated successfully.
Sep 5 00:32:42.935476 systemd[1]: session-14.scope: Deactivated successfully.
Sep 5 00:32:42.936972 systemd-logind[1422]: Session 14 logged out. Waiting for processes to exit.
Sep 5 00:32:42.938362 systemd[1]: Started sshd@14-10.0.0.114:22-10.0.0.1:52590.service - OpenSSH per-connection server daemon (10.0.0.1:52590).
Sep 5 00:32:42.939086 systemd-logind[1422]: Removed session 14.
Sep 5 00:32:42.980202 sshd[5935]: Accepted publickey for core from 10.0.0.1 port 52590 ssh2: RSA SHA256:AGG14PALIK3Z6LO0MtFBMGyAQes3xIvrcXGuLmkN81k
Sep 5 00:32:42.981514 sshd[5935]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:32:42.985549 systemd-logind[1422]: New session 15 of user core.
Sep 5 00:32:42.993010 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 5 00:32:43.712234 kubelet[2465]: I0905 00:32:43.712191 2465 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 5 00:32:43.736603 kubelet[2465]: I0905 00:32:43.735531 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-7wt2m" podStartSLOduration=25.558443658 podStartE2EDuration="43.7355128s" podCreationTimestamp="2025-09-05 00:32:00 +0000 UTC" firstStartedPulling="2025-09-05 00:32:23.810389915 +0000 UTC m=+42.681440169" lastFinishedPulling="2025-09-05 00:32:41.987459057 +0000 UTC m=+60.858509311" observedRunningTime="2025-09-05 00:32:42.56789276 +0000 UTC m=+61.438943014" watchObservedRunningTime="2025-09-05 00:32:43.7355128 +0000 UTC m=+62.606563054"
Sep 5 00:32:44.355763 sshd[5935]: pam_unix(sshd:session): session closed for user core
Sep 5 00:32:44.369589 systemd[1]: sshd@14-10.0.0.114:22-10.0.0.1:52590.service: Deactivated successfully.
Sep 5 00:32:44.371710 systemd[1]: session-15.scope: Deactivated successfully.
Sep 5 00:32:44.373145 systemd-logind[1422]: Session 15 logged out. Waiting for processes to exit.
Sep 5 00:32:44.383226 systemd[1]: Started sshd@15-10.0.0.114:22-10.0.0.1:52596.service - OpenSSH per-connection server daemon (10.0.0.1:52596).
Sep 5 00:32:44.384282 systemd-logind[1422]: Removed session 15.
Sep 5 00:32:44.422118 sshd[5956]: Accepted publickey for core from 10.0.0.1 port 52596 ssh2: RSA SHA256:AGG14PALIK3Z6LO0MtFBMGyAQes3xIvrcXGuLmkN81k
Sep 5 00:32:44.423392 sshd[5956]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:32:44.427778 systemd-logind[1422]: New session 16 of user core.
Sep 5 00:32:44.437034 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 5 00:32:44.855987 sshd[5956]: pam_unix(sshd:session): session closed for user core
Sep 5 00:32:44.871144 systemd[1]: sshd@15-10.0.0.114:22-10.0.0.1:52596.service: Deactivated successfully.
Sep 5 00:32:44.872948 systemd[1]: session-16.scope: Deactivated successfully.
Sep 5 00:32:44.876200 systemd-logind[1422]: Session 16 logged out. Waiting for processes to exit.
Sep 5 00:32:44.880193 systemd[1]: Started sshd@16-10.0.0.114:22-10.0.0.1:52600.service - OpenSSH per-connection server daemon (10.0.0.1:52600).
Sep 5 00:32:44.881677 systemd-logind[1422]: Removed session 16.
Sep 5 00:32:44.919081 sshd[5969]: Accepted publickey for core from 10.0.0.1 port 52600 ssh2: RSA SHA256:AGG14PALIK3Z6LO0MtFBMGyAQes3xIvrcXGuLmkN81k
Sep 5 00:32:44.920645 sshd[5969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:32:44.924750 systemd-logind[1422]: New session 17 of user core.
Sep 5 00:32:44.935037 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 5 00:32:45.055248 sshd[5969]: pam_unix(sshd:session): session closed for user core
Sep 5 00:32:45.058644 systemd[1]: sshd@16-10.0.0.114:22-10.0.0.1:52600.service: Deactivated successfully.
Sep 5 00:32:45.060566 systemd[1]: session-17.scope: Deactivated successfully.
Sep 5 00:32:45.062571 systemd-logind[1422]: Session 17 logged out. Waiting for processes to exit.
Sep 5 00:32:45.063407 systemd-logind[1422]: Removed session 17.
Sep 5 00:32:50.074927 systemd[1]: Started sshd@17-10.0.0.114:22-10.0.0.1:48258.service - OpenSSH per-connection server daemon (10.0.0.1:48258).
Sep 5 00:32:50.127646 sshd[5992]: Accepted publickey for core from 10.0.0.1 port 48258 ssh2: RSA SHA256:AGG14PALIK3Z6LO0MtFBMGyAQes3xIvrcXGuLmkN81k
Sep 5 00:32:50.129147 sshd[5992]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:32:50.132804 systemd-logind[1422]: New session 18 of user core.
Sep 5 00:32:50.140026 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 5 00:32:50.256934 sshd[5992]: pam_unix(sshd:session): session closed for user core
Sep 5 00:32:50.262772 systemd[1]: sshd@17-10.0.0.114:22-10.0.0.1:48258.service: Deactivated successfully.
Sep 5 00:32:50.263664 systemd-logind[1422]: Session 18 logged out. Waiting for processes to exit.
Sep 5 00:32:50.265519 systemd[1]: session-18.scope: Deactivated successfully.
Sep 5 00:32:50.266343 systemd-logind[1422]: Removed session 18.
Sep 5 00:32:53.233231 kubelet[2465]: E0905 00:32:53.230887 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:32:53.233231 kubelet[2465]: E0905 00:32:53.231188 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:32:55.267734 systemd[1]: Started sshd@18-10.0.0.114:22-10.0.0.1:48260.service - OpenSSH per-connection server daemon (10.0.0.1:48260).
Sep 5 00:32:55.310666 sshd[6031]: Accepted publickey for core from 10.0.0.1 port 48260 ssh2: RSA SHA256:AGG14PALIK3Z6LO0MtFBMGyAQes3xIvrcXGuLmkN81k
Sep 5 00:32:55.311950 sshd[6031]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:32:55.315978 systemd-logind[1422]: New session 19 of user core.
Sep 5 00:32:55.327019 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 5 00:32:55.458245 sshd[6031]: pam_unix(sshd:session): session closed for user core
Sep 5 00:32:55.460921 systemd[1]: sshd@18-10.0.0.114:22-10.0.0.1:48260.service: Deactivated successfully.
Sep 5 00:32:55.463086 systemd[1]: session-19.scope: Deactivated successfully.
Sep 5 00:32:55.464384 systemd-logind[1422]: Session 19 logged out. Waiting for processes to exit.
Sep 5 00:32:55.465414 systemd-logind[1422]: Removed session 19.
Sep 5 00:32:57.232692 kubelet[2465]: E0905 00:32:57.232456 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:32:57.232692 kubelet[2465]: E0905 00:32:57.232614 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:33:00.473633 systemd[1]: Started sshd@19-10.0.0.114:22-10.0.0.1:41272.service - OpenSSH per-connection server daemon (10.0.0.1:41272).
Sep 5 00:33:00.516631 sshd[6051]: Accepted publickey for core from 10.0.0.1 port 41272 ssh2: RSA SHA256:AGG14PALIK3Z6LO0MtFBMGyAQes3xIvrcXGuLmkN81k
Sep 5 00:33:00.518189 sshd[6051]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:33:00.523059 systemd-logind[1422]: New session 20 of user core.
Sep 5 00:33:00.530040 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 5 00:33:00.671588 sshd[6051]: pam_unix(sshd:session): session closed for user core
Sep 5 00:33:00.677356 systemd[1]: sshd@19-10.0.0.114:22-10.0.0.1:41272.service: Deactivated successfully.
Sep 5 00:33:00.678922 systemd[1]: session-20.scope: Deactivated successfully.
Sep 5 00:33:00.680414 systemd-logind[1422]: Session 20 logged out. Waiting for processes to exit.
Sep 5 00:33:00.682486 systemd-logind[1422]: Removed session 20.