Sep 13 00:15:39.851828 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 13 00:15:39.851850 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Sep 12 22:36:20 -00 2025
Sep 13 00:15:39.851861 kernel: KASLR enabled
Sep 13 00:15:39.851867 kernel: efi: EFI v2.7 by EDK II
Sep 13 00:15:39.851874 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Sep 13 00:15:39.851880 kernel: random: crng init done
Sep 13 00:15:39.851887 kernel: ACPI: Early table checksum verification disabled
Sep 13 00:15:39.851894 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Sep 13 00:15:39.851900 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 13 00:15:39.851909 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:15:39.851915 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:15:39.851922 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:15:39.851929 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:15:39.851935 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:15:39.851943 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:15:39.851952 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:15:39.851959 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:15:39.851966 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:15:39.851973 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 13 00:15:39.851980 kernel: NUMA: Failed to initialise from firmware
Sep 13 00:15:39.851987 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 13 00:15:39.851994 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Sep 13 00:15:39.852001 kernel: Zone ranges:
Sep 13 00:15:39.852008 kernel:   DMA      [mem 0x0000000040000000-0x00000000dcffffff]
Sep 13 00:15:39.852014 kernel:   DMA32    empty
Sep 13 00:15:39.852023 kernel:   Normal   empty
Sep 13 00:15:39.852029 kernel: Movable zone start for each node
Sep 13 00:15:39.852036 kernel: Early memory node ranges
Sep 13 00:15:39.852043 kernel:   node   0: [mem 0x0000000040000000-0x00000000d976ffff]
Sep 13 00:15:39.852051 kernel:   node   0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Sep 13 00:15:39.852057 kernel:   node   0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Sep 13 00:15:39.852064 kernel:   node   0: [mem 0x00000000dce20000-0x00000000dceaffff]
Sep 13 00:15:39.852071 kernel:   node   0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Sep 13 00:15:39.852078 kernel:   node   0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Sep 13 00:15:39.852085 kernel:   node   0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Sep 13 00:15:39.852092 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 13 00:15:39.852099 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 13 00:15:39.852107 kernel: psci: probing for conduit method from ACPI.
Sep 13 00:15:39.852114 kernel: psci: PSCIv1.1 detected in firmware.
Sep 13 00:15:39.852120 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 13 00:15:39.852129 kernel: psci: Trusted OS migration not required
Sep 13 00:15:39.852137 kernel: psci: SMC Calling Convention v1.1
Sep 13 00:15:39.852144 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 13 00:15:39.852153 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Sep 13 00:15:39.852160 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Sep 13 00:15:39.852167 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 13 00:15:39.852175 kernel: Detected PIPT I-cache on CPU0
Sep 13 00:15:39.852181 kernel: CPU features: detected: GIC system register CPU interface
Sep 13 00:15:39.852188 kernel: CPU features: detected: Hardware dirty bit management
Sep 13 00:15:39.852195 kernel: CPU features: detected: Spectre-v4
Sep 13 00:15:39.852203 kernel: CPU features: detected: Spectre-BHB
Sep 13 00:15:39.852210 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 13 00:15:39.852217 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 13 00:15:39.852226 kernel: CPU features: detected: ARM erratum 1418040
Sep 13 00:15:39.852233 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 13 00:15:39.852239 kernel: alternatives: applying boot alternatives
Sep 13 00:15:39.852247 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=e1b46f3c9e154636c32f6cde6e746a00a6b37ca7432cb4e16d172c05f584a8c9
Sep 13 00:15:39.852255 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 13 00:15:39.852262 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 13 00:15:39.852269 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 13 00:15:39.852276 kernel: Fallback order for Node 0: 0
Sep 13 00:15:39.852291 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 633024
Sep 13 00:15:39.852299 kernel: Policy zone: DMA
Sep 13 00:15:39.852306 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 13 00:15:39.852315 kernel: software IO TLB: area num 4.
Sep 13 00:15:39.852322 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Sep 13 00:15:39.852330 kernel: Memory: 2386340K/2572288K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39488K init, 897K bss, 185948K reserved, 0K cma-reserved)
Sep 13 00:15:39.852338 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 13 00:15:39.852346 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 13 00:15:39.852354 kernel: rcu: RCU event tracing is enabled.
Sep 13 00:15:39.852361 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 13 00:15:39.852383 kernel: Trampoline variant of Tasks RCU enabled.
Sep 13 00:15:39.852390 kernel: Tracing variant of Tasks RCU enabled.
Sep 13 00:15:39.852397 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 13 00:15:39.852405 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 13 00:15:39.852414 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 13 00:15:39.852430 kernel: GICv3: 256 SPIs implemented
Sep 13 00:15:39.852438 kernel: GICv3: 0 Extended SPIs implemented
Sep 13 00:15:39.852448 kernel: Root IRQ handler: gic_handle_irq
Sep 13 00:15:39.852457 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 13 00:15:39.852464 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 13 00:15:39.852471 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 13 00:15:39.852479 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Sep 13 00:15:39.852486 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Sep 13 00:15:39.852494 kernel: GICv3: using LPI property table @0x00000000400f0000
Sep 13 00:15:39.852521 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Sep 13 00:15:39.852528 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 13 00:15:39.852539 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 13 00:15:39.852546 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 13 00:15:39.852554 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 13 00:15:39.852561 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 13 00:15:39.852569 kernel: arm-pv: using stolen time PV
Sep 13 00:15:39.852576 kernel: Console: colour dummy device 80x25
Sep 13 00:15:39.852584 kernel: ACPI: Core revision 20230628
Sep 13 00:15:39.852592 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 13 00:15:39.852599 kernel: pid_max: default: 32768 minimum: 301
Sep 13 00:15:39.852607 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 13 00:15:39.852617 kernel: landlock: Up and running.
Sep 13 00:15:39.852624 kernel: SELinux:  Initializing.
Sep 13 00:15:39.852631 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 13 00:15:39.852638 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 13 00:15:39.852645 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 13 00:15:39.852653 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 13 00:15:39.852660 kernel: rcu: Hierarchical SRCU implementation.
Sep 13 00:15:39.852667 kernel: rcu: Max phase no-delay instances is 400.
Sep 13 00:15:39.852674 kernel: Platform MSI: ITS@0x8080000 domain created
Sep 13 00:15:39.852683 kernel: PCI/MSI: ITS@0x8080000 domain created
Sep 13 00:15:39.852690 kernel: Remapping and enabling EFI services.
Sep 13 00:15:39.852697 kernel: smp: Bringing up secondary CPUs ...
Sep 13 00:15:39.852704 kernel: Detected PIPT I-cache on CPU1
Sep 13 00:15:39.852711 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 13 00:15:39.852718 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Sep 13 00:15:39.852725 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 13 00:15:39.852733 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 13 00:15:39.852740 kernel: Detected PIPT I-cache on CPU2
Sep 13 00:15:39.852747 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 13 00:15:39.852756 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Sep 13 00:15:39.852763 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 13 00:15:39.852775 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 13 00:15:39.852784 kernel: Detected PIPT I-cache on CPU3
Sep 13 00:15:39.852791 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 13 00:15:39.852799 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Sep 13 00:15:39.852806 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 13 00:15:39.852813 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 13 00:15:39.852821 kernel: smp: Brought up 1 node, 4 CPUs
Sep 13 00:15:39.852830 kernel: SMP: Total of 4 processors activated.
Sep 13 00:15:39.852837 kernel: CPU features: detected: 32-bit EL0 Support
Sep 13 00:15:39.852845 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 13 00:15:39.852852 kernel: CPU features: detected: Common not Private translations
Sep 13 00:15:39.852860 kernel: CPU features: detected: CRC32 instructions
Sep 13 00:15:39.852867 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 13 00:15:39.852874 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 13 00:15:39.852882 kernel: CPU features: detected: LSE atomic instructions
Sep 13 00:15:39.852891 kernel: CPU features: detected: Privileged Access Never
Sep 13 00:15:39.852898 kernel: CPU features: detected: RAS Extension Support
Sep 13 00:15:39.852906 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 13 00:15:39.852913 kernel: CPU: All CPU(s) started at EL1
Sep 13 00:15:39.852921 kernel: alternatives: applying system-wide alternatives
Sep 13 00:15:39.852928 kernel: devtmpfs: initialized
Sep 13 00:15:39.852936 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 13 00:15:39.852943 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 13 00:15:39.852951 kernel: pinctrl core: initialized pinctrl subsystem
Sep 13 00:15:39.852960 kernel: SMBIOS 3.0.0 present.
Sep 13 00:15:39.852967 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Sep 13 00:15:39.852975 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 13 00:15:39.852982 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 13 00:15:39.852989 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 13 00:15:39.852997 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 13 00:15:39.853004 kernel: audit: initializing netlink subsys (disabled)
Sep 13 00:15:39.853018 kernel: audit: type=2000 audit(0.022:1): state=initialized audit_enabled=0 res=1
Sep 13 00:15:39.853025 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 13 00:15:39.853034 kernel: cpuidle: using governor menu
Sep 13 00:15:39.853042 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 13 00:15:39.853049 kernel: ASID allocator initialised with 32768 entries
Sep 13 00:15:39.853057 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 13 00:15:39.853064 kernel: Serial: AMBA PL011 UART driver
Sep 13 00:15:39.853072 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 13 00:15:39.853079 kernel: Modules: 0 pages in range for non-PLT usage
Sep 13 00:15:39.853086 kernel: Modules: 508992 pages in range for PLT usage
Sep 13 00:15:39.853094 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 13 00:15:39.853103 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 13 00:15:39.853110 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 13 00:15:39.853118 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 13 00:15:39.853125 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 13 00:15:39.853133 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 13 00:15:39.853140 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 13 00:15:39.853148 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 13 00:15:39.853155 kernel: ACPI: Added _OSI(Module Device)
Sep 13 00:15:39.853162 kernel: ACPI: Added _OSI(Processor Device)
Sep 13 00:15:39.853171 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 13 00:15:39.853179 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 13 00:15:39.853186 kernel: ACPI: Interpreter enabled
Sep 13 00:15:39.853193 kernel: ACPI: Using GIC for interrupt routing
Sep 13 00:15:39.853201 kernel: ACPI: MCFG table detected, 1 entries
Sep 13 00:15:39.853208 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 13 00:15:39.853216 kernel: printk: console [ttyAMA0] enabled
Sep 13 00:15:39.853223 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 13 00:15:39.853373 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 13 00:15:39.853473 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 13 00:15:39.853544 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 13 00:15:39.853613 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 13 00:15:39.853684 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 13 00:15:39.853694 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 13 00:15:39.853702 kernel: PCI host bridge to bus 0000:00
Sep 13 00:15:39.853778 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 13 00:15:39.853843 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 13 00:15:39.853905 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 13 00:15:39.853968 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 13 00:15:39.854055 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Sep 13 00:15:39.854136 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Sep 13 00:15:39.854209 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Sep 13 00:15:39.854289 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Sep 13 00:15:39.854361 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 13 00:15:39.854444 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 13 00:15:39.854516 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Sep 13 00:15:39.854585 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Sep 13 00:15:39.854648 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 13 00:15:39.854710 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 13 00:15:39.854776 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 13 00:15:39.854786 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 13 00:15:39.854794 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 13 00:15:39.854801 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 13 00:15:39.854809 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 13 00:15:39.854816 kernel: iommu: Default domain type: Translated
Sep 13 00:15:39.854823 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 13 00:15:39.854831 kernel: efivars: Registered efivars operations
Sep 13 00:15:39.854840 kernel: vgaarb: loaded
Sep 13 00:15:39.854848 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 13 00:15:39.854855 kernel: VFS: Disk quotas dquot_6.6.0
Sep 13 00:15:39.854863 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 13 00:15:39.854870 kernel: pnp: PnP ACPI init
Sep 13 00:15:39.854949 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 13 00:15:39.854960 kernel: pnp: PnP ACPI: found 1 devices
Sep 13 00:15:39.854968 kernel: NET: Registered PF_INET protocol family
Sep 13 00:15:39.854975 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 13 00:15:39.854985 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 13 00:15:39.854992 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 13 00:15:39.855000 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 13 00:15:39.855007 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 13 00:15:39.855015 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 13 00:15:39.855022 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 13 00:15:39.855030 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 13 00:15:39.855037 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 13 00:15:39.855046 kernel: PCI: CLS 0 bytes, default 64
Sep 13 00:15:39.855054 kernel: kvm [1]: HYP mode not available
Sep 13 00:15:39.855061 kernel: Initialise system trusted keyrings
Sep 13 00:15:39.855069 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 13 00:15:39.855077 kernel: Key type asymmetric registered
Sep 13 00:15:39.855084 kernel: Asymmetric key parser 'x509' registered
Sep 13 00:15:39.855092 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 13 00:15:39.855099 kernel: io scheduler mq-deadline registered
Sep 13 00:15:39.855107 kernel: io scheduler kyber registered
Sep 13 00:15:39.855114 kernel: io scheduler bfq registered
Sep 13 00:15:39.855123 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 13 00:15:39.855130 kernel: ACPI: button: Power Button [PWRB]
Sep 13 00:15:39.855138 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 13 00:15:39.855208 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 13 00:15:39.855218 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 13 00:15:39.855225 kernel: thunder_xcv, ver 1.0
Sep 13 00:15:39.855232 kernel: thunder_bgx, ver 1.0
Sep 13 00:15:39.855239 kernel: nicpf, ver 1.0
Sep 13 00:15:39.855247 kernel: nicvf, ver 1.0
Sep 13 00:15:39.855333 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 13 00:15:39.855402 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-13T00:15:39 UTC (1757722539)
Sep 13 00:15:39.855412 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 13 00:15:39.855448 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Sep 13 00:15:39.855456 kernel: watchdog: Delayed init of the lockup detector failed: -19
Sep 13 00:15:39.855463 kernel: watchdog: Hard watchdog permanently disabled
Sep 13 00:15:39.855471 kernel: NET: Registered PF_INET6 protocol family
Sep 13 00:15:39.855478 kernel: Segment Routing with IPv6
Sep 13 00:15:39.855489 kernel: In-situ OAM (IOAM) with IPv6
Sep 13 00:15:39.855497 kernel: NET: Registered PF_PACKET protocol family
Sep 13 00:15:39.855504 kernel: Key type dns_resolver registered
Sep 13 00:15:39.855511 kernel: registered taskstats version 1
Sep 13 00:15:39.855519 kernel: Loading compiled-in X.509 certificates
Sep 13 00:15:39.855527 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 036ad4721a31543be5c000f2896b40d1e5515c6e'
Sep 13 00:15:39.855534 kernel: Key type .fscrypt registered
Sep 13 00:15:39.855541 kernel: Key type fscrypt-provisioning registered
Sep 13 00:15:39.855549 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 13 00:15:39.855558 kernel: ima: Allocated hash algorithm: sha1
Sep 13 00:15:39.855565 kernel: ima: No architecture policies found
Sep 13 00:15:39.855573 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 13 00:15:39.855580 kernel: clk: Disabling unused clocks
Sep 13 00:15:39.855587 kernel: Freeing unused kernel memory: 39488K
Sep 13 00:15:39.855595 kernel: Run /init as init process
Sep 13 00:15:39.855602 kernel:   with arguments:
Sep 13 00:15:39.855609 kernel:     /init
Sep 13 00:15:39.855616 kernel:   with environment:
Sep 13 00:15:39.855625 kernel:     HOME=/
Sep 13 00:15:39.855633 kernel:     TERM=linux
Sep 13 00:15:39.855640 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 13 00:15:39.855649 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 13 00:15:39.855659 systemd[1]: Detected virtualization kvm.
Sep 13 00:15:39.855668 systemd[1]: Detected architecture arm64.
Sep 13 00:15:39.855675 systemd[1]: Running in initrd.
Sep 13 00:15:39.855684 systemd[1]: No hostname configured, using default hostname.
Sep 13 00:15:39.855692 systemd[1]: Hostname set to <localhost>.
Sep 13 00:15:39.855701 systemd[1]: Initializing machine ID from VM UUID.
Sep 13 00:15:39.855709 systemd[1]: Queued start job for default target initrd.target.
Sep 13 00:15:39.855717 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 00:15:39.855725 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 00:15:39.855734 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 13 00:15:39.855742 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 13 00:15:39.855751 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 13 00:15:39.855760 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 13 00:15:39.855769 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 13 00:15:39.855778 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 13 00:15:39.855786 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 00:15:39.855794 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 13 00:15:39.855802 systemd[1]: Reached target paths.target - Path Units.
Sep 13 00:15:39.855812 systemd[1]: Reached target slices.target - Slice Units.
Sep 13 00:15:39.855820 systemd[1]: Reached target swap.target - Swaps.
Sep 13 00:15:39.855828 systemd[1]: Reached target timers.target - Timer Units.
Sep 13 00:15:39.855836 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 13 00:15:39.855844 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 13 00:15:39.855852 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 13 00:15:39.855861 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 13 00:15:39.855869 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 00:15:39.855877 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 13 00:15:39.855887 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 00:15:39.855895 systemd[1]: Reached target sockets.target - Socket Units.
Sep 13 00:15:39.855903 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 13 00:15:39.855911 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 13 00:15:39.855920 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 13 00:15:39.855928 systemd[1]: Starting systemd-fsck-usr.service...
Sep 13 00:15:39.855936 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 13 00:15:39.855944 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 13 00:15:39.855964 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:15:39.855974 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 13 00:15:39.855982 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 00:15:39.855990 systemd[1]: Finished systemd-fsck-usr.service.
Sep 13 00:15:39.856017 systemd-journald[238]: Collecting audit messages is disabled.
Sep 13 00:15:39.856038 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 13 00:15:39.856047 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:15:39.856055 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 13 00:15:39.856064 systemd-journald[238]: Journal started
Sep 13 00:15:39.856084 systemd-journald[238]: Runtime Journal (/run/log/journal/78ef9b20e7e2401d88f277a2b5a7d13b) is 5.9M, max 47.3M, 41.4M free.
Sep 13 00:15:39.845432 systemd-modules-load[239]: Inserted module 'overlay'
Sep 13 00:15:39.858364 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 13 00:15:39.861441 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 13 00:15:39.861858 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:15:39.864183 systemd-modules-load[239]: Inserted module 'br_netfilter'
Sep 13 00:15:39.864957 kernel: Bridge firewalling registered
Sep 13 00:15:39.865601 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 13 00:15:39.867595 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 13 00:15:39.871859 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 13 00:15:39.875596 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 13 00:15:39.880771 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 00:15:39.884189 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 00:15:39.885867 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:15:39.894661 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 13 00:15:39.896592 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:15:39.898546 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 13 00:15:39.912273 dracut-cmdline[281]: dracut-dracut-053
Sep 13 00:15:39.915025 dracut-cmdline[281]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=e1b46f3c9e154636c32f6cde6e746a00a6b37ca7432cb4e16d172c05f584a8c9
Sep 13 00:15:39.919402 systemd-resolved[277]: Positive Trust Anchors:
Sep 13 00:15:39.919412 systemd-resolved[277]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 00:15:39.919459 systemd-resolved[277]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 13 00:15:39.925196 systemd-resolved[277]: Defaulting to hostname 'linux'.
Sep 13 00:15:39.926368 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 13 00:15:39.928954 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 13 00:15:39.983454 kernel: SCSI subsystem initialized
Sep 13 00:15:39.987439 kernel: Loading iSCSI transport class v2.0-870.
Sep 13 00:15:39.996451 kernel: iscsi: registered transport (tcp)
Sep 13 00:15:40.007681 kernel: iscsi: registered transport (qla4xxx)
Sep 13 00:15:40.007712 kernel: QLogic iSCSI HBA Driver
Sep 13 00:15:40.051503 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 13 00:15:40.064609 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 13 00:15:40.093542 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 13 00:15:40.093615 kernel: device-mapper: uevent: version 1.0.3
Sep 13 00:15:40.094451 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 13 00:15:40.145461 kernel: raid6: neonx8   gen() 15741 MB/s
Sep 13 00:15:40.162448 kernel: raid6: neonx4   gen() 15656 MB/s
Sep 13 00:15:40.179443 kernel: raid6: neonx2   gen() 13212 MB/s
Sep 13 00:15:40.196456 kernel: raid6: neonx1   gen() 10494 MB/s
Sep 13 00:15:40.213460 kernel: raid6: int64x8  gen()  6947 MB/s
Sep 13 00:15:40.230443 kernel: raid6: int64x4  gen()  7353 MB/s
Sep 13 00:15:40.247442 kernel: raid6: int64x2  gen()  6125 MB/s
Sep 13 00:15:40.264455 kernel: raid6: int64x1  gen()  5058 MB/s
Sep 13 00:15:40.264489 kernel: raid6: using algorithm neonx8 gen() 15741 MB/s
Sep 13 00:15:40.281484 kernel: raid6: .... xor() 12041 MB/s, rmw enabled
Sep 13 00:15:40.281512 kernel: raid6: using neon recovery algorithm
Sep 13 00:15:40.286651 kernel: xor: measuring software checksum speed
Sep 13 00:15:40.286678 kernel:    8regs           : 19812 MB/sec
Sep 13 00:15:40.287698 kernel:    32regs          : 19674 MB/sec
Sep 13 00:15:40.287711 kernel:    arm64_neon      : 26213 MB/sec
Sep 13 00:15:40.287720 kernel: xor: using function: arm64_neon (26213 MB/sec)
Sep 13 00:15:40.342509 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 13 00:15:40.353046 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 13 00:15:40.363595 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 00:15:40.374818 systemd-udevd[463]: Using default interface naming scheme 'v255'.
Sep 13 00:15:40.378001 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 00:15:40.380706 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 13 00:15:40.401953 dracut-pre-trigger[469]: rd.md=0: removing MD RAID activation
Sep 13 00:15:40.428314 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 13 00:15:40.442586 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 13 00:15:40.493873 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 00:15:40.506625 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 13 00:15:40.518469 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 13 00:15:40.519649 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 13 00:15:40.521605 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 00:15:40.524601 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 13 00:15:40.532649 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 13 00:15:40.547347 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Sep 13 00:15:40.547571 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 13 00:15:40.547472 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 13 00:15:40.550576 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 13 00:15:40.550686 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:15:40.553886 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:15:40.554734 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:15:40.554855 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:15:40.564152 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 13 00:15:40.564173 kernel: GPT:9289727 != 19775487
Sep 13 00:15:40.564184 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 13 00:15:40.564193 kernel: GPT:9289727 != 19775487
Sep 13 00:15:40.564201 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 13 00:15:40.564211 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 13 00:15:40.557007 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:15:40.572967 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:15:40.580599 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (507)
Sep 13 00:15:40.580633 kernel: BTRFS: device fsid 29bc4da8-c689-46a2-a16a-b7bbc722db77 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (517)
Sep 13 00:15:40.586556 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 13 00:15:40.587623 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:15:40.598473 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 13 00:15:40.602667 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 13 00:15:40.606183 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 13 00:15:40.607133 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 13 00:15:40.624553 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 13 00:15:40.626444 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:15:40.629876 disk-uuid[552]: Primary Header is updated.
Sep 13 00:15:40.629876 disk-uuid[552]: Secondary Entries is updated.
Sep 13 00:15:40.629876 disk-uuid[552]: Secondary Header is updated.
Sep 13 00:15:40.633444 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 13 00:15:40.636454 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 13 00:15:40.639430 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 13 00:15:40.647695 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:15:41.641469 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 13 00:15:41.642064 disk-uuid[553]: The operation has completed successfully.
Sep 13 00:15:41.658597 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 13 00:15:41.658695 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 13 00:15:41.680609 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 13 00:15:41.684801 sh[574]: Success
Sep 13 00:15:41.696440 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Sep 13 00:15:41.736590 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 13 00:15:41.754985 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 13 00:15:41.756550 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 13 00:15:41.767119 kernel: BTRFS info (device dm-0): first mount of filesystem 29bc4da8-c689-46a2-a16a-b7bbc722db77
Sep 13 00:15:41.767159 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 13 00:15:41.767170 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 13 00:15:41.767951 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 13 00:15:41.768541 kernel: BTRFS info (device dm-0): using free space tree
Sep 13 00:15:41.773242 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 13 00:15:41.775094 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 13 00:15:41.787585 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 13 00:15:41.788945 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 13 00:15:41.798993 kernel: BTRFS info (device vda6): first mount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368
Sep 13 00:15:41.799043 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 13 00:15:41.799055 kernel: BTRFS info (device vda6): using free space tree
Sep 13 00:15:41.801440 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 13 00:15:41.809236 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 13 00:15:41.811074 kernel: BTRFS info (device vda6): last unmount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368
Sep 13 00:15:41.817953 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 13 00:15:41.823578 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 13 00:15:41.891350 ignition[666]: Ignition 2.19.0
Sep 13 00:15:41.891363 ignition[666]: Stage: fetch-offline
Sep 13 00:15:41.891400 ignition[666]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:15:41.891409 ignition[666]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 00:15:41.891578 ignition[666]: parsed url from cmdline: ""
Sep 13 00:15:41.891582 ignition[666]: no config URL provided
Sep 13 00:15:41.891587 ignition[666]: reading system config file "/usr/lib/ignition/user.ign"
Sep 13 00:15:41.891595 ignition[666]: no config at "/usr/lib/ignition/user.ign"
Sep 13 00:15:41.891619 ignition[666]: op(1): [started]  loading QEMU firmware config module
Sep 13 00:15:41.898176 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 13 00:15:41.891624 ignition[666]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 13 00:15:41.897810 ignition[666]: op(1): [finished] loading QEMU firmware config module
Sep 13 00:15:41.911585 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 13 00:15:41.932435 systemd-networkd[767]: lo: Link UP
Sep 13 00:15:41.932446 systemd-networkd[767]: lo: Gained carrier
Sep 13 00:15:41.933111 systemd-networkd[767]: Enumeration completed
Sep 13 00:15:41.933227 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 13 00:15:41.933568 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:15:41.933572 systemd-networkd[767]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 00:15:41.934553 systemd-networkd[767]: eth0: Link UP
Sep 13 00:15:41.934556 systemd-networkd[767]: eth0: Gained carrier
Sep 13 00:15:41.934562 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:15:41.934914 systemd[1]: Reached target network.target - Network.
Sep 13 00:15:41.955541 ignition[666]: parsing config with SHA512: f53c9f5c31802e86c410cba631883da43238fa015b5eafe0c5a88b80acc5ff232753102a9cd902a52bfb1d5cd860e6ee8075c911a682b133b412020d245edfbe
Sep 13 00:15:41.959674 unknown[666]: fetched base config from "system"
Sep 13 00:15:41.959684 unknown[666]: fetched user config from "qemu"
Sep 13 00:15:41.960238 ignition[666]: fetch-offline: fetch-offline passed
Sep 13 00:15:41.961466 systemd-networkd[767]: eth0: DHCPv4 address 10.0.0.130/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 13 00:15:41.960382 ignition[666]: Ignition finished successfully
Sep 13 00:15:41.961835 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 13 00:15:41.963248 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 13 00:15:41.973608 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 13 00:15:41.983693 ignition[771]: Ignition 2.19.0
Sep 13 00:15:41.983703 ignition[771]: Stage: kargs
Sep 13 00:15:41.983867 ignition[771]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:15:41.983876 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 00:15:41.986772 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 13 00:15:41.984730 ignition[771]: kargs: kargs passed
Sep 13 00:15:41.984781 ignition[771]: Ignition finished successfully
Sep 13 00:15:41.998564 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 13 00:15:42.008272 ignition[779]: Ignition 2.19.0
Sep 13 00:15:42.008283 ignition[779]: Stage: disks
Sep 13 00:15:42.008519 ignition[779]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:15:42.010839 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 13 00:15:42.008529 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 00:15:42.012103 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 13 00:15:42.009372 ignition[779]: disks: disks passed
Sep 13 00:15:42.013379 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 13 00:15:42.009432 ignition[779]: Ignition finished successfully
Sep 13 00:15:42.015134 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 13 00:15:42.016645 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 13 00:15:42.017878 systemd[1]: Reached target basic.target - Basic System.
Sep 13 00:15:42.026572 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 13 00:15:42.037186 systemd-fsck[790]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 13 00:15:42.040482 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 13 00:15:42.043552 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 13 00:15:42.087315 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 13 00:15:42.088685 kernel: EXT4-fs (vda9): mounted filesystem d35fd879-6758-447b-9fdd-bb21dd7c5b2b r/w with ordered data mode. Quota mode: none.
Sep 13 00:15:42.089185 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 13 00:15:42.098521 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 13 00:15:42.100941 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 13 00:15:42.102174 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 13 00:15:42.107011 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (798)
Sep 13 00:15:42.107034 kernel: BTRFS info (device vda6): first mount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368
Sep 13 00:15:42.107045 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 13 00:15:42.102215 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 13 00:15:42.110796 kernel: BTRFS info (device vda6): using free space tree
Sep 13 00:15:42.110815 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 13 00:15:42.102237 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 13 00:15:42.107655 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 13 00:15:42.112224 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 13 00:15:42.113928 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 13 00:15:42.145649 initrd-setup-root[822]: cut: /sysroot/etc/passwd: No such file or directory
Sep 13 00:15:42.148710 initrd-setup-root[829]: cut: /sysroot/etc/group: No such file or directory
Sep 13 00:15:42.153135 initrd-setup-root[836]: cut: /sysroot/etc/shadow: No such file or directory
Sep 13 00:15:42.157159 initrd-setup-root[843]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 13 00:15:42.223694 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 13 00:15:42.230528 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 13 00:15:42.231824 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 13 00:15:42.236439 kernel: BTRFS info (device vda6): last unmount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368
Sep 13 00:15:42.251391 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 13 00:15:42.254071 ignition[910]: INFO     : Ignition 2.19.0
Sep 13 00:15:42.254071 ignition[910]: INFO     : Stage: mount
Sep 13 00:15:42.256482 ignition[910]: INFO     : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:15:42.256482 ignition[910]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 00:15:42.256482 ignition[910]: INFO     : mount: mount passed
Sep 13 00:15:42.256482 ignition[910]: INFO     : Ignition finished successfully
Sep 13 00:15:42.257073 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 13 00:15:42.267549 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 13 00:15:42.766240 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 13 00:15:42.777801 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 13 00:15:42.784443 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (924)
Sep 13 00:15:42.786712 kernel: BTRFS info (device vda6): first mount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368
Sep 13 00:15:42.786731 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 13 00:15:42.786742 kernel: BTRFS info (device vda6): using free space tree
Sep 13 00:15:42.791467 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 13 00:15:42.793599 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 13 00:15:42.821403 ignition[941]: INFO     : Ignition 2.19.0
Sep 13 00:15:42.821403 ignition[941]: INFO     : Stage: files
Sep 13 00:15:42.823304 ignition[941]: INFO     : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:15:42.823304 ignition[941]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 00:15:42.823304 ignition[941]: DEBUG    : files: compiled without relabeling support, skipping
Sep 13 00:15:42.827095 ignition[941]: INFO     : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Sep 13 00:15:42.827095 ignition[941]: DEBUG    : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 13 00:15:42.827095 ignition[941]: INFO     : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 13 00:15:42.827095 ignition[941]: INFO     : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Sep 13 00:15:42.827095 ignition[941]: INFO     : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 13 00:15:42.826304 unknown[941]: wrote ssh authorized keys file for user: core
Sep 13 00:15:42.833512 ignition[941]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 13 00:15:42.833512 ignition[941]: INFO     : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Sep 13 00:15:42.911302 ignition[941]: INFO     : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 13 00:15:43.386548 ignition[941]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 13 00:15:43.388224 ignition[941]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/home/core/install.sh"
Sep 13 00:15:43.388224 ignition[941]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 13 00:15:43.388224 ignition[941]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [started]  writing file "/sysroot/home/core/nginx.yaml"
Sep 13 00:15:43.388224 ignition[941]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 00:15:43.388224 ignition[941]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [started]  writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 00:15:43.388224 ignition[941]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 00:15:43.388224 ignition[941]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [started]  writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 00:15:43.388224 ignition[941]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 00:15:43.388224 ignition[941]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 00:15:43.388224 ignition[941]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 00:15:43.388224 ignition[941]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [started]  writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 13 00:15:43.388224 ignition[941]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 13 00:15:43.388224 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 13 00:15:43.388224 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Sep 13 00:15:43.553648 systemd-networkd[767]: eth0: Gained IPv6LL Sep 13 00:15:43.742142 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 13 00:15:44.277873 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 13 00:15:44.277873 ignition[941]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 13 00:15:44.280806 ignition[941]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 00:15:44.280806 ignition[941]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 00:15:44.280806 ignition[941]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 13 00:15:44.280806 ignition[941]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Sep 13 00:15:44.280806 ignition[941]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 13 00:15:44.280806 ignition[941]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 13 00:15:44.280806 ignition[941]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Sep 13 00:15:44.280806 ignition[941]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Sep 13 00:15:44.315802 ignition[941]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 13 00:15:44.320859 ignition[941]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 13 00:15:44.323151 ignition[941]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Sep 13 00:15:44.323151 ignition[941]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Sep 13 00:15:44.323151 ignition[941]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Sep 13 00:15:44.323151 ignition[941]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 13 00:15:44.323151 ignition[941]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 13 00:15:44.323151 ignition[941]: INFO : files: files passed Sep 13 00:15:44.323151 ignition[941]: INFO : Ignition finished successfully Sep 13 00:15:44.324774 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 13 00:15:44.341942 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 13 00:15:44.345849 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Sep 13 00:15:44.349367 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 13 00:15:44.349483 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 13 00:15:44.353675 initrd-setup-root-after-ignition[969]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 13 00:15:44.356841 initrd-setup-root-after-ignition[971]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:15:44.356841 initrd-setup-root-after-ignition[971]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:15:44.359910 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:15:44.361015 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 13 00:15:44.362960 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 13 00:15:44.371638 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 13 00:15:44.390087 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 13 00:15:44.390195 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 13 00:15:44.392297 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 13 00:15:44.393998 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 13 00:15:44.395499 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 13 00:15:44.396241 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 13 00:15:44.413005 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 13 00:15:44.424633 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 13 00:15:44.438734 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 13 00:15:44.440003 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 00:15:44.442118 systemd[1]: Stopped target timers.target - Timer Units.
Sep 13 00:15:44.443977 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 13 00:15:44.444099 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 13 00:15:44.446719 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 13 00:15:44.448783 systemd[1]: Stopped target basic.target - Basic System.
Sep 13 00:15:44.450457 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 13 00:15:44.452305 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 13 00:15:44.454362 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 13 00:15:44.456487 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 13 00:15:44.458507 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 13 00:15:44.460616 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 13 00:15:44.462582 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 13 00:15:44.464362 systemd[1]: Stopped target swap.target - Swaps.
Sep 13 00:15:44.465985 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 13 00:15:44.466110 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 13 00:15:44.468545 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 13 00:15:44.470641 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 13 00:15:44.472654 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 13 00:15:44.472782 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 13 00:15:44.474778 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 13 00:15:44.474898 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 13 00:15:44.477755 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 13 00:15:44.477874 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 13 00:15:44.479960 systemd[1]: Stopped target paths.target - Path Units. Sep 13 00:15:44.481665 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 13 00:15:44.481772 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 13 00:15:44.484005 systemd[1]: Stopped target slices.target - Slice Units. Sep 13 00:15:44.486228 systemd[1]: Stopped target sockets.target - Socket Units. Sep 13 00:15:44.487910 systemd[1]: iscsid.socket: Deactivated successfully. Sep 13 00:15:44.487998 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 13 00:15:44.490515 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 13 00:15:44.490603 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 13 00:15:44.493081 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 13 00:15:44.493189 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 13 00:15:44.494968 systemd[1]: ignition-files.service: Deactivated successfully. Sep 13 00:15:44.495068 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 13 00:15:44.506628 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 13 00:15:44.508295 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 13 00:15:44.509291 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 13 00:15:44.509451 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 13 00:15:44.511488 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 13 00:15:44.511595 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 13 00:15:44.517441 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 13 00:15:44.517534 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 13 00:15:44.522061 ignition[997]: INFO : Ignition 2.19.0 Sep 13 00:15:44.522061 ignition[997]: INFO : Stage: umount Sep 13 00:15:44.526511 ignition[997]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:15:44.526511 ignition[997]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:15:44.526511 ignition[997]: INFO : umount: umount passed Sep 13 00:15:44.526511 ignition[997]: INFO : Ignition finished successfully Sep 13 00:15:44.522924 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 13 00:15:44.525257 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 13 00:15:44.525358 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 13 00:15:44.527443 systemd[1]: Stopped target network.target - Network. Sep 13 00:15:44.528916 systemd[1]: ignition-disks.service: Deactivated successfully. 
Sep 13 00:15:44.528974 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 13 00:15:44.531009 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 13 00:15:44.531052 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 13 00:15:44.532347 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 13 00:15:44.532382 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 13 00:15:44.535565 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 13 00:15:44.535606 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 13 00:15:44.537349 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 13 00:15:44.538968 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 13 00:15:44.544453 systemd-networkd[767]: eth0: DHCPv6 lease lost Sep 13 00:15:44.545876 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 13 00:15:44.546018 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 13 00:15:44.547621 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 13 00:15:44.547742 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 13 00:15:44.550239 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 13 00:15:44.550291 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 13 00:15:44.561547 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 13 00:15:44.562329 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 13 00:15:44.562380 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 13 00:15:44.564229 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 13 00:15:44.564278 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 13 00:15:44.566215 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 13 00:15:44.566268 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 13 00:15:44.569455 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 13 00:15:44.569497 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 13 00:15:44.571611 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 13 00:15:44.580576 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 13 00:15:44.580685 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 13 00:15:44.582201 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 13 00:15:44.582287 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 13 00:15:44.583808 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 13 00:15:44.583897 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 13 00:15:44.600166 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 13 00:15:44.600326 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 13 00:15:44.602332 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 13 00:15:44.602374 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 13 00:15:44.603655 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 13 00:15:44.603684 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Sep 13 00:15:44.605094 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 13 00:15:44.605137 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 13 00:15:44.607216 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 13 00:15:44.607273 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 13 00:15:44.609389 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 13 00:15:44.609444 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 13 00:15:44.622583 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 13 00:15:44.623369 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 13 00:15:44.623443 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 13 00:15:44.625186 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 13 00:15:44.625228 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 13 00:15:44.626807 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 13 00:15:44.626844 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 13 00:15:44.628443 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 00:15:44.628480 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:15:44.630480 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 13 00:15:44.631956 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 13 00:15:44.634231 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 13 00:15:44.636030 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 13 00:15:44.644910 systemd[1]: Switching root. Sep 13 00:15:44.664306 systemd-journald[238]: Journal stopped Sep 13 00:15:45.414122 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). Sep 13 00:15:45.414189 kernel: SELinux: policy capability network_peer_controls=1 Sep 13 00:15:45.414201 kernel: SELinux: policy capability open_perms=1 Sep 13 00:15:45.414213 kernel: SELinux: policy capability extended_socket_class=1 Sep 13 00:15:45.414227 kernel: SELinux: policy capability always_check_network=0 Sep 13 00:15:45.414251 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 13 00:15:45.414263 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 13 00:15:45.414277 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 13 00:15:45.414287 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 13 00:15:45.414312 kernel: audit: type=1403 audit(1757722544.803:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 13 00:15:45.414324 systemd[1]: Successfully loaded SELinux policy in 30.902ms. Sep 13 00:15:45.414410 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.405ms. Sep 13 00:15:45.414440 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 13 00:15:45.414452 systemd[1]: Detected virtualization kvm. Sep 13 00:15:45.414463 systemd[1]: Detected architecture arm64. 
Sep 13 00:15:45.414473 systemd[1]: Detected first boot. Sep 13 00:15:45.414502 systemd[1]: Initializing machine ID from VM UUID. Sep 13 00:15:45.414516 zram_generator::config[1042]: No configuration found. Sep 13 00:15:45.414528 systemd[1]: Populated /etc with preset unit settings. Sep 13 00:15:45.414539 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 13 00:15:45.414549 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 13 00:15:45.414626 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 13 00:15:45.414643 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 13 00:15:45.414654 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 13 00:15:45.414671 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 13 00:15:45.414682 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 13 00:15:45.414693 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 13 00:15:45.414703 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 13 00:15:45.414714 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 13 00:15:45.414725 systemd[1]: Created slice user.slice - User and Session Slice. Sep 13 00:15:45.414735 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 13 00:15:45.414746 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 13 00:15:45.414757 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 13 00:15:45.414769 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 13 00:15:45.414780 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 13 00:15:45.414791 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 13 00:15:45.414802 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Sep 13 00:15:45.414812 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 13 00:15:45.414823 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 13 00:15:45.414833 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 13 00:15:45.414844 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 13 00:15:45.414856 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 13 00:15:45.414867 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 13 00:15:45.414877 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 13 00:15:45.414888 systemd[1]: Reached target slices.target - Slice Units. Sep 13 00:15:45.414900 systemd[1]: Reached target swap.target - Swaps. Sep 13 00:15:45.414910 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 13 00:15:45.414921 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 13 00:15:45.414932 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 13 00:15:45.414942 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Sep 13 00:15:45.414955 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 13 00:15:45.414966 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 13 00:15:45.414976 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 13 00:15:45.414987 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 13 00:15:45.414998 systemd[1]: Mounting media.mount - External Media Directory... Sep 13 00:15:45.415008 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 13 00:15:45.415018 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 13 00:15:45.415029 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 13 00:15:45.415041 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 13 00:15:45.415053 systemd[1]: Reached target machines.target - Containers. Sep 13 00:15:45.415063 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 13 00:15:45.415074 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 00:15:45.415085 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 13 00:15:45.415096 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 13 00:15:45.415106 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 13 00:15:45.415117 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 13 00:15:45.415128 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 13 00:15:45.415140 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 13 00:15:45.415151 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 13 00:15:45.415162 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 13 00:15:45.415173 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 13 00:15:45.415184 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 13 00:15:45.415194 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 13 00:15:45.415205 systemd[1]: Stopped systemd-fsck-usr.service. Sep 13 00:15:45.415217 kernel: ACPI: bus type drm_connector registered Sep 13 00:15:45.415229 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 13 00:15:45.415246 kernel: loop: module loaded Sep 13 00:15:45.415258 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 13 00:15:45.415269 kernel: fuse: init (API version 7.39) Sep 13 00:15:45.415279 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 13 00:15:45.415289 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 13 00:15:45.415300 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 13 00:15:45.415310 systemd[1]: verity-setup.service: Deactivated successfully. Sep 13 00:15:45.415346 systemd-journald[1109]: Collecting audit messages is disabled. Sep 13 00:15:45.415371 systemd[1]: Stopped verity-setup.service. 
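"Detected first boot" and "Initializing machine ID from VM UUID" a little earlier record systemd seeding /etc/machine-id from the hypervisor-provided DMI product UUID. A quick check of that relationship on a running VM — the DMI path is the usual Linux location, and the strip-the-dashes equivalence is an assumption worth verifying against machine-id-setup(1):

```python
from pathlib import Path

# On first boot systemd can seed /etc/machine-id from the DMI product UUID
# exposed by the hypervisor. Assumed relationship: the machine ID is the
# product UUID lowercased with the dashes stripped.
product_uuid = Path("/sys/class/dmi/id/product_uuid").read_text().strip().lower()
machine_id = Path("/etc/machine-id").read_text().strip()
print("seeded from VM UUID:", product_uuid.replace("-", "") == machine_id)
```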
Sep 13 00:15:45.415382 systemd-journald[1109]: Journal started Sep 13 00:15:45.415403 systemd-journald[1109]: Runtime Journal (/run/log/journal/78ef9b20e7e2401d88f277a2b5a7d13b) is 5.9M, max 47.3M, 41.4M free. Sep 13 00:15:45.192687 systemd[1]: Queued start job for default target multi-user.target. Sep 13 00:15:45.212016 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 13 00:15:45.212382 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 13 00:15:45.417446 systemd[1]: Started systemd-journald.service - Journal Service. Sep 13 00:15:45.418724 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 13 00:15:45.419955 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 13 00:15:45.422104 systemd[1]: Mounted media.mount - External Media Directory. Sep 13 00:15:45.423010 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 13 00:15:45.424104 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 13 00:15:45.425152 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 13 00:15:45.426720 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 13 00:15:45.428508 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 13 00:15:45.430051 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 13 00:15:45.430191 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 13 00:15:45.431716 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:15:45.431860 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 13 00:15:45.433294 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 00:15:45.433451 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 13 00:15:45.434847 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:15:45.434985 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 13 00:15:45.436507 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 13 00:15:45.436633 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 13 00:15:45.438089 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:15:45.438220 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 13 00:15:45.439694 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 13 00:15:45.441142 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 13 00:15:45.442745 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 13 00:15:45.455625 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 13 00:15:45.465535 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 13 00:15:45.467769 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 13 00:15:45.468986 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 13 00:15:45.469030 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 13 00:15:45.471161 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 13 00:15:45.473742 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
Sep 13 00:15:45.476030 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 13 00:15:45.477294 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 13 00:15:45.481624 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 13 00:15:45.484681 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 13 00:15:45.485958 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:15:45.489678 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 13 00:15:45.490790 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 13 00:15:45.494289 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 13 00:15:45.496892 systemd-journald[1109]: Time spent on flushing to /var/log/journal/78ef9b20e7e2401d88f277a2b5a7d13b is 19.884ms for 855 entries. Sep 13 00:15:45.496892 systemd-journald[1109]: System Journal (/var/log/journal/78ef9b20e7e2401d88f277a2b5a7d13b) is 8.0M, max 195.6M, 187.6M free. Sep 13 00:15:45.529572 systemd-journald[1109]: Received client request to flush runtime journal. Sep 13 00:15:45.529631 kernel: loop0: detected capacity change from 0 to 211168 Sep 13 00:15:45.499326 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 13 00:15:45.508655 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 13 00:15:45.515225 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 13 00:15:45.516729 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 13 00:15:45.517855 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 13 00:15:45.519289 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 13 00:15:45.521344 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 13 00:15:45.524813 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 13 00:15:45.530557 systemd-tmpfiles[1154]: ACLs are not supported, ignoring. Sep 13 00:15:45.530568 systemd-tmpfiles[1154]: ACLs are not supported, ignoring. Sep 13 00:15:45.535578 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 13 00:15:45.537948 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 13 00:15:45.541263 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 13 00:15:45.549724 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 13 00:15:45.554615 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 13 00:15:45.555687 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 13 00:15:45.558673 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 13 00:15:45.565461 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 13 00:15:45.569476 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 13 00:15:45.573609 udevadm[1171]: systemd-udev-settle.service is deprecated. 
Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 13 00:15:45.593902 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 13 00:15:45.599519 kernel: loop1: detected capacity change from 0 to 114432 Sep 13 00:15:45.609593 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 13 00:15:45.623632 systemd-tmpfiles[1176]: ACLs are not supported, ignoring. Sep 13 00:15:45.623651 systemd-tmpfiles[1176]: ACLs are not supported, ignoring. Sep 13 00:15:45.628545 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 13 00:15:45.638526 kernel: loop2: detected capacity change from 0 to 114328 Sep 13 00:15:45.681458 kernel: loop3: detected capacity change from 0 to 211168 Sep 13 00:15:45.700458 kernel: loop4: detected capacity change from 0 to 114432 Sep 13 00:15:45.709438 kernel: loop5: detected capacity change from 0 to 114328 Sep 13 00:15:45.715155 (sd-merge)[1183]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 13 00:15:45.715600 (sd-merge)[1183]: Merged extensions into '/usr'. Sep 13 00:15:45.720124 systemd[1]: Reloading requested from client PID 1153 ('systemd-sysext') (unit systemd-sysext.service)... Sep 13 00:15:45.720145 systemd[1]: Reloading... Sep 13 00:15:45.776442 zram_generator::config[1211]: No configuration found. Sep 13 00:15:45.807146 ldconfig[1148]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 13 00:15:45.889703 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:15:45.926739 systemd[1]: Reloading finished in 205 ms. Sep 13 00:15:45.953584 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 13 00:15:45.959593 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 13 00:15:45.973614 systemd[1]: Starting ensure-sysext.service... Sep 13 00:15:45.975626 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 13 00:15:45.986448 systemd[1]: Reloading requested from client PID 1244 ('systemctl') (unit ensure-sysext.service)... Sep 13 00:15:45.986468 systemd[1]: Reloading... Sep 13 00:15:45.996928 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 13 00:15:45.997217 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 13 00:15:45.997896 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 13 00:15:45.998117 systemd-tmpfiles[1245]: ACLs are not supported, ignoring. Sep 13 00:15:45.998175 systemd-tmpfiles[1245]: ACLs are not supported, ignoring. Sep 13 00:15:46.000938 systemd-tmpfiles[1245]: Detected autofs mount point /boot during canonicalization of boot. Sep 13 00:15:46.000953 systemd-tmpfiles[1245]: Skipping /boot Sep 13 00:15:46.008356 systemd-tmpfiles[1245]: Detected autofs mount point /boot during canonicalization of boot. Sep 13 00:15:46.008373 systemd-tmpfiles[1245]: Skipping /boot Sep 13 00:15:46.033461 zram_generator::config[1272]: No configuration found. 
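The (sd-merge) entries above show systemd-sysext finding three extension images and merging them into '/usr', which is why the subsequent daemon reload picks up containerd, docker, and kubernetes units. A minimal sketch of the discovery step, using the search directories documented in systemd-sysext(8); this is an illustrative re-creation, not systemd's actual code:

```python
from pathlib import Path

# Directories systemd-sysext(8) searches for extension images.
SYSEXT_DIRS = [
    "/etc/extensions",        # kubernetes.raw was linked here by Ignition
    "/run/extensions",
    "/var/lib/extensions",
    "/usr/local/lib/extensions",
    "/usr/lib/extensions",
]

def discover_extensions():
    """Yield (name, path) for every *.raw image in the sysext search path."""
    for d in SYSEXT_DIRS:
        base = Path(d)
        if not base.is_dir():
            continue
        for img in sorted(base.glob("*.raw")):
            yield img.stem, img

for name, path in discover_extensions():
    print(f"{name}: {path}")
```

At runtime, `systemd-sysext status` reports the same merge state that (sd-merge) logs here; on this host the kubernetes image is reached via the /etc/extensions/kubernetes.raw symlink written during the Ignition files stage.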
Sep 13 00:15:46.122123 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:15:46.159698 systemd[1]: Reloading finished in 172 ms. Sep 13 00:15:46.173967 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 13 00:15:46.186841 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 13 00:15:46.195553 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 13 00:15:46.198450 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 13 00:15:46.201729 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 13 00:15:46.206842 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 13 00:15:46.209724 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 13 00:15:46.212189 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 13 00:15:46.216572 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 00:15:46.228062 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 13 00:15:46.230771 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 13 00:15:46.235835 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 13 00:15:46.236988 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 13 00:15:46.238814 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 13 00:15:46.242448 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 13 00:15:46.244284 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:15:46.244710 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 13 00:15:46.247890 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:15:46.253161 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 13 00:15:46.255520 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:15:46.255885 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 13 00:15:46.265788 augenrules[1334]: No rules Sep 13 00:15:46.268510 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 13 00:15:46.270558 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 13 00:15:46.271962 systemd-udevd[1313]: Using default interface naming scheme 'v255'. Sep 13 00:15:46.275442 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 13 00:15:46.279562 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 00:15:46.287721 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 13 00:15:46.292729 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 13 00:15:46.296515 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 13 00:15:46.309806 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Sep 13 00:15:46.311455 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 13 00:15:46.314672 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 13 00:15:46.315557 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 13 00:15:46.316480 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 13 00:15:46.321053 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 13 00:15:46.322670 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:15:46.322849 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 13 00:15:46.324311 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 00:15:46.324464 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 13 00:15:46.325881 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:15:46.326007 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 13 00:15:46.329174 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:15:46.329329 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 13 00:15:46.333725 systemd[1]: Finished ensure-sysext.service. Sep 13 00:15:46.343162 systemd-resolved[1312]: Positive Trust Anchors: Sep 13 00:15:46.343183 systemd-resolved[1312]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 00:15:46.343219 systemd-resolved[1312]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 13 00:15:46.355146 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Sep 13 00:15:46.358908 systemd-resolved[1312]: Defaulting to hostname 'linux'. Sep 13 00:15:46.374436 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1362) Sep 13 00:15:46.374791 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 13 00:15:46.376077 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:15:46.378630 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 13 00:15:46.380904 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 13 00:15:46.382042 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 13 00:15:46.385982 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 13 00:15:46.394537 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 13 00:15:46.398055 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Sep 13 00:15:46.401115 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 13 00:15:46.430632 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 13 00:15:46.436304 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 13 00:15:46.438531 systemd[1]: Reached target time-set.target - System Time Set. Sep 13 00:15:46.443611 systemd-networkd[1380]: lo: Link UP Sep 13 00:15:46.443621 systemd-networkd[1380]: lo: Gained carrier Sep 13 00:15:46.444408 systemd-networkd[1380]: Enumeration completed Sep 13 00:15:46.444522 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 13 00:15:46.445466 systemd[1]: Reached target network.target - Network. Sep 13 00:15:46.447841 systemd-networkd[1380]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:15:46.447849 systemd-networkd[1380]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:15:46.448569 systemd-networkd[1380]: eth0: Link UP Sep 13 00:15:46.448578 systemd-networkd[1380]: eth0: Gained carrier Sep 13 00:15:46.448592 systemd-networkd[1380]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:15:46.456015 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 13 00:15:46.464759 systemd-networkd[1380]: eth0: DHCPv4 address 10.0.0.130/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 13 00:15:46.465812 systemd-timesyncd[1385]: Network configuration changed, trying to establish connection. Sep 13 00:15:46.889503 systemd-timesyncd[1385]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 13 00:15:46.889582 systemd-timesyncd[1385]: Initial clock synchronization to Sat 2025-09-13 00:15:46.889373 UTC. Sep 13 00:15:46.889640 systemd-resolved[1312]: Clock change detected. Flushing caches. Sep 13 00:15:46.904887 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:15:46.912738 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 13 00:15:46.924900 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 13 00:15:46.934631 lvm[1400]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 00:15:46.944121 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:15:46.972113 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 13 00:15:46.973314 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 13 00:15:46.974228 systemd[1]: Reached target sysinit.target - System Initialization. Sep 13 00:15:46.975144 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 13 00:15:46.976121 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 13 00:15:46.977237 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 13 00:15:46.978155 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 13 00:15:46.979131 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
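The apparent jump in journal timestamps around the timesyncd entries above (00:15:46.465812 → 00:15:46.889503, with resolved noting "Clock change detected. Flushing caches.") is the initial NTP step being applied, not missing log data. The adjacent timestamps bound the size of the step; the exact adjustment itself is not logged:

```python
# Wall-clock step applied at initial NTP synchronization, bounded by the two
# adjacent journal timestamps around "Contacted time server" above.
before = 46.465812   # seconds field of the last pre-sync entry
after  = 46.889503   # seconds field of the first post-sync entry
print(f"clock stepped forward by ~{(after - before) * 1000:.1f} ms")
# -> clock stepped forward by ~423.7 ms
```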
Sep 13 00:15:46.980206 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 13 00:15:46.980243 systemd[1]: Reached target paths.target - Path Units. Sep 13 00:15:46.980943 systemd[1]: Reached target timers.target - Timer Units. Sep 13 00:15:46.982430 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 13 00:15:46.984590 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 13 00:15:46.992545 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 13 00:15:46.994642 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 13 00:15:46.995885 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 13 00:15:46.996814 systemd[1]: Reached target sockets.target - Socket Units. Sep 13 00:15:46.997517 systemd[1]: Reached target basic.target - Basic System. Sep 13 00:15:46.998320 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 13 00:15:46.998349 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 13 00:15:46.999281 systemd[1]: Starting containerd.service - containerd container runtime... Sep 13 00:15:47.001045 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 13 00:15:47.002391 lvm[1407]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 00:15:47.004477 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 13 00:15:47.008822 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 13 00:15:47.010871 jq[1410]: false Sep 13 00:15:47.010427 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 13 00:15:47.011574 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 13 00:15:47.014736 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 13 00:15:47.019313 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 13 00:15:47.022078 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 13 00:15:47.022917 extend-filesystems[1411]: Found loop3 Sep 13 00:15:47.024390 extend-filesystems[1411]: Found loop4 Sep 13 00:15:47.024390 extend-filesystems[1411]: Found loop5 Sep 13 00:15:47.024390 extend-filesystems[1411]: Found vda Sep 13 00:15:47.024390 extend-filesystems[1411]: Found vda1 Sep 13 00:15:47.024390 extend-filesystems[1411]: Found vda2 Sep 13 00:15:47.024390 extend-filesystems[1411]: Found vda3 Sep 13 00:15:47.024390 extend-filesystems[1411]: Found usr Sep 13 00:15:47.024390 extend-filesystems[1411]: Found vda4 Sep 13 00:15:47.024390 extend-filesystems[1411]: Found vda6 Sep 13 00:15:47.024390 extend-filesystems[1411]: Found vda7 Sep 13 00:15:47.024390 extend-filesystems[1411]: Found vda9 Sep 13 00:15:47.024390 extend-filesystems[1411]: Checking size of /dev/vda9 Sep 13 00:15:47.025822 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 13 00:15:47.035072 dbus-daemon[1409]: [system] SELinux support is enabled Sep 13 00:15:47.028259 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Sep 13 00:15:47.028954 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 13 00:15:47.030032 systemd[1]: Starting update-engine.service - Update Engine... Sep 13 00:15:47.037891 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 13 00:15:47.043322 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 13 00:15:47.047768 extend-filesystems[1411]: Resized partition /dev/vda9 Sep 13 00:15:47.050730 extend-filesystems[1432]: resize2fs 1.47.1 (20-May-2024) Sep 13 00:15:47.050379 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 13 00:15:47.053351 jq[1423]: true Sep 13 00:15:47.055809 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 13 00:15:47.055981 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 13 00:15:47.056233 systemd[1]: motdgen.service: Deactivated successfully. Sep 13 00:15:47.056363 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 13 00:15:47.057611 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 13 00:15:47.058582 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 13 00:15:47.058756 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 13 00:15:47.063825 update_engine[1420]: I20250913 00:15:47.063539 1420 main.cc:92] Flatcar Update Engine starting Sep 13 00:15:47.069888 update_engine[1420]: I20250913 00:15:47.069830 1420 update_check_scheduler.cc:74] Next update check in 5m57s Sep 13 00:15:47.079029 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 13 00:15:47.079067 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 13 00:15:47.081886 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 13 00:15:47.081910 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 13 00:15:47.083134 systemd[1]: Started update-engine.service - Update Engine. Sep 13 00:15:47.084639 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1357) Sep 13 00:15:47.084066 (ntainerd)[1444]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 13 00:15:47.086751 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 13 00:15:47.098608 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 13 00:15:47.099211 tar[1434]: linux-arm64/LICENSE Sep 13 00:15:47.106655 jq[1435]: true Sep 13 00:15:47.125422 extend-filesystems[1432]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 13 00:15:47.125422 extend-filesystems[1432]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 13 00:15:47.125422 extend-filesystems[1432]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 13 00:15:47.134741 tar[1434]: linux-arm64/helm Sep 13 00:15:47.127133 systemd[1]: extend-filesystems.service: Deactivated successfully. 
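The EXT4-fs and extend-filesystems messages above record the root filesystem being grown online from 553472 to 1864699 blocks of 4 KiB. In byte terms:

```python
# Root filesystem growth reported by the EXT4-fs / resize2fs messages above,
# converted from 4 KiB blocks to GiB.
BLOCK = 4096
for label, blocks in [("before", 553_472), ("after", 1_864_699)]:
    print(f"{label}: {blocks * BLOCK / 2**30:.2f} GiB")
# before: 2.11 GiB
# after: 7.11 GiB
```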
Sep 13 00:15:47.134929 extend-filesystems[1411]: Resized filesystem in /dev/vda9 Sep 13 00:15:47.129691 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 13 00:15:47.135707 systemd-logind[1419]: Watching system buttons on /dev/input/event0 (Power Button) Sep 13 00:15:47.136720 systemd-logind[1419]: New seat seat0. Sep 13 00:15:47.137349 systemd[1]: Started systemd-logind.service - User Login Management. Sep 13 00:15:47.169553 bash[1465]: Updated "/home/core/.ssh/authorized_keys" Sep 13 00:15:47.174066 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 13 00:15:47.175837 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 13 00:15:47.181072 locksmithd[1447]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 13 00:15:47.253895 containerd[1444]: time="2025-09-13T00:15:47.253800873Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 13 00:15:47.285410 containerd[1444]: time="2025-09-13T00:15:47.285353073Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:15:47.286959 containerd[1444]: time="2025-09-13T00:15:47.286914873Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:15:47.286994 containerd[1444]: time="2025-09-13T00:15:47.286967513Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 13 00:15:47.287013 containerd[1444]: time="2025-09-13T00:15:47.286997113Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 13 00:15:47.287191 containerd[1444]: time="2025-09-13T00:15:47.287168393Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 13 00:15:47.287215 containerd[1444]: time="2025-09-13T00:15:47.287197713Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 13 00:15:47.287276 containerd[1444]: time="2025-09-13T00:15:47.287256513Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:15:47.287276 containerd[1444]: time="2025-09-13T00:15:47.287273233Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:15:47.287474 containerd[1444]: time="2025-09-13T00:15:47.287450753Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:15:47.287500 containerd[1444]: time="2025-09-13T00:15:47.287472753Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 13 00:15:47.287667 containerd[1444]: time="2025-09-13T00:15:47.287650513Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:15:47.287693 containerd[1444]: time="2025-09-13T00:15:47.287668353Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 13 00:15:47.287778 containerd[1444]: time="2025-09-13T00:15:47.287759673Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:15:47.287984 containerd[1444]: time="2025-09-13T00:15:47.287962473Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:15:47.288086 containerd[1444]: time="2025-09-13T00:15:47.288066753Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:15:47.288086 containerd[1444]: time="2025-09-13T00:15:47.288084433Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 13 00:15:47.288183 containerd[1444]: time="2025-09-13T00:15:47.288165433Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 13 00:15:47.288230 containerd[1444]: time="2025-09-13T00:15:47.288214073Z" level=info msg="metadata content store policy set" policy=shared Sep 13 00:15:47.293338 containerd[1444]: time="2025-09-13T00:15:47.293286753Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 13 00:15:47.293396 containerd[1444]: time="2025-09-13T00:15:47.293375153Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 13 00:15:47.293421 containerd[1444]: time="2025-09-13T00:15:47.293397233Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 13 00:15:47.293442 containerd[1444]: time="2025-09-13T00:15:47.293424793Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 13 00:15:47.293479 containerd[1444]: time="2025-09-13T00:15:47.293440593Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 13 00:15:47.293818 containerd[1444]: time="2025-09-13T00:15:47.293793393Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 13 00:15:47.294280 containerd[1444]: time="2025-09-13T00:15:47.294254713Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 13 00:15:47.294534 containerd[1444]: time="2025-09-13T00:15:47.294509753Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 13 00:15:47.294560 containerd[1444]: time="2025-09-13T00:15:47.294551873Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 13 00:15:47.294585 containerd[1444]: time="2025-09-13T00:15:47.294574153Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 13 00:15:47.295600 containerd[1444]: time="2025-09-13T00:15:47.294667833Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Sep 13 00:15:47.295600 containerd[1444]: time="2025-09-13T00:15:47.294692873Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 13 00:15:47.295600 containerd[1444]: time="2025-09-13T00:15:47.294707313Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 13 00:15:47.295600 containerd[1444]: time="2025-09-13T00:15:47.294734073Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 13 00:15:47.295600 containerd[1444]: time="2025-09-13T00:15:47.294751073Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 13 00:15:47.295600 containerd[1444]: time="2025-09-13T00:15:47.294765193Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 13 00:15:47.295600 containerd[1444]: time="2025-09-13T00:15:47.294779873Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 13 00:15:47.295600 containerd[1444]: time="2025-09-13T00:15:47.294791633Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 13 00:15:47.295600 containerd[1444]: time="2025-09-13T00:15:47.294857273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 13 00:15:47.295600 containerd[1444]: time="2025-09-13T00:15:47.294925393Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 13 00:15:47.295600 containerd[1444]: time="2025-09-13T00:15:47.294942033Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 13 00:15:47.295600 containerd[1444]: time="2025-09-13T00:15:47.294955473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 13 00:15:47.295600 containerd[1444]: time="2025-09-13T00:15:47.294968153Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 13 00:15:47.295600 containerd[1444]: time="2025-09-13T00:15:47.295031193Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 13 00:15:47.295871 containerd[1444]: time="2025-09-13T00:15:47.295050033Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 13 00:15:47.295871 containerd[1444]: time="2025-09-13T00:15:47.295063553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 13 00:15:47.295871 containerd[1444]: time="2025-09-13T00:15:47.295076473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 13 00:15:47.295871 containerd[1444]: time="2025-09-13T00:15:47.295108113Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 13 00:15:47.295871 containerd[1444]: time="2025-09-13T00:15:47.295126633Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 13 00:15:47.295871 containerd[1444]: time="2025-09-13T00:15:47.295139073Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Sep 13 00:15:47.295871 containerd[1444]: time="2025-09-13T00:15:47.295156273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 13 00:15:47.295871 containerd[1444]: time="2025-09-13T00:15:47.295173873Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 13 00:15:47.295871 containerd[1444]: time="2025-09-13T00:15:47.295262113Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 13 00:15:47.295871 containerd[1444]: time="2025-09-13T00:15:47.295276273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 13 00:15:47.295871 containerd[1444]: time="2025-09-13T00:15:47.295295873Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 13 00:15:47.296346 containerd[1444]: time="2025-09-13T00:15:47.296314233Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 13 00:15:47.296690 containerd[1444]: time="2025-09-13T00:15:47.296556553Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 13 00:15:47.296719 containerd[1444]: time="2025-09-13T00:15:47.296689833Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 13 00:15:47.296782 containerd[1444]: time="2025-09-13T00:15:47.296707433Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 13 00:15:47.296808 containerd[1444]: time="2025-09-13T00:15:47.296780753Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 13 00:15:47.296808 containerd[1444]: time="2025-09-13T00:15:47.296797593Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 13 00:15:47.296842 containerd[1444]: time="2025-09-13T00:15:47.296808713Z" level=info msg="NRI interface is disabled by configuration." Sep 13 00:15:47.296894 containerd[1444]: time="2025-09-13T00:15:47.296873793Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 13 00:15:47.297416 containerd[1444]: time="2025-09-13T00:15:47.297347953Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 13 00:15:47.297567 containerd[1444]: time="2025-09-13T00:15:47.297503913Z" level=info msg="Connect containerd service" Sep 13 00:15:47.297620 containerd[1444]: time="2025-09-13T00:15:47.297591593Z" level=info msg="using legacy CRI server" Sep 13 00:15:47.297656 containerd[1444]: time="2025-09-13T00:15:47.297618393Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 13 00:15:47.297881 containerd[1444]: time="2025-09-13T00:15:47.297824433Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 13 00:15:47.301916 containerd[1444]: time="2025-09-13T00:15:47.299900193Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:15:47.301916 
containerd[1444]: time="2025-09-13T00:15:47.300043313Z" level=info msg="Start subscribing containerd event" Sep 13 00:15:47.301916 containerd[1444]: time="2025-09-13T00:15:47.300103153Z" level=info msg="Start recovering state" Sep 13 00:15:47.301916 containerd[1444]: time="2025-09-13T00:15:47.300170593Z" level=info msg="Start event monitor" Sep 13 00:15:47.301916 containerd[1444]: time="2025-09-13T00:15:47.300190033Z" level=info msg="Start snapshots syncer" Sep 13 00:15:47.301916 containerd[1444]: time="2025-09-13T00:15:47.300199353Z" level=info msg="Start cni network conf syncer for default" Sep 13 00:15:47.301916 containerd[1444]: time="2025-09-13T00:15:47.300206953Z" level=info msg="Start streaming server" Sep 13 00:15:47.301916 containerd[1444]: time="2025-09-13T00:15:47.300784313Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 13 00:15:47.301916 containerd[1444]: time="2025-09-13T00:15:47.300822993Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 13 00:15:47.301916 containerd[1444]: time="2025-09-13T00:15:47.300874633Z" level=info msg="containerd successfully booted in 0.047927s" Sep 13 00:15:47.300972 systemd[1]: Started containerd.service - containerd container runtime. Sep 13 00:15:47.501787 tar[1434]: linux-arm64/README.md Sep 13 00:15:47.515306 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 13 00:15:47.709945 sshd_keygen[1431]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 13 00:15:47.730685 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 13 00:15:47.741882 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 13 00:15:47.748731 systemd[1]: issuegen.service: Deactivated successfully. Sep 13 00:15:47.748926 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 13 00:15:47.751450 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 13 00:15:47.766886 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 13 00:15:47.776952 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 13 00:15:47.778974 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 13 00:15:47.780074 systemd[1]: Reached target getty.target - Login Prompts. Sep 13 00:15:48.391856 systemd-networkd[1380]: eth0: Gained IPv6LL Sep 13 00:15:48.395682 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 13 00:15:48.398220 systemd[1]: Reached target network-online.target - Network is Online. Sep 13 00:15:48.408933 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 13 00:15:48.412245 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:15:48.414845 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 13 00:15:48.445020 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 13 00:15:48.445196 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 13 00:15:48.448616 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 13 00:15:48.450665 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 13 00:15:49.025514 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:15:49.027284 systemd[1]: Reached target multi-user.target - Multi-User System. 
Sep 13 00:15:49.029424 systemd[1]: Startup finished in 541ms (kernel) + 5.124s (initrd) + 3.835s (userspace) = 9.500s. Sep 13 00:15:49.031807 (kubelet)[1523]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:15:49.427846 kubelet[1523]: E0913 00:15:49.427715 1523 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:15:49.430593 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:15:49.430831 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:15:53.329163 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 13 00:15:53.330197 systemd[1]: Started sshd@0-10.0.0.130:22-10.0.0.1:54242.service - OpenSSH per-connection server daemon (10.0.0.1:54242). Sep 13 00:15:53.383200 sshd[1536]: Accepted publickey for core from 10.0.0.1 port 54242 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw Sep 13 00:15:53.384868 sshd[1536]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:15:53.393537 systemd-logind[1419]: New session 1 of user core. Sep 13 00:15:53.394610 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 13 00:15:53.401852 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 13 00:15:53.412538 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 13 00:15:53.415728 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 13 00:15:53.422539 (systemd)[1540]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:15:53.510410 systemd[1540]: Queued start job for default target default.target. Sep 13 00:15:53.521846 systemd[1540]: Created slice app.slice - User Application Slice. Sep 13 00:15:53.521876 systemd[1540]: Reached target paths.target - Paths. Sep 13 00:15:53.521889 systemd[1540]: Reached target timers.target - Timers. Sep 13 00:15:53.523129 systemd[1540]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 13 00:15:53.536061 systemd[1540]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 13 00:15:53.536170 systemd[1540]: Reached target sockets.target - Sockets. Sep 13 00:15:53.536187 systemd[1540]: Reached target basic.target - Basic System. Sep 13 00:15:53.536219 systemd[1540]: Reached target default.target - Main User Target. Sep 13 00:15:53.536244 systemd[1540]: Startup finished in 108ms. Sep 13 00:15:53.536431 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 13 00:15:53.537786 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 13 00:15:53.604902 systemd[1]: Started sshd@1-10.0.0.130:22-10.0.0.1:54254.service - OpenSSH per-connection server daemon (10.0.0.1:54254). Sep 13 00:15:53.648010 sshd[1551]: Accepted publickey for core from 10.0.0.1 port 54254 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw Sep 13 00:15:53.649235 sshd[1551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:15:53.652952 systemd-logind[1419]: New session 2 of user core. Sep 13 00:15:53.661758 systemd[1]: Started session-2.scope - Session 2 of User core. 
Sep 13 00:15:53.714588 sshd[1551]: pam_unix(sshd:session): session closed for user core Sep 13 00:15:53.731930 systemd[1]: sshd@1-10.0.0.130:22-10.0.0.1:54254.service: Deactivated successfully. Sep 13 00:15:53.733226 systemd[1]: session-2.scope: Deactivated successfully. Sep 13 00:15:53.735617 systemd-logind[1419]: Session 2 logged out. Waiting for processes to exit. Sep 13 00:15:53.736820 systemd[1]: Started sshd@2-10.0.0.130:22-10.0.0.1:54268.service - OpenSSH per-connection server daemon (10.0.0.1:54268). Sep 13 00:15:53.737572 systemd-logind[1419]: Removed session 2. Sep 13 00:15:53.770948 sshd[1558]: Accepted publickey for core from 10.0.0.1 port 54268 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw Sep 13 00:15:53.772173 sshd[1558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:15:53.775904 systemd-logind[1419]: New session 3 of user core. Sep 13 00:15:53.790736 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 13 00:15:53.840262 sshd[1558]: pam_unix(sshd:session): session closed for user core Sep 13 00:15:53.862672 systemd[1]: sshd@2-10.0.0.130:22-10.0.0.1:54268.service: Deactivated successfully. Sep 13 00:15:53.866195 systemd[1]: session-3.scope: Deactivated successfully. Sep 13 00:15:53.868966 systemd-logind[1419]: Session 3 logged out. Waiting for processes to exit. Sep 13 00:15:53.878883 systemd[1]: Started sshd@3-10.0.0.130:22-10.0.0.1:54272.service - OpenSSH per-connection server daemon (10.0.0.1:54272). Sep 13 00:15:53.880078 systemd-logind[1419]: Removed session 3. Sep 13 00:15:53.909022 sshd[1565]: Accepted publickey for core from 10.0.0.1 port 54272 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw Sep 13 00:15:53.910267 sshd[1565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:15:53.914536 systemd-logind[1419]: New session 4 of user core. Sep 13 00:15:53.926788 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 13 00:15:53.979724 sshd[1565]: pam_unix(sshd:session): session closed for user core Sep 13 00:15:53.992967 systemd[1]: sshd@3-10.0.0.130:22-10.0.0.1:54272.service: Deactivated successfully. Sep 13 00:15:53.994226 systemd[1]: session-4.scope: Deactivated successfully. Sep 13 00:15:53.998842 systemd-logind[1419]: Session 4 logged out. Waiting for processes to exit. Sep 13 00:15:54.013178 systemd[1]: Started sshd@4-10.0.0.130:22-10.0.0.1:54274.service - OpenSSH per-connection server daemon (10.0.0.1:54274). Sep 13 00:15:54.018618 systemd-logind[1419]: Removed session 4. Sep 13 00:15:54.047144 sshd[1572]: Accepted publickey for core from 10.0.0.1 port 54274 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw Sep 13 00:15:54.048407 sshd[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:15:54.052555 systemd-logind[1419]: New session 5 of user core. Sep 13 00:15:54.067759 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 13 00:15:54.125844 sudo[1575]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 13 00:15:54.126242 sudo[1575]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:15:54.145245 sudo[1575]: pam_unix(sudo:session): session closed for user root Sep 13 00:15:54.149149 sshd[1572]: pam_unix(sshd:session): session closed for user core Sep 13 00:15:54.163314 systemd[1]: sshd@4-10.0.0.130:22-10.0.0.1:54274.service: Deactivated successfully. 
Sep 13 00:15:54.165001 systemd[1]: session-5.scope: Deactivated successfully. Sep 13 00:15:54.166605 systemd-logind[1419]: Session 5 logged out. Waiting for processes to exit. Sep 13 00:15:54.177980 systemd[1]: Started sshd@5-10.0.0.130:22-10.0.0.1:54280.service - OpenSSH per-connection server daemon (10.0.0.1:54280). Sep 13 00:15:54.178780 systemd-logind[1419]: Removed session 5. Sep 13 00:15:54.208260 sshd[1580]: Accepted publickey for core from 10.0.0.1 port 54280 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw Sep 13 00:15:54.209515 sshd[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:15:54.213952 systemd-logind[1419]: New session 6 of user core. Sep 13 00:15:54.221737 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 13 00:15:54.272207 sudo[1584]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 13 00:15:54.272490 sudo[1584]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:15:54.275397 sudo[1584]: pam_unix(sudo:session): session closed for user root Sep 13 00:15:54.279851 sudo[1583]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 13 00:15:54.280113 sudo[1583]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:15:54.305820 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 13 00:15:54.307403 auditctl[1587]: No rules Sep 13 00:15:54.308197 systemd[1]: audit-rules.service: Deactivated successfully. Sep 13 00:15:54.308397 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 13 00:15:54.310931 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 13 00:15:54.333274 augenrules[1605]: No rules Sep 13 00:15:54.336500 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 13 00:15:54.340864 sudo[1583]: pam_unix(sudo:session): session closed for user root Sep 13 00:15:54.342240 sshd[1580]: pam_unix(sshd:session): session closed for user core Sep 13 00:15:54.344997 systemd[1]: sshd@5-10.0.0.130:22-10.0.0.1:54280.service: Deactivated successfully. Sep 13 00:15:54.347876 systemd[1]: session-6.scope: Deactivated successfully. Sep 13 00:15:54.361337 systemd-logind[1419]: Session 6 logged out. Waiting for processes to exit. Sep 13 00:15:54.363040 systemd[1]: Started sshd@6-10.0.0.130:22-10.0.0.1:54294.service - OpenSSH per-connection server daemon (10.0.0.1:54294). Sep 13 00:15:54.364159 systemd-logind[1419]: Removed session 6. Sep 13 00:15:54.400819 sshd[1613]: Accepted publickey for core from 10.0.0.1 port 54294 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw Sep 13 00:15:54.401105 sshd[1613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:15:54.406789 systemd-logind[1419]: New session 7 of user core. Sep 13 00:15:54.416758 systemd[1]: Started session-7.scope - Session 7 of User core. 
Sep 13 00:15:54.469487 sudo[1616]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 13 00:15:54.470110 sudo[1616]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:15:54.738977 (dockerd)[1635]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 13 00:15:54.739199 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 13 00:15:54.951464 dockerd[1635]: time="2025-09-13T00:15:54.951369073Z" level=info msg="Starting up" Sep 13 00:15:55.090495 dockerd[1635]: time="2025-09-13T00:15:55.090182953Z" level=info msg="Loading containers: start." Sep 13 00:15:55.167627 kernel: Initializing XFRM netlink socket Sep 13 00:15:55.224789 systemd-networkd[1380]: docker0: Link UP Sep 13 00:15:55.246938 dockerd[1635]: time="2025-09-13T00:15:55.246882113Z" level=info msg="Loading containers: done." Sep 13 00:15:55.258003 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1247796876-merged.mount: Deactivated successfully. Sep 13 00:15:55.259037 dockerd[1635]: time="2025-09-13T00:15:55.258990473Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 13 00:15:55.259094 dockerd[1635]: time="2025-09-13T00:15:55.259081873Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 13 00:15:55.259201 dockerd[1635]: time="2025-09-13T00:15:55.259183353Z" level=info msg="Daemon has completed initialization" Sep 13 00:15:55.286009 dockerd[1635]: time="2025-09-13T00:15:55.285795553Z" level=info msg="API listen on /run/docker.sock" Sep 13 00:15:55.286102 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 13 00:15:56.027957 containerd[1444]: time="2025-09-13T00:15:56.027920713Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Sep 13 00:15:56.628116 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3140185340.mount: Deactivated successfully. 
Sep 13 00:15:57.523316 containerd[1444]: time="2025-09-13T00:15:57.523262073Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:15:57.525212 containerd[1444]: time="2025-09-13T00:15:57.525172633Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=27390230" Sep 13 00:15:57.526243 containerd[1444]: time="2025-09-13T00:15:57.526207913Z" level=info msg="ImageCreate event name:\"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:15:57.529053 containerd[1444]: time="2025-09-13T00:15:57.529004673Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:15:57.530101 containerd[1444]: time="2025-09-13T00:15:57.530057753Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"27386827\" in 1.50209824s" Sep 13 00:15:57.530166 containerd[1444]: time="2025-09-13T00:15:57.530102633Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\"" Sep 13 00:15:57.531721 containerd[1444]: time="2025-09-13T00:15:57.531693713Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Sep 13 00:15:58.621555 containerd[1444]: time="2025-09-13T00:15:58.621478393Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:15:58.635093 containerd[1444]: time="2025-09-13T00:15:58.635050153Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=23547919" Sep 13 00:15:58.636221 containerd[1444]: time="2025-09-13T00:15:58.636185593Z" level=info msg="ImageCreate event name:\"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:15:58.639056 containerd[1444]: time="2025-09-13T00:15:58.639007393Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:15:58.640365 containerd[1444]: time="2025-09-13T00:15:58.640241273Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"25135832\" in 1.10849932s" Sep 13 00:15:58.640365 containerd[1444]: time="2025-09-13T00:15:58.640274513Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\"" Sep 13 00:15:58.640858 
containerd[1444]: time="2025-09-13T00:15:58.640828713Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Sep 13 00:15:59.670966 containerd[1444]: time="2025-09-13T00:15:59.670907273Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:15:59.672001 containerd[1444]: time="2025-09-13T00:15:59.671970153Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=18295979" Sep 13 00:15:59.672538 containerd[1444]: time="2025-09-13T00:15:59.672505473Z" level=info msg="ImageCreate event name:\"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:15:59.676722 containerd[1444]: time="2025-09-13T00:15:59.676683113Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:15:59.677723 containerd[1444]: time="2025-09-13T00:15:59.677688833Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"19883910\" in 1.036752s" Sep 13 00:15:59.677723 containerd[1444]: time="2025-09-13T00:15:59.677724513Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\"" Sep 13 00:15:59.678400 containerd[1444]: time="2025-09-13T00:15:59.678375593Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Sep 13 00:15:59.681075 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 13 00:15:59.692826 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:15:59.799367 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:15:59.803391 (kubelet)[1856]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:15:59.839327 kubelet[1856]: E0913 00:15:59.839267 1856 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:15:59.842739 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:15:59.842895 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:16:00.809682 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3709241540.mount: Deactivated successfully. 
Sep 13 00:16:01.231522 containerd[1444]: time="2025-09-13T00:16:01.231385473Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:01.232616 containerd[1444]: time="2025-09-13T00:16:01.232417673Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=28240108" Sep 13 00:16:01.233352 containerd[1444]: time="2025-09-13T00:16:01.233294593Z" level=info msg="ImageCreate event name:\"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:01.235114 containerd[1444]: time="2025-09-13T00:16:01.235081193Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:01.235842 containerd[1444]: time="2025-09-13T00:16:01.235817273Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"28239125\" in 1.55740748s" Sep 13 00:16:01.235877 containerd[1444]: time="2025-09-13T00:16:01.235849193Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\"" Sep 13 00:16:01.236373 containerd[1444]: time="2025-09-13T00:16:01.236349753Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 13 00:16:01.836911 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2343649359.mount: Deactivated successfully. 
Sep 13 00:16:02.636269 containerd[1444]: time="2025-09-13T00:16:02.636220833Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:02.637249 containerd[1444]: time="2025-09-13T00:16:02.636978793Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152119" Sep 13 00:16:02.638006 containerd[1444]: time="2025-09-13T00:16:02.637978393Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:02.641537 containerd[1444]: time="2025-09-13T00:16:02.641500353Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:02.642808 containerd[1444]: time="2025-09-13T00:16:02.642777633Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.4063936s" Sep 13 00:16:02.642869 containerd[1444]: time="2025-09-13T00:16:02.642810353Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Sep 13 00:16:02.644057 containerd[1444]: time="2025-09-13T00:16:02.643900913Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 13 00:16:03.079093 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1134553869.mount: Deactivated successfully. 
Sep 13 00:16:03.083795 containerd[1444]: time="2025-09-13T00:16:03.083754153Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:03.084920 containerd[1444]: time="2025-09-13T00:16:03.084886993Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Sep 13 00:16:03.087137 containerd[1444]: time="2025-09-13T00:16:03.087107433Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:03.089506 containerd[1444]: time="2025-09-13T00:16:03.089461073Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:03.090701 containerd[1444]: time="2025-09-13T00:16:03.090669593Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 446.73872ms" Sep 13 00:16:03.090755 containerd[1444]: time="2025-09-13T00:16:03.090707033Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 13 00:16:03.091454 containerd[1444]: time="2025-09-13T00:16:03.091438913Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 13 00:16:03.601313 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3265586677.mount: Deactivated successfully. Sep 13 00:16:05.646056 containerd[1444]: time="2025-09-13T00:16:05.646005753Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:05.647756 containerd[1444]: time="2025-09-13T00:16:05.647722553Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69465859" Sep 13 00:16:05.650645 containerd[1444]: time="2025-09-13T00:16:05.649238953Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:05.652255 containerd[1444]: time="2025-09-13T00:16:05.652208273Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:05.653735 containerd[1444]: time="2025-09-13T00:16:05.653706313Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.56224064s" Sep 13 00:16:05.653781 containerd[1444]: time="2025-09-13T00:16:05.653739593Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Sep 13 00:16:10.093183 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Sep 13 00:16:10.102773 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:16:10.200151 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:16:10.204068 (kubelet)[2017]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:16:10.235693 kubelet[2017]: E0913 00:16:10.235625 2017 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:16:10.238323 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:16:10.238464 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:16:10.765361 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:16:10.781809 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:16:10.804931 systemd[1]: Reloading requested from client PID 2033 ('systemctl') (unit session-7.scope)... Sep 13 00:16:10.804945 systemd[1]: Reloading... Sep 13 00:16:10.875636 zram_generator::config[2072]: No configuration found. Sep 13 00:16:11.038066 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:16:11.093386 systemd[1]: Reloading finished in 288 ms. Sep 13 00:16:11.140925 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:16:11.143651 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 00:16:11.143848 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:16:11.145385 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:16:11.247823 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:16:11.252474 (kubelet)[2119]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 13 00:16:11.283003 kubelet[2119]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:16:11.283003 kubelet[2119]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 13 00:16:11.283003 kubelet[2119]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 13 00:16:11.283344 kubelet[2119]: I0913 00:16:11.283047 2119 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:16:12.260446 kubelet[2119]: I0913 00:16:12.260387 2119 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 13 00:16:12.260446 kubelet[2119]: I0913 00:16:12.260419 2119 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:16:12.260650 kubelet[2119]: I0913 00:16:12.260636 2119 server.go:956] "Client rotation is on, will bootstrap in background" Sep 13 00:16:12.283319 kubelet[2119]: E0913 00:16:12.282856 2119 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.130:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 13 00:16:12.283984 kubelet[2119]: I0913 00:16:12.283962 2119 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:16:12.295295 kubelet[2119]: E0913 00:16:12.295218 2119 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:16:12.295295 kubelet[2119]: I0913 00:16:12.295284 2119 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:16:12.298127 kubelet[2119]: I0913 00:16:12.298095 2119 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 13 00:16:12.298455 kubelet[2119]: I0913 00:16:12.298417 2119 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:16:12.298629 kubelet[2119]: I0913 00:16:12.298448 2119 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 13 00:16:12.298728 kubelet[2119]: I0913 00:16:12.298693 2119 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:16:12.298728 kubelet[2119]: I0913 00:16:12.298704 2119 container_manager_linux.go:303] "Creating device plugin manager" Sep 13 00:16:12.300112 kubelet[2119]: I0913 00:16:12.300083 2119 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:16:12.303134 kubelet[2119]: I0913 00:16:12.303097 2119 kubelet.go:480] "Attempting to sync node with API server" Sep 13 00:16:12.303134 kubelet[2119]: I0913 00:16:12.303124 2119 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:16:12.304046 kubelet[2119]: I0913 00:16:12.303154 2119 kubelet.go:386] "Adding apiserver pod source" Sep 13 00:16:12.304046 kubelet[2119]: I0913 00:16:12.303165 2119 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:16:12.304046 kubelet[2119]: E0913 00:16:12.303832 2119 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.130:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 13 00:16:12.304467 kubelet[2119]: I0913 00:16:12.304430 2119 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 13 00:16:12.304694 kubelet[2119]: E0913 00:16:12.304666 2119 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get 
\"https://10.0.0.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 13 00:16:12.305340 kubelet[2119]: I0913 00:16:12.305301 2119 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 13 00:16:12.305447 kubelet[2119]: W0913 00:16:12.305435 2119 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 13 00:16:12.308278 kubelet[2119]: I0913 00:16:12.308241 2119 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 13 00:16:12.308350 kubelet[2119]: I0913 00:16:12.308291 2119 server.go:1289] "Started kubelet" Sep 13 00:16:12.309214 kubelet[2119]: I0913 00:16:12.308716 2119 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:16:12.309214 kubelet[2119]: I0913 00:16:12.309026 2119 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:16:12.309214 kubelet[2119]: I0913 00:16:12.309075 2119 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:16:12.309503 kubelet[2119]: I0913 00:16:12.309478 2119 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:16:12.309990 kubelet[2119]: I0913 00:16:12.309966 2119 server.go:317] "Adding debug handlers to kubelet server" Sep 13 00:16:12.314627 kubelet[2119]: I0913 00:16:12.313060 2119 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:16:12.314627 kubelet[2119]: E0913 00:16:12.313387 2119 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:16:12.314627 kubelet[2119]: I0913 00:16:12.313414 2119 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 13 00:16:12.314627 kubelet[2119]: I0913 00:16:12.314352 2119 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 13 00:16:12.314627 kubelet[2119]: I0913 00:16:12.314451 2119 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:16:12.314893 kubelet[2119]: E0913 00:16:12.314858 2119 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 13 00:16:12.314893 kubelet[2119]: E0913 00:16:12.313287 2119 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.130:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.130:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1864af5de01ea539 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-13 00:16:12.308260153 +0000 UTC m=+1.052424241,LastTimestamp:2025-09-13 00:16:12.308260153 +0000 UTC m=+1.052424241,Count:1,Type:Normal,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 13 00:16:12.315193 kubelet[2119]: E0913 00:16:12.315141 2119 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.130:6443: connect: connection refused" interval="200ms" Sep 13 00:16:12.315731 kubelet[2119]: I0913 00:16:12.315703 2119 factory.go:223] Registration of the systemd container factory successfully Sep 13 00:16:12.315830 kubelet[2119]: I0913 00:16:12.315812 2119 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:16:12.317131 kubelet[2119]: E0913 00:16:12.317107 2119 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:16:12.317419 kubelet[2119]: I0913 00:16:12.317138 2119 factory.go:223] Registration of the containerd container factory successfully Sep 13 00:16:12.319015 kubelet[2119]: I0913 00:16:12.318970 2119 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 13 00:16:12.333102 kubelet[2119]: I0913 00:16:12.333076 2119 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 13 00:16:12.333102 kubelet[2119]: I0913 00:16:12.333091 2119 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 13 00:16:12.333102 kubelet[2119]: I0913 00:16:12.333109 2119 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:16:12.334667 kubelet[2119]: I0913 00:16:12.334358 2119 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 13 00:16:12.334667 kubelet[2119]: I0913 00:16:12.334389 2119 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 13 00:16:12.334667 kubelet[2119]: I0913 00:16:12.334407 2119 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 13 00:16:12.334667 kubelet[2119]: I0913 00:16:12.334415 2119 kubelet.go:2436] "Starting kubelet main sync loop" Sep 13 00:16:12.334667 kubelet[2119]: E0913 00:16:12.334464 2119 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:16:12.409450 kubelet[2119]: E0913 00:16:12.409397 2119 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 13 00:16:12.409629 kubelet[2119]: I0913 00:16:12.409612 2119 policy_none.go:49] "None policy: Start" Sep 13 00:16:12.409671 kubelet[2119]: I0913 00:16:12.409637 2119 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 13 00:16:12.409671 kubelet[2119]: I0913 00:16:12.409650 2119 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:16:12.414469 kubelet[2119]: E0913 00:16:12.414438 2119 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:16:12.414723 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Sep 13 00:16:12.431697 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 13 00:16:12.434424 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 13 00:16:12.434632 kubelet[2119]: E0913 00:16:12.434589 2119 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 13 00:16:12.445454 kubelet[2119]: E0913 00:16:12.445416 2119 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 13 00:16:12.445676 kubelet[2119]: I0913 00:16:12.445653 2119 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:16:12.445735 kubelet[2119]: I0913 00:16:12.445671 2119 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:16:12.446083 kubelet[2119]: I0913 00:16:12.445950 2119 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:16:12.447144 kubelet[2119]: E0913 00:16:12.447066 2119 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 13 00:16:12.447144 kubelet[2119]: E0913 00:16:12.447110 2119 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 13 00:16:12.516694 kubelet[2119]: E0913 00:16:12.516552 2119 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.130:6443: connect: connection refused" interval="400ms" Sep 13 00:16:12.547695 kubelet[2119]: I0913 00:16:12.547608 2119 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 00:16:12.548122 kubelet[2119]: E0913 00:16:12.548076 2119 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.130:6443/api/v1/nodes\": dial tcp 10.0.0.130:6443: connect: connection refused" node="localhost" Sep 13 00:16:12.645380 systemd[1]: Created slice kubepods-burstable-pod43a4666890f64ea901ad1a607b6ddcc4.slice - libcontainer container kubepods-burstable-pod43a4666890f64ea901ad1a607b6ddcc4.slice. Sep 13 00:16:12.664958 kubelet[2119]: E0913 00:16:12.664865 2119 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 00:16:12.667297 systemd[1]: Created slice kubepods-burstable-podb678d5c6713e936e66aa5bb73166297e.slice - libcontainer container kubepods-burstable-podb678d5c6713e936e66aa5bb73166297e.slice. Sep 13 00:16:12.680639 kubelet[2119]: E0913 00:16:12.680550 2119 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 00:16:12.683020 systemd[1]: Created slice kubepods-burstable-pod7b968cf906b2d9d713a362c43868bef2.slice - libcontainer container kubepods-burstable-pod7b968cf906b2d9d713a362c43868bef2.slice. 
Sep 13 00:16:12.684447 kubelet[2119]: E0913 00:16:12.684422 2119 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 00:16:12.715716 kubelet[2119]: I0913 00:16:12.715684 2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b968cf906b2d9d713a362c43868bef2-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"7b968cf906b2d9d713a362c43868bef2\") " pod="kube-system/kube-scheduler-localhost" Sep 13 00:16:12.715716 kubelet[2119]: I0913 00:16:12.715717 2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/43a4666890f64ea901ad1a607b6ddcc4-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"43a4666890f64ea901ad1a607b6ddcc4\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:16:12.715831 kubelet[2119]: I0913 00:16:12.715749 2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/43a4666890f64ea901ad1a607b6ddcc4-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"43a4666890f64ea901ad1a607b6ddcc4\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:16:12.715831 kubelet[2119]: I0913 00:16:12.715767 2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:16:12.715831 kubelet[2119]: I0913 00:16:12.715787 2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/43a4666890f64ea901ad1a607b6ddcc4-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"43a4666890f64ea901ad1a607b6ddcc4\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:16:12.715831 kubelet[2119]: I0913 00:16:12.715803 2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:16:12.715831 kubelet[2119]: I0913 00:16:12.715818 2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:16:12.715935 kubelet[2119]: I0913 00:16:12.715833 2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:16:12.715935 kubelet[2119]: I0913 00:16:12.715850 2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:16:12.750013 kubelet[2119]: I0913 00:16:12.749840 2119 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 00:16:12.750216 kubelet[2119]: E0913 00:16:12.750186 2119 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.130:6443/api/v1/nodes\": dial tcp 10.0.0.130:6443: connect: connection refused" node="localhost" Sep 13 00:16:12.917494 kubelet[2119]: E0913 00:16:12.917350 2119 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.130:6443: connect: connection refused" interval="800ms" Sep 13 00:16:12.965607 kubelet[2119]: E0913 00:16:12.965562 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:12.966281 containerd[1444]: time="2025-09-13T00:16:12.966163873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:43a4666890f64ea901ad1a607b6ddcc4,Namespace:kube-system,Attempt:0,}" Sep 13 00:16:12.981643 kubelet[2119]: E0913 00:16:12.981381 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:12.981796 containerd[1444]: time="2025-09-13T00:16:12.981729433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b678d5c6713e936e66aa5bb73166297e,Namespace:kube-system,Attempt:0,}" Sep 13 00:16:12.985109 kubelet[2119]: E0913 00:16:12.985074 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:12.985622 containerd[1444]: time="2025-09-13T00:16:12.985416113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:7b968cf906b2d9d713a362c43868bef2,Namespace:kube-system,Attempt:0,}" Sep 13 00:16:13.152039 kubelet[2119]: I0913 00:16:13.151996 2119 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 00:16:13.152374 kubelet[2119]: E0913 00:16:13.152351 2119 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.130:6443/api/v1/nodes\": dial tcp 10.0.0.130:6443: connect: connection refused" node="localhost" Sep 13 00:16:13.276469 kubelet[2119]: E0913 00:16:13.276354 2119 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.130:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 13 00:16:13.445908 kubelet[2119]: E0913 00:16:13.445865 2119 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 13 00:16:13.521264 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1477300411.mount: Deactivated successfully. Sep 13 00:16:13.528047 containerd[1444]: time="2025-09-13T00:16:13.527940153Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:16:13.529357 containerd[1444]: time="2025-09-13T00:16:13.529318113Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Sep 13 00:16:13.529813 containerd[1444]: time="2025-09-13T00:16:13.529774153Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:16:13.530636 containerd[1444]: time="2025-09-13T00:16:13.530606433Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:16:13.530803 containerd[1444]: time="2025-09-13T00:16:13.530770793Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 13 00:16:13.531783 containerd[1444]: time="2025-09-13T00:16:13.531748753Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:16:13.532449 containerd[1444]: time="2025-09-13T00:16:13.532410273Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 13 00:16:13.534497 containerd[1444]: time="2025-09-13T00:16:13.534069073Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:16:13.537313 containerd[1444]: time="2025-09-13T00:16:13.537196873Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 570.94432ms" Sep 13 00:16:13.538636 containerd[1444]: time="2025-09-13T00:16:13.538601433Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 553.12096ms" Sep 13 00:16:13.541048 containerd[1444]: time="2025-09-13T00:16:13.541014913Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 559.22396ms" Sep 13 00:16:13.543254 kubelet[2119]: E0913 00:16:13.543200 2119 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get 
\"https://10.0.0.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 13 00:16:13.649540 containerd[1444]: time="2025-09-13T00:16:13.649433393Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:16:13.649770 containerd[1444]: time="2025-09-13T00:16:13.649549833Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:16:13.649770 containerd[1444]: time="2025-09-13T00:16:13.649568633Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:16:13.649770 containerd[1444]: time="2025-09-13T00:16:13.649692513Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:16:13.650315 containerd[1444]: time="2025-09-13T00:16:13.649979953Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:16:13.650550 containerd[1444]: time="2025-09-13T00:16:13.650501113Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:16:13.652088 containerd[1444]: time="2025-09-13T00:16:13.652025433Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:16:13.652240 containerd[1444]: time="2025-09-13T00:16:13.652178553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:16:13.652757 containerd[1444]: time="2025-09-13T00:16:13.652506193Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:16:13.652757 containerd[1444]: time="2025-09-13T00:16:13.652556313Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:16:13.652757 containerd[1444]: time="2025-09-13T00:16:13.652578713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:16:13.652757 containerd[1444]: time="2025-09-13T00:16:13.652695833Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:16:13.678812 systemd[1]: Started cri-containerd-6956b54333ba1d8559e7eb838af8b71117c4bb4d97aa0ec5ad2031c367b5dc91.scope - libcontainer container 6956b54333ba1d8559e7eb838af8b71117c4bb4d97aa0ec5ad2031c367b5dc91. Sep 13 00:16:13.680454 systemd[1]: Started cri-containerd-7cc6da5d0448f618974ba604c977f8b80e3f5d3804b2e9cab50e4028e94458d2.scope - libcontainer container 7cc6da5d0448f618974ba604c977f8b80e3f5d3804b2e9cab50e4028e94458d2. Sep 13 00:16:13.682334 systemd[1]: Started cri-containerd-c1d3376d02ee2478fb3d1c069e66e9956d8d6f81c90eb0b07ceed246be644970.scope - libcontainer container c1d3376d02ee2478fb3d1c069e66e9956d8d6f81c90eb0b07ceed246be644970. 
Sep 13 00:16:13.717407 containerd[1444]: time="2025-09-13T00:16:13.717366553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:7b968cf906b2d9d713a362c43868bef2,Namespace:kube-system,Attempt:0,} returns sandbox id \"6956b54333ba1d8559e7eb838af8b71117c4bb4d97aa0ec5ad2031c367b5dc91\"" Sep 13 00:16:13.717984 kubelet[2119]: E0913 00:16:13.717956 2119 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.130:6443: connect: connection refused" interval="1.6s" Sep 13 00:16:13.718948 kubelet[2119]: E0913 00:16:13.718927 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:13.723040 containerd[1444]: time="2025-09-13T00:16:13.723007593Z" level=info msg="CreateContainer within sandbox \"6956b54333ba1d8559e7eb838af8b71117c4bb4d97aa0ec5ad2031c367b5dc91\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 13 00:16:13.723832 containerd[1444]: time="2025-09-13T00:16:13.723807113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:43a4666890f64ea901ad1a607b6ddcc4,Namespace:kube-system,Attempt:0,} returns sandbox id \"7cc6da5d0448f618974ba604c977f8b80e3f5d3804b2e9cab50e4028e94458d2\"" Sep 13 00:16:13.725340 kubelet[2119]: E0913 00:16:13.725319 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:13.726898 containerd[1444]: time="2025-09-13T00:16:13.726856633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b678d5c6713e936e66aa5bb73166297e,Namespace:kube-system,Attempt:0,} returns sandbox id \"c1d3376d02ee2478fb3d1c069e66e9956d8d6f81c90eb0b07ceed246be644970\"" Sep 13 00:16:13.727422 kubelet[2119]: E0913 00:16:13.727401 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:13.729440 containerd[1444]: time="2025-09-13T00:16:13.729412033Z" level=info msg="CreateContainer within sandbox \"7cc6da5d0448f618974ba604c977f8b80e3f5d3804b2e9cab50e4028e94458d2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 13 00:16:13.730818 containerd[1444]: time="2025-09-13T00:16:13.730792233Z" level=info msg="CreateContainer within sandbox \"c1d3376d02ee2478fb3d1c069e66e9956d8d6f81c90eb0b07ceed246be644970\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 13 00:16:13.742141 containerd[1444]: time="2025-09-13T00:16:13.742096993Z" level=info msg="CreateContainer within sandbox \"6956b54333ba1d8559e7eb838af8b71117c4bb4d97aa0ec5ad2031c367b5dc91\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"beaac277c5ee128c567efa08a911e6219b32382ea8df314f05495758fb70b4f2\"" Sep 13 00:16:13.742874 containerd[1444]: time="2025-09-13T00:16:13.742847353Z" level=info msg="StartContainer for \"beaac277c5ee128c567efa08a911e6219b32382ea8df314f05495758fb70b4f2\"" Sep 13 00:16:13.745585 containerd[1444]: time="2025-09-13T00:16:13.745551193Z" level=info msg="CreateContainer within sandbox \"7cc6da5d0448f618974ba604c977f8b80e3f5d3804b2e9cab50e4028e94458d2\" for 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"45c1d38e177cc77681203e3fcf2b958a0ede2b1beac0d0249df8630f33bae093\"" Sep 13 00:16:13.746143 containerd[1444]: time="2025-09-13T00:16:13.746116553Z" level=info msg="StartContainer for \"45c1d38e177cc77681203e3fcf2b958a0ede2b1beac0d0249df8630f33bae093\"" Sep 13 00:16:13.756409 containerd[1444]: time="2025-09-13T00:16:13.755442433Z" level=info msg="CreateContainer within sandbox \"c1d3376d02ee2478fb3d1c069e66e9956d8d6f81c90eb0b07ceed246be644970\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b10c86939cf9e0f0d214c32853bc45860d1c147ae7443024ff3fdc0123072f54\"" Sep 13 00:16:13.756409 containerd[1444]: time="2025-09-13T00:16:13.755988793Z" level=info msg="StartContainer for \"b10c86939cf9e0f0d214c32853bc45860d1c147ae7443024ff3fdc0123072f54\"" Sep 13 00:16:13.775769 systemd[1]: Started cri-containerd-45c1d38e177cc77681203e3fcf2b958a0ede2b1beac0d0249df8630f33bae093.scope - libcontainer container 45c1d38e177cc77681203e3fcf2b958a0ede2b1beac0d0249df8630f33bae093. Sep 13 00:16:13.777398 systemd[1]: Started cri-containerd-beaac277c5ee128c567efa08a911e6219b32382ea8df314f05495758fb70b4f2.scope - libcontainer container beaac277c5ee128c567efa08a911e6219b32382ea8df314f05495758fb70b4f2. Sep 13 00:16:13.784742 systemd[1]: Started cri-containerd-b10c86939cf9e0f0d214c32853bc45860d1c147ae7443024ff3fdc0123072f54.scope - libcontainer container b10c86939cf9e0f0d214c32853bc45860d1c147ae7443024ff3fdc0123072f54. Sep 13 00:16:13.811989 containerd[1444]: time="2025-09-13T00:16:13.811742913Z" level=info msg="StartContainer for \"45c1d38e177cc77681203e3fcf2b958a0ede2b1beac0d0249df8630f33bae093\" returns successfully" Sep 13 00:16:13.822680 containerd[1444]: time="2025-09-13T00:16:13.821944193Z" level=info msg="StartContainer for \"beaac277c5ee128c567efa08a911e6219b32382ea8df314f05495758fb70b4f2\" returns successfully" Sep 13 00:16:13.835896 containerd[1444]: time="2025-09-13T00:16:13.835786113Z" level=info msg="StartContainer for \"b10c86939cf9e0f0d214c32853bc45860d1c147ae7443024ff3fdc0123072f54\" returns successfully" Sep 13 00:16:13.953807 kubelet[2119]: I0913 00:16:13.953776 2119 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 00:16:14.341398 kubelet[2119]: E0913 00:16:14.341341 2119 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 00:16:14.341590 kubelet[2119]: E0913 00:16:14.341471 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:14.344379 kubelet[2119]: E0913 00:16:14.343305 2119 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 00:16:14.344379 kubelet[2119]: E0913 00:16:14.343413 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:14.345768 kubelet[2119]: E0913 00:16:14.345610 2119 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 00:16:14.345768 kubelet[2119]: E0913 00:16:14.345716 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:15.086144 kubelet[2119]: I0913 00:16:15.086106 2119 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 13 00:16:15.086144 kubelet[2119]: E0913 00:16:15.086148 2119 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 13 00:16:15.095575 kubelet[2119]: E0913 00:16:15.095520 2119 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:16:15.196655 kubelet[2119]: E0913 00:16:15.196616 2119 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:16:15.296938 kubelet[2119]: E0913 00:16:15.296904 2119 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:16:15.346210 kubelet[2119]: E0913 00:16:15.346105 2119 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 00:16:15.346210 kubelet[2119]: E0913 00:16:15.346183 2119 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 00:16:15.346338 kubelet[2119]: E0913 00:16:15.346240 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:15.346338 kubelet[2119]: E0913 00:16:15.346302 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:15.397943 kubelet[2119]: E0913 00:16:15.397890 2119 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:16:15.498776 kubelet[2119]: E0913 00:16:15.498722 2119 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:16:15.599681 kubelet[2119]: E0913 00:16:15.599547 2119 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:16:15.715534 kubelet[2119]: I0913 00:16:15.715029 2119 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 13 00:16:15.725198 kubelet[2119]: E0913 00:16:15.725169 2119 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 13 00:16:15.725198 kubelet[2119]: I0913 00:16:15.725197 2119 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 13 00:16:15.727922 kubelet[2119]: E0913 00:16:15.727898 2119 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 13 00:16:15.727922 kubelet[2119]: I0913 00:16:15.727924 2119 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 13 00:16:15.730804 kubelet[2119]: E0913 00:16:15.730782 2119 kubelet.go:3311] "Failed creating a mirror pod" err="pods 
\"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 13 00:16:16.304861 kubelet[2119]: I0913 00:16:16.304819 2119 apiserver.go:52] "Watching apiserver" Sep 13 00:16:16.315310 kubelet[2119]: I0913 00:16:16.315265 2119 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 13 00:16:16.346506 kubelet[2119]: I0913 00:16:16.346257 2119 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 13 00:16:16.384351 kubelet[2119]: E0913 00:16:16.384269 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:17.190156 systemd[1]: Reloading requested from client PID 2409 ('systemctl') (unit session-7.scope)... Sep 13 00:16:17.190170 systemd[1]: Reloading... Sep 13 00:16:17.258633 zram_generator::config[2451]: No configuration found. Sep 13 00:16:17.341146 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:16:17.348093 kubelet[2119]: E0913 00:16:17.347746 2119 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:17.411393 systemd[1]: Reloading finished in 220 ms. Sep 13 00:16:17.442728 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:16:17.454548 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 00:16:17.455673 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:16:17.455736 systemd[1]: kubelet.service: Consumed 1.408s CPU time, 132.5M memory peak, 0B memory swap peak. Sep 13 00:16:17.471949 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:16:17.572848 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:16:17.578641 (kubelet)[2490]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 13 00:16:17.620689 kubelet[2490]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:16:17.620689 kubelet[2490]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 13 00:16:17.620689 kubelet[2490]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 13 00:16:17.621014 kubelet[2490]: I0913 00:16:17.620720 2490 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:16:17.626379 kubelet[2490]: I0913 00:16:17.626332 2490 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 13 00:16:17.626379 kubelet[2490]: I0913 00:16:17.626363 2490 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:16:17.626569 kubelet[2490]: I0913 00:16:17.626543 2490 server.go:956] "Client rotation is on, will bootstrap in background" Sep 13 00:16:17.627771 kubelet[2490]: I0913 00:16:17.627749 2490 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 13 00:16:17.629866 kubelet[2490]: I0913 00:16:17.629845 2490 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:16:17.632792 kubelet[2490]: E0913 00:16:17.632741 2490 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:16:17.632792 kubelet[2490]: I0913 00:16:17.632789 2490 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:16:17.635642 kubelet[2490]: I0913 00:16:17.635616 2490 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 13 00:16:17.635837 kubelet[2490]: I0913 00:16:17.635810 2490 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:16:17.635965 kubelet[2490]: I0913 00:16:17.635836 2490 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 13 00:16:17.636043 kubelet[2490]: I0913 00:16:17.635974 2490 topology_manager.go:138] "Creating topology 
manager with none policy" Sep 13 00:16:17.636043 kubelet[2490]: I0913 00:16:17.635984 2490 container_manager_linux.go:303] "Creating device plugin manager" Sep 13 00:16:17.636043 kubelet[2490]: I0913 00:16:17.636023 2490 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:16:17.636549 kubelet[2490]: I0913 00:16:17.636150 2490 kubelet.go:480] "Attempting to sync node with API server" Sep 13 00:16:17.636549 kubelet[2490]: I0913 00:16:17.636164 2490 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:16:17.636549 kubelet[2490]: I0913 00:16:17.636199 2490 kubelet.go:386] "Adding apiserver pod source" Sep 13 00:16:17.636549 kubelet[2490]: I0913 00:16:17.636214 2490 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:16:17.639663 kubelet[2490]: I0913 00:16:17.639638 2490 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 13 00:16:17.640214 kubelet[2490]: I0913 00:16:17.640178 2490 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 13 00:16:17.645175 kubelet[2490]: I0913 00:16:17.644704 2490 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 13 00:16:17.645175 kubelet[2490]: I0913 00:16:17.644749 2490 server.go:1289] "Started kubelet" Sep 13 00:16:17.645848 kubelet[2490]: I0913 00:16:17.645828 2490 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:16:17.650846 kubelet[2490]: I0913 00:16:17.647111 2490 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:16:17.650846 kubelet[2490]: I0913 00:16:17.648412 2490 server.go:317] "Adding debug handlers to kubelet server" Sep 13 00:16:17.651286 kubelet[2490]: I0913 00:16:17.651240 2490 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:16:17.651430 kubelet[2490]: I0913 00:16:17.651414 2490 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:16:17.651629 kubelet[2490]: I0913 00:16:17.651577 2490 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:16:17.652570 kubelet[2490]: I0913 00:16:17.652546 2490 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 13 00:16:17.652696 kubelet[2490]: I0913 00:16:17.652676 2490 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 13 00:16:17.653397 kubelet[2490]: I0913 00:16:17.652868 2490 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:16:17.653397 kubelet[2490]: E0913 00:16:17.653142 2490 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:16:17.654629 kubelet[2490]: I0913 00:16:17.653842 2490 factory.go:223] Registration of the systemd container factory successfully Sep 13 00:16:17.654940 kubelet[2490]: I0913 00:16:17.654795 2490 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:16:17.658540 kubelet[2490]: E0913 00:16:17.658521 2490 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:16:17.658959 kubelet[2490]: I0913 00:16:17.658944 2490 factory.go:223] Registration of the containerd container factory successfully Sep 13 00:16:17.667385 kubelet[2490]: I0913 00:16:17.667340 2490 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 13 00:16:17.668254 kubelet[2490]: I0913 00:16:17.668226 2490 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 13 00:16:17.668254 kubelet[2490]: I0913 00:16:17.668248 2490 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 13 00:16:17.668334 kubelet[2490]: I0913 00:16:17.668266 2490 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 13 00:16:17.668334 kubelet[2490]: I0913 00:16:17.668273 2490 kubelet.go:2436] "Starting kubelet main sync loop" Sep 13 00:16:17.668334 kubelet[2490]: E0913 00:16:17.668314 2490 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:16:17.688988 kubelet[2490]: I0913 00:16:17.688947 2490 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 13 00:16:17.688988 kubelet[2490]: I0913 00:16:17.688984 2490 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 13 00:16:17.689100 kubelet[2490]: I0913 00:16:17.689007 2490 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:16:17.689142 kubelet[2490]: I0913 00:16:17.689122 2490 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 13 00:16:17.689167 kubelet[2490]: I0913 00:16:17.689137 2490 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 13 00:16:17.689167 kubelet[2490]: I0913 00:16:17.689154 2490 policy_none.go:49] "None policy: Start" Sep 13 00:16:17.689167 kubelet[2490]: I0913 00:16:17.689163 2490 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 13 00:16:17.689243 kubelet[2490]: I0913 00:16:17.689171 2490 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:16:17.689268 kubelet[2490]: I0913 00:16:17.689263 2490 state_mem.go:75] "Updated machine memory state" Sep 13 00:16:17.692745 kubelet[2490]: E0913 00:16:17.692642 2490 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 13 00:16:17.692866 kubelet[2490]: I0913 00:16:17.692795 2490 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:16:17.692866 kubelet[2490]: I0913 00:16:17.692806 2490 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:16:17.693231 kubelet[2490]: I0913 00:16:17.693207 2490 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:16:17.694635 kubelet[2490]: E0913 00:16:17.694399 2490 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 13 00:16:17.769465 kubelet[2490]: I0913 00:16:17.769429 2490 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 13 00:16:17.769465 kubelet[2490]: I0913 00:16:17.769459 2490 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 13 00:16:17.769988 kubelet[2490]: I0913 00:16:17.769789 2490 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 13 00:16:17.775842 kubelet[2490]: E0913 00:16:17.775813 2490 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 13 00:16:17.796472 kubelet[2490]: I0913 00:16:17.796451 2490 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 00:16:17.803026 kubelet[2490]: I0913 00:16:17.802684 2490 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 13 00:16:17.803026 kubelet[2490]: I0913 00:16:17.802823 2490 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 13 00:16:17.854351 kubelet[2490]: I0913 00:16:17.854315 2490 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/43a4666890f64ea901ad1a607b6ddcc4-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"43a4666890f64ea901ad1a607b6ddcc4\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:16:17.854723 kubelet[2490]: I0913 00:16:17.854525 2490 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:16:17.854723 kubelet[2490]: I0913 00:16:17.854552 2490 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:16:17.854723 kubelet[2490]: I0913 00:16:17.854569 2490 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:16:17.854723 kubelet[2490]: I0913 00:16:17.854587 2490 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:16:17.854723 kubelet[2490]: I0913 00:16:17.854622 2490 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b968cf906b2d9d713a362c43868bef2-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"7b968cf906b2d9d713a362c43868bef2\") " 
pod="kube-system/kube-scheduler-localhost" Sep 13 00:16:17.854907 kubelet[2490]: I0913 00:16:17.854654 2490 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/43a4666890f64ea901ad1a607b6ddcc4-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"43a4666890f64ea901ad1a607b6ddcc4\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:16:17.854907 kubelet[2490]: I0913 00:16:17.854673 2490 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/43a4666890f64ea901ad1a607b6ddcc4-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"43a4666890f64ea901ad1a607b6ddcc4\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:16:17.854907 kubelet[2490]: I0913 00:16:17.854692 2490 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:16:18.075146 kubelet[2490]: E0913 00:16:18.075099 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:18.075566 kubelet[2490]: E0913 00:16:18.075509 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:18.076616 kubelet[2490]: E0913 00:16:18.076562 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:18.637148 kubelet[2490]: I0913 00:16:18.636897 2490 apiserver.go:52] "Watching apiserver" Sep 13 00:16:18.652867 kubelet[2490]: I0913 00:16:18.652828 2490 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 13 00:16:18.679266 kubelet[2490]: I0913 00:16:18.679225 2490 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 13 00:16:18.681299 kubelet[2490]: I0913 00:16:18.680974 2490 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 13 00:16:18.681299 kubelet[2490]: I0913 00:16:18.681092 2490 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 13 00:16:18.688573 kubelet[2490]: E0913 00:16:18.687458 2490 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 13 00:16:18.688573 kubelet[2490]: E0913 00:16:18.687639 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:18.688958 kubelet[2490]: E0913 00:16:18.688932 2490 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 13 00:16:18.689101 kubelet[2490]: E0913 00:16:18.689055 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:18.689400 kubelet[2490]: E0913 00:16:18.689317 2490 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 13 00:16:18.689725 kubelet[2490]: E0913 00:16:18.689701 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:18.716046 kubelet[2490]: I0913 00:16:18.715982 2490 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.715965473 podStartE2EDuration="1.715965473s" podCreationTimestamp="2025-09-13 00:16:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:16:18.704738913 +0000 UTC m=+1.122446441" watchObservedRunningTime="2025-09-13 00:16:18.715965473 +0000 UTC m=+1.133673001" Sep 13 00:16:18.725198 kubelet[2490]: I0913 00:16:18.725130 2490 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.725115593 podStartE2EDuration="1.725115593s" podCreationTimestamp="2025-09-13 00:16:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:16:18.716537313 +0000 UTC m=+1.134244841" watchObservedRunningTime="2025-09-13 00:16:18.725115593 +0000 UTC m=+1.142823121" Sep 13 00:16:18.733874 kubelet[2490]: I0913 00:16:18.733731 2490 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.733715273 podStartE2EDuration="2.733715273s" podCreationTimestamp="2025-09-13 00:16:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:16:18.725387473 +0000 UTC m=+1.143095001" watchObservedRunningTime="2025-09-13 00:16:18.733715273 +0000 UTC m=+1.151422801" Sep 13 00:16:19.680639 kubelet[2490]: E0913 00:16:19.680335 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:19.680639 kubelet[2490]: E0913 00:16:19.680396 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:19.681220 kubelet[2490]: E0913 00:16:19.680661 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:20.682114 kubelet[2490]: E0913 00:16:20.682073 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:20.682435 kubelet[2490]: E0913 00:16:20.682163 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:22.197007 kubelet[2490]: E0913 00:16:22.196959 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:22.613311 kubelet[2490]: I0913 00:16:22.613228 2490 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 13 00:16:22.613558 containerd[1444]: time="2025-09-13T00:16:22.613513037Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 13 00:16:22.613964 kubelet[2490]: I0913 00:16:22.613682 2490 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 13 00:16:23.491822 systemd[1]: Created slice kubepods-besteffort-pod464ff1cd_0925_45e6_9d5f_686c046965bc.slice - libcontainer container kubepods-besteffort-pod464ff1cd_0925_45e6_9d5f_686c046965bc.slice. Sep 13 00:16:23.495703 kubelet[2490]: I0913 00:16:23.493962 2490 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/464ff1cd-0925-45e6-9d5f-686c046965bc-kube-proxy\") pod \"kube-proxy-4qxqx\" (UID: \"464ff1cd-0925-45e6-9d5f-686c046965bc\") " pod="kube-system/kube-proxy-4qxqx" Sep 13 00:16:23.495703 kubelet[2490]: I0913 00:16:23.494039 2490 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/464ff1cd-0925-45e6-9d5f-686c046965bc-lib-modules\") pod \"kube-proxy-4qxqx\" (UID: \"464ff1cd-0925-45e6-9d5f-686c046965bc\") " pod="kube-system/kube-proxy-4qxqx" Sep 13 00:16:23.495703 kubelet[2490]: I0913 00:16:23.494060 2490 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6k4fh\" (UniqueName: \"kubernetes.io/projected/464ff1cd-0925-45e6-9d5f-686c046965bc-kube-api-access-6k4fh\") pod \"kube-proxy-4qxqx\" (UID: \"464ff1cd-0925-45e6-9d5f-686c046965bc\") " pod="kube-system/kube-proxy-4qxqx" Sep 13 00:16:23.495703 kubelet[2490]: I0913 00:16:23.494084 2490 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/464ff1cd-0925-45e6-9d5f-686c046965bc-xtables-lock\") pod \"kube-proxy-4qxqx\" (UID: \"464ff1cd-0925-45e6-9d5f-686c046965bc\") " pod="kube-system/kube-proxy-4qxqx" Sep 13 00:16:23.610694 kubelet[2490]: E0913 00:16:23.610653 2490 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 13 00:16:23.610694 kubelet[2490]: E0913 00:16:23.610689 2490 projected.go:194] Error preparing data for projected volume kube-api-access-6k4fh for pod kube-system/kube-proxy-4qxqx: configmap "kube-root-ca.crt" not found Sep 13 00:16:23.611428 kubelet[2490]: E0913 00:16:23.610752 2490 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/464ff1cd-0925-45e6-9d5f-686c046965bc-kube-api-access-6k4fh podName:464ff1cd-0925-45e6-9d5f-686c046965bc nodeName:}" failed. No retries permitted until 2025-09-13 00:16:24.110730618 +0000 UTC m=+6.528438146 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-6k4fh" (UniqueName: "kubernetes.io/projected/464ff1cd-0925-45e6-9d5f-686c046965bc-kube-api-access-6k4fh") pod "kube-proxy-4qxqx" (UID: "464ff1cd-0925-45e6-9d5f-686c046965bc") : configmap "kube-root-ca.crt" not found Sep 13 00:16:23.845837 systemd[1]: Created slice kubepods-besteffort-pod454c45a3_cf27_4fd0_a966_70e38027a016.slice - libcontainer container kubepods-besteffort-pod454c45a3_cf27_4fd0_a966_70e38027a016.slice. Sep 13 00:16:23.897653 kubelet[2490]: I0913 00:16:23.897589 2490 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b498j\" (UniqueName: \"kubernetes.io/projected/454c45a3-cf27-4fd0-a966-70e38027a016-kube-api-access-b498j\") pod \"tigera-operator-755d956888-gd27j\" (UID: \"454c45a3-cf27-4fd0-a966-70e38027a016\") " pod="tigera-operator/tigera-operator-755d956888-gd27j" Sep 13 00:16:23.897653 kubelet[2490]: I0913 00:16:23.897648 2490 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/454c45a3-cf27-4fd0-a966-70e38027a016-var-lib-calico\") pod \"tigera-operator-755d956888-gd27j\" (UID: \"454c45a3-cf27-4fd0-a966-70e38027a016\") " pod="tigera-operator/tigera-operator-755d956888-gd27j" Sep 13 00:16:24.151591 containerd[1444]: time="2025-09-13T00:16:24.151243192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-gd27j,Uid:454c45a3-cf27-4fd0-a966-70e38027a016,Namespace:tigera-operator,Attempt:0,}" Sep 13 00:16:24.171158 containerd[1444]: time="2025-09-13T00:16:24.171081849Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:16:24.171158 containerd[1444]: time="2025-09-13T00:16:24.171129929Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:16:24.171158 containerd[1444]: time="2025-09-13T00:16:24.171148089Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:16:24.171341 containerd[1444]: time="2025-09-13T00:16:24.171227610Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:16:24.192792 systemd[1]: Started cri-containerd-1edd921e97252d7fd8bf5f8118d4206a10c46c566a81cb9f21a1e9e86b20cb51.scope - libcontainer container 1edd921e97252d7fd8bf5f8118d4206a10c46c566a81cb9f21a1e9e86b20cb51. 
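
The MountVolume.SetUp failure above clears on its own once kube-root-ca.crt is published into the namespace; the kubelet simply retries after the reported durationBeforeRetry. A sketch of the equivalent wait, polling at the 500ms interval seen in the entry; running in-cluster and the 2-minute timeout are assumptions:

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// waitForRootCA polls until the kube-root-ca.crt ConfigMap that the
// projected kube-api-access volume needs has been created.
func waitForRootCA(cs kubernetes.Interface, ns string) error {
	return wait.PollUntilContextTimeout(context.TODO(), 500*time.Millisecond,
		2*time.Minute, true, func(ctx context.Context) (bool, error) {
			_, err := cs.CoreV1().ConfigMaps(ns).Get(ctx, "kube-root-ca.crt", metav1.GetOptions{})
			if err != nil {
				return false, nil // not there yet; keep retrying, as the kubelet does
			}
			return true, nil
		})
}

func main() {
	cfg, err := rest.InClusterConfig() // assumes running inside the cluster
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForRootCA(cs, "kube-system"); err != nil {
		panic(err)
	}
	fmt.Println("kube-root-ca.crt present; projected token volumes can mount")
}
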
Sep 13 00:16:24.220443 containerd[1444]: time="2025-09-13T00:16:24.220283427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-gd27j,Uid:454c45a3-cf27-4fd0-a966-70e38027a016,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"1edd921e97252d7fd8bf5f8118d4206a10c46c566a81cb9f21a1e9e86b20cb51\"" Sep 13 00:16:24.222929 containerd[1444]: time="2025-09-13T00:16:24.222882845Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 13 00:16:24.402955 kubelet[2490]: E0913 00:16:24.402350 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:24.403062 containerd[1444]: time="2025-09-13T00:16:24.402993163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4qxqx,Uid:464ff1cd-0925-45e6-9d5f-686c046965bc,Namespace:kube-system,Attempt:0,}" Sep 13 00:16:24.428009 containerd[1444]: time="2025-09-13T00:16:24.426494764Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:16:24.428009 containerd[1444]: time="2025-09-13T00:16:24.426546205Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:16:24.428009 containerd[1444]: time="2025-09-13T00:16:24.426556205Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:16:24.428009 containerd[1444]: time="2025-09-13T00:16:24.426669006Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:16:24.452797 systemd[1]: Started cri-containerd-a8b70708979d21b567b7152c79d415ab419598acd1391f5c9af599d35b5a9189.scope - libcontainer container a8b70708979d21b567b7152c79d415ab419598acd1391f5c9af599d35b5a9189. Sep 13 00:16:24.473675 containerd[1444]: time="2025-09-13T00:16:24.473580808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4qxqx,Uid:464ff1cd-0925-45e6-9d5f-686c046965bc,Namespace:kube-system,Attempt:0,} returns sandbox id \"a8b70708979d21b567b7152c79d415ab419598acd1391f5c9af599d35b5a9189\"" Sep 13 00:16:24.474315 kubelet[2490]: E0913 00:16:24.474290 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:24.478325 containerd[1444]: time="2025-09-13T00:16:24.478211520Z" level=info msg="CreateContainer within sandbox \"a8b70708979d21b567b7152c79d415ab419598acd1391f5c9af599d35b5a9189\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 13 00:16:24.543730 containerd[1444]: time="2025-09-13T00:16:24.543676690Z" level=info msg="CreateContainer within sandbox \"a8b70708979d21b567b7152c79d415ab419598acd1391f5c9af599d35b5a9189\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"edb980b2acc1c2f954bb9cbcf542a0ab6f48b196c8b00aab7655ce373b0a98a6\"" Sep 13 00:16:24.544257 containerd[1444]: time="2025-09-13T00:16:24.544234574Z" level=info msg="StartContainer for \"edb980b2acc1c2f954bb9cbcf542a0ab6f48b196c8b00aab7655ce373b0a98a6\"" Sep 13 00:16:24.585792 systemd[1]: Started cri-containerd-edb980b2acc1c2f954bb9cbcf542a0ab6f48b196c8b00aab7655ce373b0a98a6.scope - libcontainer container edb980b2acc1c2f954bb9cbcf542a0ab6f48b196c8b00aab7655ce373b0a98a6. 
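
The RunPodSandbox → CreateContainer → StartContainer sequence in the surrounding entries is the CRI call chain the kubelet drives over containerd's socket. A minimal sketch issuing the same three calls directly with the cri-api client, reusing the kube-proxy metadata from the log; the socket path and image tag are assumptions, and a real sandbox/container config would carry more fields than shown:

package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed containerd CRI endpoint.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.TODO()

	// Sandbox metadata copied from the kube-proxy entry in the log.
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "kube-proxy-4qxqx",
			Uid:       "464ff1cd-0925-45e6-9d5f-686c046965bc",
			Namespace: "kube-system",
			Attempt:   0,
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		panic(err)
	}

	// The image tag is inferred from the kubelet version; the log does not name it.
	cc, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy", Attempt: 0},
			Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.33.0"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		panic(err)
	}
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: cc.ContainerId}); err != nil {
		panic(err)
	}
	fmt.Println("sandbox:", sb.PodSandboxId, "container:", cc.ContainerId)
}
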
Sep 13 00:16:24.614170 containerd[1444]: time="2025-09-13T00:16:24.614122294Z" level=info msg="StartContainer for \"edb980b2acc1c2f954bb9cbcf542a0ab6f48b196c8b00aab7655ce373b0a98a6\" returns successfully" Sep 13 00:16:24.690590 kubelet[2490]: E0913 00:16:24.690475 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:25.848027 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3635700686.mount: Deactivated successfully. Sep 13 00:16:28.141523 containerd[1444]: time="2025-09-13T00:16:28.140757725Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:28.141523 containerd[1444]: time="2025-09-13T00:16:28.141465209Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365" Sep 13 00:16:28.142163 containerd[1444]: time="2025-09-13T00:16:28.142118652Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:28.144204 containerd[1444]: time="2025-09-13T00:16:28.144153743Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:28.145454 containerd[1444]: time="2025-09-13T00:16:28.145274949Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 3.922317424s" Sep 13 00:16:28.145454 containerd[1444]: time="2025-09-13T00:16:28.145306229Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\"" Sep 13 00:16:28.170816 containerd[1444]: time="2025-09-13T00:16:28.170759885Z" level=info msg="CreateContainer within sandbox \"1edd921e97252d7fd8bf5f8118d4206a10c46c566a81cb9f21a1e9e86b20cb51\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 13 00:16:28.178975 containerd[1444]: time="2025-09-13T00:16:28.178933808Z" level=info msg="CreateContainer within sandbox \"1edd921e97252d7fd8bf5f8118d4206a10c46c566a81cb9f21a1e9e86b20cb51\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"c1f915b232b32cbdc7ca7ba0d458732ab2ac99be3430e526bbb0919a879fc65a\"" Sep 13 00:16:28.179384 containerd[1444]: time="2025-09-13T00:16:28.179347130Z" level=info msg="StartContainer for \"c1f915b232b32cbdc7ca7ba0d458732ab2ac99be3430e526bbb0919a879fc65a\"" Sep 13 00:16:28.197727 systemd[1]: run-containerd-runc-k8s.io-c1f915b232b32cbdc7ca7ba0d458732ab2ac99be3430e526bbb0919a879fc65a-runc.2P93mT.mount: Deactivated successfully. Sep 13 00:16:28.205750 systemd[1]: Started cri-containerd-c1f915b232b32cbdc7ca7ba0d458732ab2ac99be3430e526bbb0919a879fc65a.scope - libcontainer container c1f915b232b32cbdc7ca7ba0d458732ab2ac99be3430e526bbb0919a879fc65a. 
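
The pod_startup_latency_tracker entries that follow derive their durations by subtraction: podStartE2EDuration is the watch-observed running time minus podCreationTimestamp, and podStartSLOduration additionally excludes the image-pull window. The arithmetic reproduces the tigera-operator numbers exactly; the layout string below is just Go's reference time:

package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Timestamps copied from the tigera-operator startup-latency entry below.
	created := mustParse("2025-09-13 00:16:23 +0000 UTC")
	firstPull := mustParse("2025-09-13 00:16:24.221977519 +0000 UTC")
	lastPull := mustParse("2025-09-13 00:16:28.147503561 +0000 UTC")
	observed := mustParse("2025-09-13 00:16:28.70841902 +0000 UTC") // watchObservedRunningTime

	e2e := observed.Sub(created)         // podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration excludes the pull window
	fmt.Println(e2e, slo)                // 5.70841902s 1.782892978s
}
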
Sep 13 00:16:28.235345 containerd[1444]: time="2025-09-13T00:16:28.235240507Z" level=info msg="StartContainer for \"c1f915b232b32cbdc7ca7ba0d458732ab2ac99be3430e526bbb0919a879fc65a\" returns successfully" Sep 13 00:16:28.708394 kubelet[2490]: I0913 00:16:28.708330 2490 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4qxqx" podStartSLOduration=5.708312459 podStartE2EDuration="5.708312459s" podCreationTimestamp="2025-09-13 00:16:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:16:24.700364287 +0000 UTC m=+7.118071815" watchObservedRunningTime="2025-09-13 00:16:28.708312459 +0000 UTC m=+11.126019947" Sep 13 00:16:28.708899 kubelet[2490]: I0913 00:16:28.708423 2490 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-gd27j" podStartSLOduration=1.782892978 podStartE2EDuration="5.70841902s" podCreationTimestamp="2025-09-13 00:16:23 +0000 UTC" firstStartedPulling="2025-09-13 00:16:24.221977519 +0000 UTC m=+6.639685047" lastFinishedPulling="2025-09-13 00:16:28.147503561 +0000 UTC m=+10.565211089" observedRunningTime="2025-09-13 00:16:28.708222259 +0000 UTC m=+11.125929787" watchObservedRunningTime="2025-09-13 00:16:28.70841902 +0000 UTC m=+11.126126548" Sep 13 00:16:29.113052 kubelet[2490]: E0913 00:16:29.112016 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:29.605748 kubelet[2490]: E0913 00:16:29.605700 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:32.203926 kubelet[2490]: E0913 00:16:32.203817 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:32.533224 update_engine[1420]: I20250913 00:16:32.533020 1420 update_attempter.cc:509] Updating boot flags... Sep 13 00:16:32.609794 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2886) Sep 13 00:16:32.657661 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2888) Sep 13 00:16:32.687639 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2888) Sep 13 00:16:33.382623 sudo[1616]: pam_unix(sudo:session): session closed for user root Sep 13 00:16:33.387917 sshd[1613]: pam_unix(sshd:session): session closed for user core Sep 13 00:16:33.391819 systemd-logind[1419]: Session 7 logged out. Waiting for processes to exit. Sep 13 00:16:33.392103 systemd[1]: sshd@6-10.0.0.130:22-10.0.0.1:54294.service: Deactivated successfully. Sep 13 00:16:33.395228 systemd[1]: session-7.scope: Deactivated successfully. Sep 13 00:16:33.395389 systemd[1]: session-7.scope: Consumed 6.528s CPU time, 151.1M memory peak, 0B memory swap peak. Sep 13 00:16:33.397144 systemd-logind[1419]: Removed session 7. Sep 13 00:16:37.719549 systemd[1]: Created slice kubepods-besteffort-pod0ee79c26_fa43_477f_bc83_8005847c35dd.slice - libcontainer container kubepods-besteffort-pod0ee79c26_fa43_477f_bc83_8005847c35dd.slice. 
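
The slice name in the "Created slice" entry above is derived mechanically from the pod: with the systemd cgroup driver (per the kubelet NodeConfig earlier in the log), a BestEffort pod lands in kubepods-besteffort-pod<UID>.slice, with dashes in the UID escaped to underscores because "-" expresses nesting in systemd slice names. A sketch reproducing the name seen in the log:

package main

import (
	"fmt"
	"strings"
)

// sliceName rebuilds the systemd slice name from the pod's QoS class and
// UID, escaping "-" (a hierarchy separator in slice names) to "_".
func sliceName(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	fmt.Println(sliceName("besteffort", "0ee79c26-fa43-477f-bc83-8005847c35dd"))
	// kubepods-besteffort-pod0ee79c26_fa43_477f_bc83_8005847c35dd.slice
}
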
Sep 13 00:16:37.793293 kubelet[2490]: I0913 00:16:37.793106 2490 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ee79c26-fa43-477f-bc83-8005847c35dd-tigera-ca-bundle\") pod \"calico-typha-6d789794d6-kljfm\" (UID: \"0ee79c26-fa43-477f-bc83-8005847c35dd\") " pod="calico-system/calico-typha-6d789794d6-kljfm"
Sep 13 00:16:37.793293 kubelet[2490]: I0913 00:16:37.793161 2490 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/0ee79c26-fa43-477f-bc83-8005847c35dd-typha-certs\") pod \"calico-typha-6d789794d6-kljfm\" (UID: \"0ee79c26-fa43-477f-bc83-8005847c35dd\") " pod="calico-system/calico-typha-6d789794d6-kljfm"
Sep 13 00:16:37.793293 kubelet[2490]: I0913 00:16:37.793183 2490 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q42q\" (UniqueName: \"kubernetes.io/projected/0ee79c26-fa43-477f-bc83-8005847c35dd-kube-api-access-5q42q\") pod \"calico-typha-6d789794d6-kljfm\" (UID: \"0ee79c26-fa43-477f-bc83-8005847c35dd\") " pod="calico-system/calico-typha-6d789794d6-kljfm"
Sep 13 00:16:37.953234 systemd[1]: Created slice kubepods-besteffort-pod539fde8c_f183_489d_b86c_6f967d67f422.slice - libcontainer container kubepods-besteffort-pod539fde8c_f183_489d_b86c_6f967d67f422.slice.
Sep 13 00:16:37.995020 kubelet[2490]: I0913 00:16:37.994895 2490 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/539fde8c-f183-489d-b86c-6f967d67f422-policysync\") pod \"calico-node-pfkkf\" (UID: \"539fde8c-f183-489d-b86c-6f967d67f422\") " pod="calico-system/calico-node-pfkkf"
Sep 13 00:16:37.995020 kubelet[2490]: I0913 00:16:37.994939 2490 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/539fde8c-f183-489d-b86c-6f967d67f422-tigera-ca-bundle\") pod \"calico-node-pfkkf\" (UID: \"539fde8c-f183-489d-b86c-6f967d67f422\") " pod="calico-system/calico-node-pfkkf"
Sep 13 00:16:37.995020 kubelet[2490]: I0913 00:16:37.994997 2490 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/539fde8c-f183-489d-b86c-6f967d67f422-cni-bin-dir\") pod \"calico-node-pfkkf\" (UID: \"539fde8c-f183-489d-b86c-6f967d67f422\") " pod="calico-system/calico-node-pfkkf"
Sep 13 00:16:37.995186 kubelet[2490]: I0913 00:16:37.995028 2490 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/539fde8c-f183-489d-b86c-6f967d67f422-lib-modules\") pod \"calico-node-pfkkf\" (UID: \"539fde8c-f183-489d-b86c-6f967d67f422\") " pod="calico-system/calico-node-pfkkf"
Sep 13 00:16:37.995186 kubelet[2490]: I0913 00:16:37.995061 2490 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/539fde8c-f183-489d-b86c-6f967d67f422-var-run-calico\") pod \"calico-node-pfkkf\" (UID: \"539fde8c-f183-489d-b86c-6f967d67f422\") " pod="calico-system/calico-node-pfkkf"
Sep 13 00:16:37.995186 kubelet[2490]: I0913 00:16:37.995089 2490 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/539fde8c-f183-489d-b86c-6f967d67f422-var-lib-calico\") pod \"calico-node-pfkkf\" (UID: \"539fde8c-f183-489d-b86c-6f967d67f422\") " pod="calico-system/calico-node-pfkkf"
Sep 13 00:16:37.995186 kubelet[2490]: I0913 00:16:37.995114 2490 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzh9p\" (UniqueName: \"kubernetes.io/projected/539fde8c-f183-489d-b86c-6f967d67f422-kube-api-access-rzh9p\") pod \"calico-node-pfkkf\" (UID: \"539fde8c-f183-489d-b86c-6f967d67f422\") " pod="calico-system/calico-node-pfkkf"
Sep 13 00:16:37.995186 kubelet[2490]: I0913 00:16:37.995140 2490 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/539fde8c-f183-489d-b86c-6f967d67f422-cni-net-dir\") pod \"calico-node-pfkkf\" (UID: \"539fde8c-f183-489d-b86c-6f967d67f422\") " pod="calico-system/calico-node-pfkkf"
Sep 13 00:16:37.995296 kubelet[2490]: I0913 00:16:37.995166 2490 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/539fde8c-f183-489d-b86c-6f967d67f422-xtables-lock\") pod \"calico-node-pfkkf\" (UID: \"539fde8c-f183-489d-b86c-6f967d67f422\") " pod="calico-system/calico-node-pfkkf"
Sep 13 00:16:37.995296 kubelet[2490]: I0913 00:16:37.995187 2490 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/539fde8c-f183-489d-b86c-6f967d67f422-cni-log-dir\") pod \"calico-node-pfkkf\" (UID: \"539fde8c-f183-489d-b86c-6f967d67f422\") " pod="calico-system/calico-node-pfkkf"
Sep 13 00:16:37.995296 kubelet[2490]: I0913 00:16:37.995207 2490 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/539fde8c-f183-489d-b86c-6f967d67f422-flexvol-driver-host\") pod \"calico-node-pfkkf\" (UID: \"539fde8c-f183-489d-b86c-6f967d67f422\") " pod="calico-system/calico-node-pfkkf"
Sep 13 00:16:37.995296 kubelet[2490]: I0913 00:16:37.995258 2490 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/539fde8c-f183-489d-b86c-6f967d67f422-node-certs\") pod \"calico-node-pfkkf\" (UID: \"539fde8c-f183-489d-b86c-6f967d67f422\") " pod="calico-system/calico-node-pfkkf"
Sep 13 00:16:38.023208 kubelet[2490]: E0913 00:16:38.023144 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:16:38.023708 containerd[1444]: time="2025-09-13T00:16:38.023659887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6d789794d6-kljfm,Uid:0ee79c26-fa43-477f-bc83-8005847c35dd,Namespace:calico-system,Attempt:0,}"
Sep 13 00:16:38.051960 containerd[1444]: time="2025-09-13T00:16:38.051869086Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:16:38.051960 containerd[1444]: time="2025-09-13T00:16:38.051918406Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:16:38.051960 containerd[1444]: time="2025-09-13T00:16:38.051940846Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:16:38.052143 containerd[1444]: time="2025-09-13T00:16:38.052075687Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:16:38.070773 systemd[1]: Started cri-containerd-f7d720e8bbac620b9854fa201953f3f7c2e3b96dc4f5599faa707f4909fd8c65.scope - libcontainer container f7d720e8bbac620b9854fa201953f3f7c2e3b96dc4f5599faa707f4909fd8c65.
Sep 13 00:16:38.113018 containerd[1444]: time="2025-09-13T00:16:38.112973496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6d789794d6-kljfm,Uid:0ee79c26-fa43-477f-bc83-8005847c35dd,Namespace:calico-system,Attempt:0,} returns sandbox id \"f7d720e8bbac620b9854fa201953f3f7c2e3b96dc4f5599faa707f4909fd8c65\""
Sep 13 00:16:38.117905 kubelet[2490]: E0913 00:16:38.117869 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:16:38.119025 containerd[1444]: time="2025-09-13T00:16:38.118936913Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\""
Sep 13 00:16:38.228822 kubelet[2490]: E0913 00:16:38.228104 2490 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fs48x" podUID="0b49a5c7-d8ed-4263-b267-b04f7372f88c"
Sep 13 00:16:38.258186 containerd[1444]: time="2025-09-13T00:16:38.258071540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pfkkf,Uid:539fde8c-f183-489d-b86c-6f967d67f422,Namespace:calico-system,Attempt:0,}"
Sep 13 00:16:38.286985 kubelet[2490]: E0913 00:16:38.286954 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:16:38.286985 kubelet[2490]: W0913 00:16:38.286977 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:16:38.287136 kubelet[2490]: E0913 00:16:38.286999 2490 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the three-line FlexVolume probe failure above repeats, identical apart from timestamps, from Sep 13 00:16:38.287 through Sep 13 00:16:38.306; the distinct entries interleaved with it follow]
Sep 13 00:16:38.290297 containerd[1444]: time="2025-09-13T00:16:38.289513028Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:16:38.290297 containerd[1444]: time="2025-09-13T00:16:38.290113630Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:16:38.290297 containerd[1444]: time="2025-09-13T00:16:38.290127030Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:16:38.290510 containerd[1444]: time="2025-09-13T00:16:38.290274910Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:16:38.298427 kubelet[2490]: I0913 00:16:38.298329 2490 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/0b49a5c7-d8ed-4263-b267-b04f7372f88c-varrun\") pod \"csi-node-driver-fs48x\" (UID: \"0b49a5c7-d8ed-4263-b267-b04f7372f88c\") " pod="calico-system/csi-node-driver-fs48x"
Sep 13 00:16:38.298766 kubelet[2490]: I0913 00:16:38.298720 2490 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7khd\" (UniqueName: \"kubernetes.io/projected/0b49a5c7-d8ed-4263-b267-b04f7372f88c-kube-api-access-m7khd\") pod \"csi-node-driver-fs48x\" (UID: \"0b49a5c7-d8ed-4263-b267-b04f7372f88c\") " pod="calico-system/csi-node-driver-fs48x"
Sep 13 00:16:38.300024 kubelet[2490]: I0913 00:16:38.300009 2490 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0b49a5c7-d8ed-4263-b267-b04f7372f88c-registration-dir\") pod \"csi-node-driver-fs48x\" (UID: \"0b49a5c7-d8ed-4263-b267-b04f7372f88c\") " pod="calico-system/csi-node-driver-fs48x"
Sep 13 00:16:38.300890 kubelet[2490]: I0913 00:16:38.300836 2490 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0b49a5c7-d8ed-4263-b267-b04f7372f88c-kubelet-dir\") pod \"csi-node-driver-fs48x\" (UID: \"0b49a5c7-d8ed-4263-b267-b04f7372f88c\") " pod="calico-system/csi-node-driver-fs48x"
Sep 13 00:16:38.303468 kubelet[2490]: I0913 00:16:38.303424 2490 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0b49a5c7-d8ed-4263-b267-b04f7372f88c-socket-dir\") pod \"csi-node-driver-fs48x\" (UID: \"0b49a5c7-d8ed-4263-b267-b04f7372f88c\") " pod="calico-system/csi-node-driver-fs48x"
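The repeated triplet above is the kubelet's FlexVolume prober tripping over a driver that is not installed yet: the exec of .../nodeagent~uds/uds fails, the captured output is therefore empty, and unmarshalling an empty byte slice is what produces Go's "unexpected end of JSON input". Calico normally installs this driver a little later via its flexvol init container (note the flexvol-driver-host volume and the pod2daemon-flexvol image pull elsewhere in this log), after which the probe succeeds. A minimal reproduction of the failure pair, illustrative rather than the kubelet's actual driver-call code:

```go
// Sketch: reproduce both errors in the kubelet triplet above.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus stands in for the JSON status object a FlexVolume
// driver is expected to print; the exact field set is an assumption.
type driverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message"`
}

func main() {
	// The path the log shows the kubelet probing.
	driver := "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

	out, err := exec.Command(driver, "init").Output()
	if err != nil {
		// On a node without the driver this prints a "no such file or
		// directory" error; the kubelet's own exec wrapper reports the
		// same condition as "executable file not found in $PATH".
		fmt.Println("driver call failed:", err)
	}

	var st driverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		// With empty output this is exactly the logged error:
		//   unexpected end of JSON input
		fmt.Println("failed to unmarshal output:", err)
	}
}
```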
Sep 13 00:16:38.317769 systemd[1]: Started cri-containerd-2ae7fb87ba92d8e1da53515fa498c6679e1ef6468bde802b7738079a0a9c5e63.scope - libcontainer container 2ae7fb87ba92d8e1da53515fa498c6679e1ef6468bde802b7738079a0a9c5e63.
Sep 13 00:16:38.340101 containerd[1444]: time="2025-09-13T00:16:38.340003968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pfkkf,Uid:539fde8c-f183-489d-b86c-6f967d67f422,Namespace:calico-system,Attempt:0,} returns sandbox id \"2ae7fb87ba92d8e1da53515fa498c6679e1ef6468bde802b7738079a0a9c5e63\""
[further bursts of the same FlexVolume probe failure triplet, Sep 13 00:16:38.404 through Sep 13 00:16:38.431, omitted]
Sep 13 00:16:39.217635 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3334001668.mount: Deactivated successfully.
Sep 13 00:16:39.669153 kubelet[2490]: E0913 00:16:39.668719 2490 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fs48x" podUID="0b49a5c7-d8ed-4263-b267-b04f7372f88c"
Sep 13 00:16:39.947159 containerd[1444]: time="2025-09-13T00:16:39.947041439Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:16:39.948165 containerd[1444]: time="2025-09-13T00:16:39.948125882Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33105775"
Sep 13 00:16:39.948989 containerd[1444]: time="2025-09-13T00:16:39.948962364Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:16:39.950808 containerd[1444]: time="2025-09-13T00:16:39.950764289Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:16:39.951681 containerd[1444]: time="2025-09-13T00:16:39.951648371Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 1.832671138s"
Sep 13 00:16:39.951729 containerd[1444]: time="2025-09-13T00:16:39.951684291Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\""
Sep 13 00:16:39.952795 containerd[1444]: time="2025-09-13T00:16:39.952756414Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\""
Sep 13 00:16:39.966363 containerd[1444]: time="2025-09-13T00:16:39.966224329Z" level=info msg="CreateContainer within sandbox \"f7d720e8bbac620b9854fa201953f3f7c2e3b96dc4f5599faa707f4909fd8c65\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Sep 13 00:16:39.988301 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2184299920.mount: Deactivated successfully.
Sep 13 00:16:39.995053 containerd[1444]: time="2025-09-13T00:16:39.994933564Z" level=info msg="CreateContainer within sandbox \"f7d720e8bbac620b9854fa201953f3f7c2e3b96dc4f5599faa707f4909fd8c65\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"04197719613be1a1bbe9d1ec35f30023bbb8e6a23e4eb9deb51dad1cddd500fa\""
Sep 13 00:16:39.995850 containerd[1444]: time="2025-09-13T00:16:39.995747726Z" level=info msg="StartContainer for \"04197719613be1a1bbe9d1ec35f30023bbb8e6a23e4eb9deb51dad1cddd500fa\""
Sep 13 00:16:40.023773 systemd[1]: Started cri-containerd-04197719613be1a1bbe9d1ec35f30023bbb8e6a23e4eb9deb51dad1cddd500fa.scope - libcontainer container 04197719613be1a1bbe9d1ec35f30023bbb8e6a23e4eb9deb51dad1cddd500fa.
Sep 13 00:16:40.061682 containerd[1444]: time="2025-09-13T00:16:40.061642088Z" level=info msg="StartContainer for \"04197719613be1a1bbe9d1ec35f30023bbb8e6a23e4eb9deb51dad1cddd500fa\" returns successfully" Sep 13 00:16:40.725429 kubelet[2490]: E0913 00:16:40.725326 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:40.735939 kubelet[2490]: I0913 00:16:40.735876 2490 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6d789794d6-kljfm" podStartSLOduration=1.901790517 podStartE2EDuration="3.735859019s" podCreationTimestamp="2025-09-13 00:16:37 +0000 UTC" firstStartedPulling="2025-09-13 00:16:38.118536872 +0000 UTC m=+20.536244400" lastFinishedPulling="2025-09-13 00:16:39.952605374 +0000 UTC m=+22.370312902" observedRunningTime="2025-09-13 00:16:40.735822099 +0000 UTC m=+23.153529627" watchObservedRunningTime="2025-09-13 00:16:40.735859019 +0000 UTC m=+23.153566547" Sep 13 00:16:40.810080 kubelet[2490]: E0913 00:16:40.809062 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:16:40.810080 kubelet[2490]: W0913 00:16:40.809082 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:16:40.810080 kubelet[2490]: E0913 00:16:40.809102 2490 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" 
Sep 13 00:16:40.830981 kubelet[2490]: E0913 00:16:40.830963 2490 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:16:40.830981 kubelet[2490]: W0913 00:16:40.830979 2490 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:16:40.831030 kubelet[2490]: E0913 00:16:40.830990 2490 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:16:40.866176 containerd[1444]: time="2025-09-13T00:16:40.865683016Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:40.867113 containerd[1444]: time="2025-09-13T00:16:40.867078340Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4266814" Sep 13 00:16:40.868106 containerd[1444]: time="2025-09-13T00:16:40.868076782Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:40.870516 containerd[1444]: time="2025-09-13T00:16:40.870484788Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:40.871897 containerd[1444]: time="2025-09-13T00:16:40.871858152Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 919.057618ms" Sep 13 00:16:40.871938 containerd[1444]: time="2025-09-13T00:16:40.871899272Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\"" Sep 13 00:16:40.876849 containerd[1444]: time="2025-09-13T00:16:40.876799724Z" level=info msg="CreateContainer within sandbox \"2ae7fb87ba92d8e1da53515fa498c6679e1ef6468bde802b7738079a0a9c5e63\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 13 00:16:40.891755 containerd[1444]: time="2025-09-13T00:16:40.891715720Z" level=info msg="CreateContainer within sandbox \"2ae7fb87ba92d8e1da53515fa498c6679e1ef6468bde802b7738079a0a9c5e63\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"5e4fac982d8fcb5c210c3ce752dc8e50ab7248fecb3626bb5e2143e851191a44\"" Sep 13 00:16:40.892437 containerd[1444]: time="2025-09-13T00:16:40.892386042Z" level=info msg="StartContainer for \"5e4fac982d8fcb5c210c3ce752dc8e50ab7248fecb3626bb5e2143e851191a44\"" Sep 13 00:16:40.919784 systemd[1]: Started cri-containerd-5e4fac982d8fcb5c210c3ce752dc8e50ab7248fecb3626bb5e2143e851191a44.scope - libcontainer container 5e4fac982d8fcb5c210c3ce752dc8e50ab7248fecb3626bb5e2143e851191a44. Sep 13 00:16:40.946729 containerd[1444]: time="2025-09-13T00:16:40.946687895Z" level=info msg="StartContainer for \"5e4fac982d8fcb5c210c3ce752dc8e50ab7248fecb3626bb5e2143e851191a44\" returns successfully" Sep 13 00:16:40.962696 systemd[1]: cri-containerd-5e4fac982d8fcb5c210c3ce752dc8e50ab7248fecb3626bb5e2143e851191a44.scope: Deactivated successfully. Sep 13 00:16:40.990649 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5e4fac982d8fcb5c210c3ce752dc8e50ab7248fecb3626bb5e2143e851191a44-rootfs.mount: Deactivated successfully. 
Sep 13 00:16:41.053358 containerd[1444]: time="2025-09-13T00:16:41.053302948Z" level=info msg="shim disconnected" id=5e4fac982d8fcb5c210c3ce752dc8e50ab7248fecb3626bb5e2143e851191a44 namespace=k8s.io Sep 13 00:16:41.053358 containerd[1444]: time="2025-09-13T00:16:41.053353148Z" level=warning msg="cleaning up after shim disconnected" id=5e4fac982d8fcb5c210c3ce752dc8e50ab7248fecb3626bb5e2143e851191a44 namespace=k8s.io Sep 13 00:16:41.053358 containerd[1444]: time="2025-09-13T00:16:41.053362508Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:16:41.676146 kubelet[2490]: E0913 00:16:41.671748 2490 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fs48x" podUID="0b49a5c7-d8ed-4263-b267-b04f7372f88c" Sep 13 00:16:41.731704 containerd[1444]: time="2025-09-13T00:16:41.731291423Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 13 00:16:41.733907 kubelet[2490]: I0913 00:16:41.733885 2490 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:16:41.735015 kubelet[2490]: E0913 00:16:41.734897 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:43.668981 kubelet[2490]: E0913 00:16:43.668920 2490 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fs48x" podUID="0b49a5c7-d8ed-4263-b267-b04f7372f88c" Sep 13 00:16:44.486416 containerd[1444]: time="2025-09-13T00:16:44.486372688Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:44.487317 containerd[1444]: time="2025-09-13T00:16:44.487135009Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477" Sep 13 00:16:44.488231 containerd[1444]: time="2025-09-13T00:16:44.487996091Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:44.490366 containerd[1444]: time="2025-09-13T00:16:44.490339096Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:44.490913 containerd[1444]: time="2025-09-13T00:16:44.490798816Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 2.759399072s" Sep 13 00:16:44.490913 containerd[1444]: time="2025-09-13T00:16:44.490831496Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\"" Sep 13 00:16:44.494388 containerd[1444]: time="2025-09-13T00:16:44.494350263Z" level=info 
msg="CreateContainer within sandbox \"2ae7fb87ba92d8e1da53515fa498c6679e1ef6468bde802b7738079a0a9c5e63\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 13 00:16:44.505013 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2122770568.mount: Deactivated successfully. Sep 13 00:16:44.507811 containerd[1444]: time="2025-09-13T00:16:44.507763088Z" level=info msg="CreateContainer within sandbox \"2ae7fb87ba92d8e1da53515fa498c6679e1ef6468bde802b7738079a0a9c5e63\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"178502d9c2b8158c47ea1f09d55e524887088c84cd9adbd0f19755ea77a678de\"" Sep 13 00:16:44.508756 containerd[1444]: time="2025-09-13T00:16:44.508277009Z" level=info msg="StartContainer for \"178502d9c2b8158c47ea1f09d55e524887088c84cd9adbd0f19755ea77a678de\"" Sep 13 00:16:44.537766 systemd[1]: Started cri-containerd-178502d9c2b8158c47ea1f09d55e524887088c84cd9adbd0f19755ea77a678de.scope - libcontainer container 178502d9c2b8158c47ea1f09d55e524887088c84cd9adbd0f19755ea77a678de. Sep 13 00:16:44.560122 containerd[1444]: time="2025-09-13T00:16:44.560005427Z" level=info msg="StartContainer for \"178502d9c2b8158c47ea1f09d55e524887088c84cd9adbd0f19755ea77a678de\" returns successfully" Sep 13 00:16:45.104426 systemd[1]: cri-containerd-178502d9c2b8158c47ea1f09d55e524887088c84cd9adbd0f19755ea77a678de.scope: Deactivated successfully. Sep 13 00:16:45.123129 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-178502d9c2b8158c47ea1f09d55e524887088c84cd9adbd0f19755ea77a678de-rootfs.mount: Deactivated successfully. Sep 13 00:16:45.177564 kubelet[2490]: I0913 00:16:45.177445 2490 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 13 00:16:45.179470 containerd[1444]: time="2025-09-13T00:16:45.178934096Z" level=info msg="shim disconnected" id=178502d9c2b8158c47ea1f09d55e524887088c84cd9adbd0f19755ea77a678de namespace=k8s.io Sep 13 00:16:45.179470 containerd[1444]: time="2025-09-13T00:16:45.178986136Z" level=warning msg="cleaning up after shim disconnected" id=178502d9c2b8158c47ea1f09d55e524887088c84cd9adbd0f19755ea77a678de namespace=k8s.io Sep 13 00:16:45.179470 containerd[1444]: time="2025-09-13T00:16:45.178994537Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:16:45.240284 systemd[1]: Created slice kubepods-burstable-pod95502567_42ee_47eb_b6a6_72d10242f778.slice - libcontainer container kubepods-burstable-pod95502567_42ee_47eb_b6a6_72d10242f778.slice. Sep 13 00:16:45.249486 systemd[1]: Created slice kubepods-burstable-podda7a220d_d282_4f5a_9d7b_1fe40051e284.slice - libcontainer container kubepods-burstable-podda7a220d_d282_4f5a_9d7b_1fe40051e284.slice. 
Sep 13 00:16:45.254807 kubelet[2490]: I0913 00:16:45.254773 2490 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/68701d44-98db-4175-b680-f21da2b19c48-tigera-ca-bundle\") pod \"calico-kube-controllers-fd6458858-dk8sk\" (UID: \"68701d44-98db-4175-b680-f21da2b19c48\") " pod="calico-system/calico-kube-controllers-fd6458858-dk8sk" Sep 13 00:16:45.254807 kubelet[2490]: I0913 00:16:45.254810 2490 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnqcd\" (UniqueName: \"kubernetes.io/projected/68701d44-98db-4175-b680-f21da2b19c48-kube-api-access-jnqcd\") pod \"calico-kube-controllers-fd6458858-dk8sk\" (UID: \"68701d44-98db-4175-b680-f21da2b19c48\") " pod="calico-system/calico-kube-controllers-fd6458858-dk8sk" Sep 13 00:16:45.254948 kubelet[2490]: I0913 00:16:45.254832 2490 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f97gr\" (UniqueName: \"kubernetes.io/projected/91ec7ae0-fffa-447c-b970-cf0f2591c90d-kube-api-access-f97gr\") pod \"calico-apiserver-859b9fb76c-hnqvt\" (UID: \"91ec7ae0-fffa-447c-b970-cf0f2591c90d\") " pod="calico-apiserver/calico-apiserver-859b9fb76c-hnqvt" Sep 13 00:16:45.254948 kubelet[2490]: I0913 00:16:45.254850 2490 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/91ec7ae0-fffa-447c-b970-cf0f2591c90d-calico-apiserver-certs\") pod \"calico-apiserver-859b9fb76c-hnqvt\" (UID: \"91ec7ae0-fffa-447c-b970-cf0f2591c90d\") " pod="calico-apiserver/calico-apiserver-859b9fb76c-hnqvt" Sep 13 00:16:45.254948 kubelet[2490]: I0913 00:16:45.254865 2490 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzngk\" (UniqueName: \"kubernetes.io/projected/95502567-42ee-47eb-b6a6-72d10242f778-kube-api-access-vzngk\") pod \"coredns-674b8bbfcf-sczp2\" (UID: \"95502567-42ee-47eb-b6a6-72d10242f778\") " pod="kube-system/coredns-674b8bbfcf-sczp2" Sep 13 00:16:45.254948 kubelet[2490]: I0913 00:16:45.254889 2490 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c8e6805e-32c3-4cf6-8f89-9d8311bf375d-calico-apiserver-certs\") pod \"calico-apiserver-859b9fb76c-tctwm\" (UID: \"c8e6805e-32c3-4cf6-8f89-9d8311bf375d\") " pod="calico-apiserver/calico-apiserver-859b9fb76c-tctwm" Sep 13 00:16:45.254948 kubelet[2490]: I0913 00:16:45.254906 2490 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljzsc\" (UniqueName: \"kubernetes.io/projected/c8e6805e-32c3-4cf6-8f89-9d8311bf375d-kube-api-access-ljzsc\") pod \"calico-apiserver-859b9fb76c-tctwm\" (UID: \"c8e6805e-32c3-4cf6-8f89-9d8311bf375d\") " pod="calico-apiserver/calico-apiserver-859b9fb76c-tctwm" Sep 13 00:16:45.255067 kubelet[2490]: I0913 00:16:45.254922 2490 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5c2p\" (UniqueName: \"kubernetes.io/projected/da7a220d-d282-4f5a-9d7b-1fe40051e284-kube-api-access-x5c2p\") pod \"coredns-674b8bbfcf-qj2mh\" (UID: \"da7a220d-d282-4f5a-9d7b-1fe40051e284\") " pod="kube-system/coredns-674b8bbfcf-qj2mh" Sep 13 00:16:45.255067 kubelet[2490]: I0913 00:16:45.254939 2490 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da7a220d-d282-4f5a-9d7b-1fe40051e284-config-volume\") pod \"coredns-674b8bbfcf-qj2mh\" (UID: \"da7a220d-d282-4f5a-9d7b-1fe40051e284\") " pod="kube-system/coredns-674b8bbfcf-qj2mh" Sep 13 00:16:45.255067 kubelet[2490]: I0913 00:16:45.254955 2490 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/95502567-42ee-47eb-b6a6-72d10242f778-config-volume\") pod \"coredns-674b8bbfcf-sczp2\" (UID: \"95502567-42ee-47eb-b6a6-72d10242f778\") " pod="kube-system/coredns-674b8bbfcf-sczp2" Sep 13 00:16:45.255067 kubelet[2490]: I0913 00:16:45.254971 2490 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f7583c8d-0290-45df-bcda-9e97c4955b03-whisker-backend-key-pair\") pod \"whisker-85c5c448c9-pnhk4\" (UID: \"f7583c8d-0290-45df-bcda-9e97c4955b03\") " pod="calico-system/whisker-85c5c448c9-pnhk4" Sep 13 00:16:45.255067 kubelet[2490]: I0913 00:16:45.254986 2490 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jvpt\" (UniqueName: \"kubernetes.io/projected/f7583c8d-0290-45df-bcda-9e97c4955b03-kube-api-access-4jvpt\") pod \"whisker-85c5c448c9-pnhk4\" (UID: \"f7583c8d-0290-45df-bcda-9e97c4955b03\") " pod="calico-system/whisker-85c5c448c9-pnhk4" Sep 13 00:16:45.255189 kubelet[2490]: I0913 00:16:45.255000 2490 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f7583c8d-0290-45df-bcda-9e97c4955b03-whisker-ca-bundle\") pod \"whisker-85c5c448c9-pnhk4\" (UID: \"f7583c8d-0290-45df-bcda-9e97c4955b03\") " pod="calico-system/whisker-85c5c448c9-pnhk4" Sep 13 00:16:45.259009 systemd[1]: Created slice kubepods-besteffort-podc8e6805e_32c3_4cf6_8f89_9d8311bf375d.slice - libcontainer container kubepods-besteffort-podc8e6805e_32c3_4cf6_8f89_9d8311bf375d.slice. Sep 13 00:16:45.267370 systemd[1]: Created slice kubepods-besteffort-pod68701d44_98db_4175_b680_f21da2b19c48.slice - libcontainer container kubepods-besteffort-pod68701d44_98db_4175_b680_f21da2b19c48.slice. Sep 13 00:16:45.277270 systemd[1]: Created slice kubepods-besteffort-podf7583c8d_0290_45df_bcda_9e97c4955b03.slice - libcontainer container kubepods-besteffort-podf7583c8d_0290_45df_bcda_9e97c4955b03.slice. Sep 13 00:16:45.282484 systemd[1]: Created slice kubepods-besteffort-pod91ec7ae0_fffa_447c_b970_cf0f2591c90d.slice - libcontainer container kubepods-besteffort-pod91ec7ae0_fffa_447c_b970_cf0f2591c90d.slice. Sep 13 00:16:45.287755 systemd[1]: Created slice kubepods-besteffort-podc78523f1_1b2e_44f8_9fd4_6f6c075a99ad.slice - libcontainer container kubepods-besteffort-podc78523f1_1b2e_44f8_9fd4_6f6c075a99ad.slice. 
Sep 13 00:16:45.356341 kubelet[2490]: I0913 00:16:45.355875 2490 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/c78523f1-1b2e-44f8-9fd4-6f6c075a99ad-goldmane-key-pair\") pod \"goldmane-54d579b49d-22r4m\" (UID: \"c78523f1-1b2e-44f8-9fd4-6f6c075a99ad\") " pod="calico-system/goldmane-54d579b49d-22r4m" Sep 13 00:16:45.356341 kubelet[2490]: I0913 00:16:45.355919 2490 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkd5d\" (UniqueName: \"kubernetes.io/projected/c78523f1-1b2e-44f8-9fd4-6f6c075a99ad-kube-api-access-zkd5d\") pod \"goldmane-54d579b49d-22r4m\" (UID: \"c78523f1-1b2e-44f8-9fd4-6f6c075a99ad\") " pod="calico-system/goldmane-54d579b49d-22r4m" Sep 13 00:16:45.356341 kubelet[2490]: I0913 00:16:45.355964 2490 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c78523f1-1b2e-44f8-9fd4-6f6c075a99ad-config\") pod \"goldmane-54d579b49d-22r4m\" (UID: \"c78523f1-1b2e-44f8-9fd4-6f6c075a99ad\") " pod="calico-system/goldmane-54d579b49d-22r4m" Sep 13 00:16:45.356341 kubelet[2490]: I0913 00:16:45.356056 2490 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c78523f1-1b2e-44f8-9fd4-6f6c075a99ad-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-22r4m\" (UID: \"c78523f1-1b2e-44f8-9fd4-6f6c075a99ad\") " pod="calico-system/goldmane-54d579b49d-22r4m" Sep 13 00:16:45.544825 kubelet[2490]: E0913 00:16:45.544779 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:45.545510 containerd[1444]: time="2025-09-13T00:16:45.545426906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-sczp2,Uid:95502567-42ee-47eb-b6a6-72d10242f778,Namespace:kube-system,Attempt:0,}" Sep 13 00:16:45.553563 kubelet[2490]: E0913 00:16:45.553530 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:45.554504 containerd[1444]: time="2025-09-13T00:16:45.554154202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qj2mh,Uid:da7a220d-d282-4f5a-9d7b-1fe40051e284,Namespace:kube-system,Attempt:0,}" Sep 13 00:16:45.567514 containerd[1444]: time="2025-09-13T00:16:45.567281145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-859b9fb76c-tctwm,Uid:c8e6805e-32c3-4cf6-8f89-9d8311bf375d,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:16:45.572079 containerd[1444]: time="2025-09-13T00:16:45.571153072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-fd6458858-dk8sk,Uid:68701d44-98db-4175-b680-f21da2b19c48,Namespace:calico-system,Attempt:0,}" Sep 13 00:16:45.580049 containerd[1444]: time="2025-09-13T00:16:45.580010247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-85c5c448c9-pnhk4,Uid:f7583c8d-0290-45df-bcda-9e97c4955b03,Namespace:calico-system,Attempt:0,}" Sep 13 00:16:45.591636 containerd[1444]: time="2025-09-13T00:16:45.590216985Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-859b9fb76c-hnqvt,Uid:91ec7ae0-fffa-447c-b970-cf0f2591c90d,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:16:45.592895 containerd[1444]: time="2025-09-13T00:16:45.592353589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-22r4m,Uid:c78523f1-1b2e-44f8-9fd4-6f6c075a99ad,Namespace:calico-system,Attempt:0,}" Sep 13 00:16:45.676082 systemd[1]: Created slice kubepods-besteffort-pod0b49a5c7_d8ed_4263_b267_b04f7372f88c.slice - libcontainer container kubepods-besteffort-pod0b49a5c7_d8ed_4263_b267_b04f7372f88c.slice. Sep 13 00:16:45.678969 containerd[1444]: time="2025-09-13T00:16:45.678606302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fs48x,Uid:0b49a5c7-d8ed-4263-b267-b04f7372f88c,Namespace:calico-system,Attempt:0,}" Sep 13 00:16:45.718392 containerd[1444]: time="2025-09-13T00:16:45.718341413Z" level=error msg="Failed to destroy network for sandbox \"b46982dd6853b5e80e42bf4510f8168884956bedb7fb50987d00163f9bd6678a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:16:45.721000 containerd[1444]: time="2025-09-13T00:16:45.720840057Z" level=error msg="Failed to destroy network for sandbox \"48be175f15263d25946c5fafd5d93ea0ba82b221a8a78f549856a0774ad11b96\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:16:45.721284 containerd[1444]: time="2025-09-13T00:16:45.721247658Z" level=error msg="encountered an error cleaning up failed sandbox \"48be175f15263d25946c5fafd5d93ea0ba82b221a8a78f549856a0774ad11b96\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:16:45.721336 containerd[1444]: time="2025-09-13T00:16:45.721304178Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qj2mh,Uid:da7a220d-d282-4f5a-9d7b-1fe40051e284,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"48be175f15263d25946c5fafd5d93ea0ba82b221a8a78f549856a0774ad11b96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:16:45.721466 containerd[1444]: time="2025-09-13T00:16:45.721325018Z" level=error msg="encountered an error cleaning up failed sandbox \"b46982dd6853b5e80e42bf4510f8168884956bedb7fb50987d00163f9bd6678a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:16:45.721724 containerd[1444]: time="2025-09-13T00:16:45.721467018Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-fd6458858-dk8sk,Uid:68701d44-98db-4175-b680-f21da2b19c48,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b46982dd6853b5e80e42bf4510f8168884956bedb7fb50987d00163f9bd6678a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" Sep 13 00:16:45.721812 kubelet[2490]: E0913 00:16:45.721564 2490 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48be175f15263d25946c5fafd5d93ea0ba82b221a8a78f549856a0774ad11b96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:16:45.722967 kubelet[2490]: E0913 00:16:45.721945 2490 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b46982dd6853b5e80e42bf4510f8168884956bedb7fb50987d00163f9bd6678a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:16:45.722967 kubelet[2490]: E0913 00:16:45.721977 2490 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b46982dd6853b5e80e42bf4510f8168884956bedb7fb50987d00163f9bd6678a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-fd6458858-dk8sk" Sep 13 00:16:45.722967 kubelet[2490]: E0913 00:16:45.721996 2490 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b46982dd6853b5e80e42bf4510f8168884956bedb7fb50987d00163f9bd6678a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-fd6458858-dk8sk" Sep 13 00:16:45.723100 containerd[1444]: time="2025-09-13T00:16:45.722568340Z" level=error msg="Failed to destroy network for sandbox \"2790ce029e84719cd3b1faf406cefb51cef139ee1c33ef00950cc85c9ef9ccbb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:16:45.723144 kubelet[2490]: E0913 00:16:45.722050 2490 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-fd6458858-dk8sk_calico-system(68701d44-98db-4175-b680-f21da2b19c48)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-fd6458858-dk8sk_calico-system(68701d44-98db-4175-b680-f21da2b19c48)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b46982dd6853b5e80e42bf4510f8168884956bedb7fb50987d00163f9bd6678a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-fd6458858-dk8sk" podUID="68701d44-98db-4175-b680-f21da2b19c48" Sep 13 00:16:45.723425 kubelet[2490]: E0913 00:16:45.721709 2490 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48be175f15263d25946c5fafd5d93ea0ba82b221a8a78f549856a0774ad11b96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-qj2mh" Sep 13 00:16:45.723425 kubelet[2490]: E0913 00:16:45.723334 2490 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48be175f15263d25946c5fafd5d93ea0ba82b221a8a78f549856a0774ad11b96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-qj2mh" Sep 13 00:16:45.723425 kubelet[2490]: E0913 00:16:45.723389 2490 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-qj2mh_kube-system(da7a220d-d282-4f5a-9d7b-1fe40051e284)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-qj2mh_kube-system(da7a220d-d282-4f5a-9d7b-1fe40051e284)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"48be175f15263d25946c5fafd5d93ea0ba82b221a8a78f549856a0774ad11b96\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-qj2mh" podUID="da7a220d-d282-4f5a-9d7b-1fe40051e284" Sep 13 00:16:45.723973 containerd[1444]: time="2025-09-13T00:16:45.723788462Z" level=error msg="encountered an error cleaning up failed sandbox \"2790ce029e84719cd3b1faf406cefb51cef139ee1c33ef00950cc85c9ef9ccbb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:16:45.723973 containerd[1444]: time="2025-09-13T00:16:45.723839142Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-sczp2,Uid:95502567-42ee-47eb-b6a6-72d10242f778,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2790ce029e84719cd3b1faf406cefb51cef139ee1c33ef00950cc85c9ef9ccbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:16:45.724064 kubelet[2490]: E0913 00:16:45.723991 2490 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2790ce029e84719cd3b1faf406cefb51cef139ee1c33ef00950cc85c9ef9ccbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:16:45.724064 kubelet[2490]: E0913 00:16:45.724030 2490 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2790ce029e84719cd3b1faf406cefb51cef139ee1c33ef00950cc85c9ef9ccbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-sczp2" Sep 13 00:16:45.724064 kubelet[2490]: E0913 00:16:45.724048 2490 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"2790ce029e84719cd3b1faf406cefb51cef139ee1c33ef00950cc85c9ef9ccbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-sczp2" Sep 13 00:16:45.724151 kubelet[2490]: E0913 00:16:45.724079 2490 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-sczp2_kube-system(95502567-42ee-47eb-b6a6-72d10242f778)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-sczp2_kube-system(95502567-42ee-47eb-b6a6-72d10242f778)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2790ce029e84719cd3b1faf406cefb51cef139ee1c33ef00950cc85c9ef9ccbb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-sczp2" podUID="95502567-42ee-47eb-b6a6-72d10242f778" Sep 13 00:16:45.733173 containerd[1444]: time="2025-09-13T00:16:45.732874198Z" level=error msg="Failed to destroy network for sandbox \"6f045093ea64e7ab6523efd22b71b84a995a0b8aa3bb0d7c2b4957db7a20f022\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:16:45.734786 containerd[1444]: time="2025-09-13T00:16:45.734748682Z" level=error msg="encountered an error cleaning up failed sandbox \"6f045093ea64e7ab6523efd22b71b84a995a0b8aa3bb0d7c2b4957db7a20f022\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:16:45.734923 containerd[1444]: time="2025-09-13T00:16:45.734898802Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-859b9fb76c-tctwm,Uid:c8e6805e-32c3-4cf6-8f89-9d8311bf375d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6f045093ea64e7ab6523efd22b71b84a995a0b8aa3bb0d7c2b4957db7a20f022\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:16:45.735323 kubelet[2490]: E0913 00:16:45.735167 2490 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f045093ea64e7ab6523efd22b71b84a995a0b8aa3bb0d7c2b4957db7a20f022\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:16:45.735323 kubelet[2490]: E0913 00:16:45.735217 2490 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f045093ea64e7ab6523efd22b71b84a995a0b8aa3bb0d7c2b4957db7a20f022\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-859b9fb76c-tctwm" Sep 13 00:16:45.735323 kubelet[2490]: E0913 00:16:45.735235 2490 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f045093ea64e7ab6523efd22b71b84a995a0b8aa3bb0d7c2b4957db7a20f022\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-859b9fb76c-tctwm" Sep 13 00:16:45.735585 kubelet[2490]: E0913 00:16:45.735274 2490 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-859b9fb76c-tctwm_calico-apiserver(c8e6805e-32c3-4cf6-8f89-9d8311bf375d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-859b9fb76c-tctwm_calico-apiserver(c8e6805e-32c3-4cf6-8f89-9d8311bf375d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6f045093ea64e7ab6523efd22b71b84a995a0b8aa3bb0d7c2b4957db7a20f022\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-859b9fb76c-tctwm" podUID="c8e6805e-32c3-4cf6-8f89-9d8311bf375d" Sep 13 00:16:45.746347 containerd[1444]: time="2025-09-13T00:16:45.746272502Z" level=error msg="Failed to destroy network for sandbox \"5f98a97fe507fc5b0a47df8b5f016508e0f6a34fe089e993814fe3cc835fd60a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:16:45.747306 containerd[1444]: time="2025-09-13T00:16:45.747177624Z" level=error msg="encountered an error cleaning up failed sandbox \"5f98a97fe507fc5b0a47df8b5f016508e0f6a34fe089e993814fe3cc835fd60a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:16:45.747306 containerd[1444]: time="2025-09-13T00:16:45.747248104Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-859b9fb76c-hnqvt,Uid:91ec7ae0-fffa-447c-b970-cf0f2591c90d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5f98a97fe507fc5b0a47df8b5f016508e0f6a34fe089e993814fe3cc835fd60a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:16:45.747782 kubelet[2490]: E0913 00:16:45.747602 2490 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f98a97fe507fc5b0a47df8b5f016508e0f6a34fe089e993814fe3cc835fd60a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:16:45.747782 kubelet[2490]: E0913 00:16:45.747649 2490 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f98a97fe507fc5b0a47df8b5f016508e0f6a34fe089e993814fe3cc835fd60a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-859b9fb76c-hnqvt" 
Sep 13 00:16:45.747782 kubelet[2490]: E0913 00:16:45.747668 2490 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f98a97fe507fc5b0a47df8b5f016508e0f6a34fe089e993814fe3cc835fd60a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-859b9fb76c-hnqvt" Sep 13 00:16:45.747911 kubelet[2490]: E0913 00:16:45.747723 2490 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-859b9fb76c-hnqvt_calico-apiserver(91ec7ae0-fffa-447c-b970-cf0f2591c90d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-859b9fb76c-hnqvt_calico-apiserver(91ec7ae0-fffa-447c-b970-cf0f2591c90d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5f98a97fe507fc5b0a47df8b5f016508e0f6a34fe089e993814fe3cc835fd60a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-859b9fb76c-hnqvt" podUID="91ec7ae0-fffa-447c-b970-cf0f2591c90d" Sep 13 00:16:45.749585 containerd[1444]: time="2025-09-13T00:16:45.749376828Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 13 00:16:45.749703 kubelet[2490]: I0913 00:16:45.749474 2490 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b46982dd6853b5e80e42bf4510f8168884956bedb7fb50987d00163f9bd6678a" Sep 13 00:16:45.754893 containerd[1444]: time="2025-09-13T00:16:45.754837517Z" level=info msg="StopPodSandbox for \"b46982dd6853b5e80e42bf4510f8168884956bedb7fb50987d00163f9bd6678a\"" Sep 13 00:16:45.755425 containerd[1444]: time="2025-09-13T00:16:45.755097838Z" level=info msg="Ensure that sandbox b46982dd6853b5e80e42bf4510f8168884956bedb7fb50987d00163f9bd6678a in task-service has been cleanup successfully" Sep 13 00:16:45.758284 kubelet[2490]: I0913 00:16:45.757640 2490 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f045093ea64e7ab6523efd22b71b84a995a0b8aa3bb0d7c2b4957db7a20f022" Sep 13 00:16:45.759401 containerd[1444]: time="2025-09-13T00:16:45.759353285Z" level=info msg="StopPodSandbox for \"6f045093ea64e7ab6523efd22b71b84a995a0b8aa3bb0d7c2b4957db7a20f022\"" Sep 13 00:16:45.760484 containerd[1444]: time="2025-09-13T00:16:45.760437247Z" level=info msg="Ensure that sandbox 6f045093ea64e7ab6523efd22b71b84a995a0b8aa3bb0d7c2b4957db7a20f022 in task-service has been cleanup successfully" Sep 13 00:16:45.762859 kubelet[2490]: I0913 00:16:45.762824 2490 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2790ce029e84719cd3b1faf406cefb51cef139ee1c33ef00950cc85c9ef9ccbb" Sep 13 00:16:45.765311 containerd[1444]: time="2025-09-13T00:16:45.765252776Z" level=info msg="StopPodSandbox for \"2790ce029e84719cd3b1faf406cefb51cef139ee1c33ef00950cc85c9ef9ccbb\"" Sep 13 00:16:45.766618 containerd[1444]: time="2025-09-13T00:16:45.765926537Z" level=info msg="Ensure that sandbox 2790ce029e84719cd3b1faf406cefb51cef139ee1c33ef00950cc85c9ef9ccbb in task-service has been cleanup successfully" Sep 13 00:16:45.766694 kubelet[2490]: I0913 00:16:45.766042 2490 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="48be175f15263d25946c5fafd5d93ea0ba82b221a8a78f549856a0774ad11b96" Sep 13 00:16:45.767719 containerd[1444]: time="2025-09-13T00:16:45.766865859Z" level=info msg="StopPodSandbox for \"48be175f15263d25946c5fafd5d93ea0ba82b221a8a78f549856a0774ad11b96\"" Sep 13 00:16:45.767901 containerd[1444]: time="2025-09-13T00:16:45.767877460Z" level=info msg="Ensure that sandbox 48be175f15263d25946c5fafd5d93ea0ba82b221a8a78f549856a0774ad11b96 in task-service has been cleanup successfully" Sep 13 00:16:45.784028 containerd[1444]: time="2025-09-13T00:16:45.783977369Z" level=error msg="Failed to destroy network for sandbox \"61578558f0fd523090e1f7292c78ab7dcd7551b3847bedd499846f518ca7ef01\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:16:45.784388 containerd[1444]: time="2025-09-13T00:16:45.784348370Z" level=error msg="encountered an error cleaning up failed sandbox \"61578558f0fd523090e1f7292c78ab7dcd7551b3847bedd499846f518ca7ef01\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:16:45.784449 containerd[1444]: time="2025-09-13T00:16:45.784412170Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-85c5c448c9-pnhk4,Uid:f7583c8d-0290-45df-bcda-9e97c4955b03,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"61578558f0fd523090e1f7292c78ab7dcd7551b3847bedd499846f518ca7ef01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:16:45.784703 kubelet[2490]: E0913 00:16:45.784653 2490 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61578558f0fd523090e1f7292c78ab7dcd7551b3847bedd499846f518ca7ef01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:16:45.784779 kubelet[2490]: E0913 00:16:45.784722 2490 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61578558f0fd523090e1f7292c78ab7dcd7551b3847bedd499846f518ca7ef01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-85c5c448c9-pnhk4" Sep 13 00:16:45.784779 kubelet[2490]: E0913 00:16:45.784743 2490 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61578558f0fd523090e1f7292c78ab7dcd7551b3847bedd499846f518ca7ef01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-85c5c448c9-pnhk4" Sep 13 00:16:45.784830 kubelet[2490]: E0913 00:16:45.784787 2490 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-85c5c448c9-pnhk4_calico-system(f7583c8d-0290-45df-bcda-9e97c4955b03)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-85c5c448c9-pnhk4_calico-system(f7583c8d-0290-45df-bcda-9e97c4955b03)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"61578558f0fd523090e1f7292c78ab7dcd7551b3847bedd499846f518ca7ef01\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-85c5c448c9-pnhk4" podUID="f7583c8d-0290-45df-bcda-9e97c4955b03" Sep 13 00:16:45.798103 containerd[1444]: time="2025-09-13T00:16:45.797895194Z" level=error msg="Failed to destroy network for sandbox \"d1d75c534df5d5f68fb52f69ac65e56c2c79024319ca0e563bfbaa673fc45661\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:16:45.800240 containerd[1444]: time="2025-09-13T00:16:45.800190398Z" level=error msg="encountered an error cleaning up failed sandbox \"d1d75c534df5d5f68fb52f69ac65e56c2c79024319ca0e563bfbaa673fc45661\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:16:45.800347 containerd[1444]: time="2025-09-13T00:16:45.800257038Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-22r4m,Uid:c78523f1-1b2e-44f8-9fd4-6f6c075a99ad,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d1d75c534df5d5f68fb52f69ac65e56c2c79024319ca0e563bfbaa673fc45661\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:16:45.800955 kubelet[2490]: E0913 00:16:45.800551 2490 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1d75c534df5d5f68fb52f69ac65e56c2c79024319ca0e563bfbaa673fc45661\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:16:45.800955 kubelet[2490]: E0913 00:16:45.800624 2490 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1d75c534df5d5f68fb52f69ac65e56c2c79024319ca0e563bfbaa673fc45661\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-22r4m" Sep 13 00:16:45.800955 kubelet[2490]: E0913 00:16:45.800646 2490 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1d75c534df5d5f68fb52f69ac65e56c2c79024319ca0e563bfbaa673fc45661\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-22r4m" Sep 13 00:16:45.801083 kubelet[2490]: E0913 00:16:45.800693 2490 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"goldmane-54d579b49d-22r4m_calico-system(c78523f1-1b2e-44f8-9fd4-6f6c075a99ad)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-22r4m_calico-system(c78523f1-1b2e-44f8-9fd4-6f6c075a99ad)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d1d75c534df5d5f68fb52f69ac65e56c2c79024319ca0e563bfbaa673fc45661\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-22r4m" podUID="c78523f1-1b2e-44f8-9fd4-6f6c075a99ad" Sep 13 00:16:45.810056 containerd[1444]: time="2025-09-13T00:16:45.810002295Z" level=error msg="StopPodSandbox for \"2790ce029e84719cd3b1faf406cefb51cef139ee1c33ef00950cc85c9ef9ccbb\" failed" error="failed to destroy network for sandbox \"2790ce029e84719cd3b1faf406cefb51cef139ee1c33ef00950cc85c9ef9ccbb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:16:45.810490 kubelet[2490]: E0913 00:16:45.810226 2490 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2790ce029e84719cd3b1faf406cefb51cef139ee1c33ef00950cc85c9ef9ccbb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2790ce029e84719cd3b1faf406cefb51cef139ee1c33ef00950cc85c9ef9ccbb" Sep 13 00:16:45.810490 kubelet[2490]: E0913 00:16:45.810293 2490 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2790ce029e84719cd3b1faf406cefb51cef139ee1c33ef00950cc85c9ef9ccbb"} Sep 13 00:16:45.810490 kubelet[2490]: E0913 00:16:45.810350 2490 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"95502567-42ee-47eb-b6a6-72d10242f778\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2790ce029e84719cd3b1faf406cefb51cef139ee1c33ef00950cc85c9ef9ccbb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:16:45.810490 kubelet[2490]: E0913 00:16:45.810373 2490 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"95502567-42ee-47eb-b6a6-72d10242f778\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2790ce029e84719cd3b1faf406cefb51cef139ee1c33ef00950cc85c9ef9ccbb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-sczp2" podUID="95502567-42ee-47eb-b6a6-72d10242f778" Sep 13 00:16:45.812983 containerd[1444]: time="2025-09-13T00:16:45.812877500Z" level=error msg="StopPodSandbox for \"b46982dd6853b5e80e42bf4510f8168884956bedb7fb50987d00163f9bd6678a\" failed" error="failed to destroy network for sandbox \"b46982dd6853b5e80e42bf4510f8168884956bedb7fb50987d00163f9bd6678a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Sep 13 00:16:45.813395 kubelet[2490]: E0913 00:16:45.813090 2490 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b46982dd6853b5e80e42bf4510f8168884956bedb7fb50987d00163f9bd6678a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b46982dd6853b5e80e42bf4510f8168884956bedb7fb50987d00163f9bd6678a" Sep 13 00:16:45.813395 kubelet[2490]: E0913 00:16:45.813148 2490 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b46982dd6853b5e80e42bf4510f8168884956bedb7fb50987d00163f9bd6678a"} Sep 13 00:16:45.813395 kubelet[2490]: E0913 00:16:45.813177 2490 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"68701d44-98db-4175-b680-f21da2b19c48\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b46982dd6853b5e80e42bf4510f8168884956bedb7fb50987d00163f9bd6678a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:16:45.813395 kubelet[2490]: E0913 00:16:45.813201 2490 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"68701d44-98db-4175-b680-f21da2b19c48\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b46982dd6853b5e80e42bf4510f8168884956bedb7fb50987d00163f9bd6678a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-fd6458858-dk8sk" podUID="68701d44-98db-4175-b680-f21da2b19c48" Sep 13 00:16:45.816166 containerd[1444]: time="2025-09-13T00:16:45.816126746Z" level=error msg="StopPodSandbox for \"6f045093ea64e7ab6523efd22b71b84a995a0b8aa3bb0d7c2b4957db7a20f022\" failed" error="failed to destroy network for sandbox \"6f045093ea64e7ab6523efd22b71b84a995a0b8aa3bb0d7c2b4957db7a20f022\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:16:45.816345 kubelet[2490]: E0913 00:16:45.816310 2490 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6f045093ea64e7ab6523efd22b71b84a995a0b8aa3bb0d7c2b4957db7a20f022\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6f045093ea64e7ab6523efd22b71b84a995a0b8aa3bb0d7c2b4957db7a20f022" Sep 13 00:16:45.816408 kubelet[2490]: E0913 00:16:45.816355 2490 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6f045093ea64e7ab6523efd22b71b84a995a0b8aa3bb0d7c2b4957db7a20f022"} Sep 13 00:16:45.816408 kubelet[2490]: E0913 00:16:45.816384 2490 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c8e6805e-32c3-4cf6-8f89-9d8311bf375d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"6f045093ea64e7ab6523efd22b71b84a995a0b8aa3bb0d7c2b4957db7a20f022\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:16:45.816490 kubelet[2490]: E0913 00:16:45.816404 2490 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c8e6805e-32c3-4cf6-8f89-9d8311bf375d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6f045093ea64e7ab6523efd22b71b84a995a0b8aa3bb0d7c2b4957db7a20f022\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-859b9fb76c-tctwm" podUID="c8e6805e-32c3-4cf6-8f89-9d8311bf375d" Sep 13 00:16:45.820564 containerd[1444]: time="2025-09-13T00:16:45.820463914Z" level=error msg="StopPodSandbox for \"48be175f15263d25946c5fafd5d93ea0ba82b221a8a78f549856a0774ad11b96\" failed" error="failed to destroy network for sandbox \"48be175f15263d25946c5fafd5d93ea0ba82b221a8a78f549856a0774ad11b96\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:16:45.820814 kubelet[2490]: E0913 00:16:45.820693 2490 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"48be175f15263d25946c5fafd5d93ea0ba82b221a8a78f549856a0774ad11b96\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="48be175f15263d25946c5fafd5d93ea0ba82b221a8a78f549856a0774ad11b96" Sep 13 00:16:45.820814 kubelet[2490]: E0913 00:16:45.820736 2490 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"48be175f15263d25946c5fafd5d93ea0ba82b221a8a78f549856a0774ad11b96"} Sep 13 00:16:45.820814 kubelet[2490]: E0913 00:16:45.820760 2490 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"da7a220d-d282-4f5a-9d7b-1fe40051e284\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"48be175f15263d25946c5fafd5d93ea0ba82b221a8a78f549856a0774ad11b96\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:16:45.820814 kubelet[2490]: E0913 00:16:45.820778 2490 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"da7a220d-d282-4f5a-9d7b-1fe40051e284\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"48be175f15263d25946c5fafd5d93ea0ba82b221a8a78f549856a0774ad11b96\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-qj2mh" podUID="da7a220d-d282-4f5a-9d7b-1fe40051e284" Sep 13 00:16:45.825343 containerd[1444]: time="2025-09-13T00:16:45.825293522Z" level=error msg="Failed to destroy network for sandbox 
\"72bb7d3d6bce7112110c73bc80c881c9ae46c5d3e9b2c37640418d330431e6a6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:16:45.825674 containerd[1444]: time="2025-09-13T00:16:45.825648923Z" level=error msg="encountered an error cleaning up failed sandbox \"72bb7d3d6bce7112110c73bc80c881c9ae46c5d3e9b2c37640418d330431e6a6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:16:45.825718 containerd[1444]: time="2025-09-13T00:16:45.825698163Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fs48x,Uid:0b49a5c7-d8ed-4263-b267-b04f7372f88c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"72bb7d3d6bce7112110c73bc80c881c9ae46c5d3e9b2c37640418d330431e6a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:16:45.825958 kubelet[2490]: E0913 00:16:45.825899 2490 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72bb7d3d6bce7112110c73bc80c881c9ae46c5d3e9b2c37640418d330431e6a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:16:45.825997 kubelet[2490]: E0913 00:16:45.825978 2490 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72bb7d3d6bce7112110c73bc80c881c9ae46c5d3e9b2c37640418d330431e6a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fs48x" Sep 13 00:16:45.826020 kubelet[2490]: E0913 00:16:45.825997 2490 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72bb7d3d6bce7112110c73bc80c881c9ae46c5d3e9b2c37640418d330431e6a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fs48x" Sep 13 00:16:45.826075 kubelet[2490]: E0913 00:16:45.826051 2490 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-fs48x_calico-system(0b49a5c7-d8ed-4263-b267-b04f7372f88c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-fs48x_calico-system(0b49a5c7-d8ed-4263-b267-b04f7372f88c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"72bb7d3d6bce7112110c73bc80c881c9ae46c5d3e9b2c37640418d330431e6a6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fs48x" podUID="0b49a5c7-d8ed-4263-b267-b04f7372f88c" Sep 13 00:16:46.508521 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-48be175f15263d25946c5fafd5d93ea0ba82b221a8a78f549856a0774ad11b96-shm.mount: Deactivated successfully. Sep 13 00:16:46.509649 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2790ce029e84719cd3b1faf406cefb51cef139ee1c33ef00950cc85c9ef9ccbb-shm.mount: Deactivated successfully. Sep 13 00:16:46.770453 kubelet[2490]: I0913 00:16:46.770332 2490 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72bb7d3d6bce7112110c73bc80c881c9ae46c5d3e9b2c37640418d330431e6a6" Sep 13 00:16:46.771493 containerd[1444]: time="2025-09-13T00:16:46.770884593Z" level=info msg="StopPodSandbox for \"72bb7d3d6bce7112110c73bc80c881c9ae46c5d3e9b2c37640418d330431e6a6\"" Sep 13 00:16:46.771493 containerd[1444]: time="2025-09-13T00:16:46.771048273Z" level=info msg="Ensure that sandbox 72bb7d3d6bce7112110c73bc80c881c9ae46c5d3e9b2c37640418d330431e6a6 in task-service has been cleanup successfully" Sep 13 00:16:46.772127 kubelet[2490]: I0913 00:16:46.772088 2490 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f98a97fe507fc5b0a47df8b5f016508e0f6a34fe089e993814fe3cc835fd60a" Sep 13 00:16:46.772766 containerd[1444]: time="2025-09-13T00:16:46.772735236Z" level=info msg="StopPodSandbox for \"5f98a97fe507fc5b0a47df8b5f016508e0f6a34fe089e993814fe3cc835fd60a\"" Sep 13 00:16:46.772918 containerd[1444]: time="2025-09-13T00:16:46.772899356Z" level=info msg="Ensure that sandbox 5f98a97fe507fc5b0a47df8b5f016508e0f6a34fe089e993814fe3cc835fd60a in task-service has been cleanup successfully" Sep 13 00:16:46.780099 kubelet[2490]: I0913 00:16:46.780015 2490 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61578558f0fd523090e1f7292c78ab7dcd7551b3847bedd499846f518ca7ef01" Sep 13 00:16:46.780918 containerd[1444]: time="2025-09-13T00:16:46.780720569Z" level=info msg="StopPodSandbox for \"61578558f0fd523090e1f7292c78ab7dcd7551b3847bedd499846f518ca7ef01\"" Sep 13 00:16:46.781075 containerd[1444]: time="2025-09-13T00:16:46.781036210Z" level=info msg="Ensure that sandbox 61578558f0fd523090e1f7292c78ab7dcd7551b3847bedd499846f518ca7ef01 in task-service has been cleanup successfully" Sep 13 00:16:46.782224 kubelet[2490]: I0913 00:16:46.782187 2490 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1d75c534df5d5f68fb52f69ac65e56c2c79024319ca0e563bfbaa673fc45661" Sep 13 00:16:46.785513 containerd[1444]: time="2025-09-13T00:16:46.785167057Z" level=info msg="StopPodSandbox for \"d1d75c534df5d5f68fb52f69ac65e56c2c79024319ca0e563bfbaa673fc45661\"" Sep 13 00:16:46.785721 containerd[1444]: time="2025-09-13T00:16:46.785671578Z" level=info msg="Ensure that sandbox d1d75c534df5d5f68fb52f69ac65e56c2c79024319ca0e563bfbaa673fc45661 in task-service has been cleanup successfully" Sep 13 00:16:46.809385 containerd[1444]: time="2025-09-13T00:16:46.809296457Z" level=error msg="StopPodSandbox for \"72bb7d3d6bce7112110c73bc80c881c9ae46c5d3e9b2c37640418d330431e6a6\" failed" error="failed to destroy network for sandbox \"72bb7d3d6bce7112110c73bc80c881c9ae46c5d3e9b2c37640418d330431e6a6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:16:46.809585 kubelet[2490]: E0913 00:16:46.809535 2490 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"72bb7d3d6bce7112110c73bc80c881c9ae46c5d3e9b2c37640418d330431e6a6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="72bb7d3d6bce7112110c73bc80c881c9ae46c5d3e9b2c37640418d330431e6a6" Sep 13 00:16:46.809704 kubelet[2490]: E0913 00:16:46.809618 2490 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"72bb7d3d6bce7112110c73bc80c881c9ae46c5d3e9b2c37640418d330431e6a6"} Sep 13 00:16:46.809704 kubelet[2490]: E0913 00:16:46.809653 2490 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0b49a5c7-d8ed-4263-b267-b04f7372f88c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"72bb7d3d6bce7112110c73bc80c881c9ae46c5d3e9b2c37640418d330431e6a6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:16:46.809704 kubelet[2490]: E0913 00:16:46.809679 2490 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0b49a5c7-d8ed-4263-b267-b04f7372f88c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"72bb7d3d6bce7112110c73bc80c881c9ae46c5d3e9b2c37640418d330431e6a6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fs48x" podUID="0b49a5c7-d8ed-4263-b267-b04f7372f88c" Sep 13 00:16:46.827981 containerd[1444]: time="2025-09-13T00:16:46.827916968Z" level=error msg="StopPodSandbox for \"5f98a97fe507fc5b0a47df8b5f016508e0f6a34fe089e993814fe3cc835fd60a\" failed" error="failed to destroy network for sandbox \"5f98a97fe507fc5b0a47df8b5f016508e0f6a34fe089e993814fe3cc835fd60a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:16:46.828262 kubelet[2490]: E0913 00:16:46.828224 2490 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5f98a97fe507fc5b0a47df8b5f016508e0f6a34fe089e993814fe3cc835fd60a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5f98a97fe507fc5b0a47df8b5f016508e0f6a34fe089e993814fe3cc835fd60a" Sep 13 00:16:46.828340 kubelet[2490]: E0913 00:16:46.828279 2490 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5f98a97fe507fc5b0a47df8b5f016508e0f6a34fe089e993814fe3cc835fd60a"} Sep 13 00:16:46.828340 kubelet[2490]: E0913 00:16:46.828310 2490 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"91ec7ae0-fffa-447c-b970-cf0f2591c90d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5f98a97fe507fc5b0a47df8b5f016508e0f6a34fe089e993814fe3cc835fd60a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" Sep 13 00:16:46.828421 kubelet[2490]: E0913 00:16:46.828332 2490 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"91ec7ae0-fffa-447c-b970-cf0f2591c90d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5f98a97fe507fc5b0a47df8b5f016508e0f6a34fe089e993814fe3cc835fd60a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-859b9fb76c-hnqvt" podUID="91ec7ae0-fffa-447c-b970-cf0f2591c90d" Sep 13 00:16:46.831661 containerd[1444]: time="2025-09-13T00:16:46.831608254Z" level=error msg="StopPodSandbox for \"61578558f0fd523090e1f7292c78ab7dcd7551b3847bedd499846f518ca7ef01\" failed" error="failed to destroy network for sandbox \"61578558f0fd523090e1f7292c78ab7dcd7551b3847bedd499846f518ca7ef01\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:16:46.831868 kubelet[2490]: E0913 00:16:46.831833 2490 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"61578558f0fd523090e1f7292c78ab7dcd7551b3847bedd499846f518ca7ef01\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="61578558f0fd523090e1f7292c78ab7dcd7551b3847bedd499846f518ca7ef01" Sep 13 00:16:46.831916 kubelet[2490]: E0913 00:16:46.831882 2490 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"61578558f0fd523090e1f7292c78ab7dcd7551b3847bedd499846f518ca7ef01"} Sep 13 00:16:46.831939 kubelet[2490]: E0913 00:16:46.831912 2490 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f7583c8d-0290-45df-bcda-9e97c4955b03\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"61578558f0fd523090e1f7292c78ab7dcd7551b3847bedd499846f518ca7ef01\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:16:46.831990 kubelet[2490]: E0913 00:16:46.831933 2490 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f7583c8d-0290-45df-bcda-9e97c4955b03\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"61578558f0fd523090e1f7292c78ab7dcd7551b3847bedd499846f518ca7ef01\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-85c5c448c9-pnhk4" podUID="f7583c8d-0290-45df-bcda-9e97c4955b03" Sep 13 00:16:46.834231 containerd[1444]: time="2025-09-13T00:16:46.834177498Z" level=error msg="StopPodSandbox for \"d1d75c534df5d5f68fb52f69ac65e56c2c79024319ca0e563bfbaa673fc45661\" failed" error="failed to destroy network for sandbox \"d1d75c534df5d5f68fb52f69ac65e56c2c79024319ca0e563bfbaa673fc45661\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:16:46.834977 kubelet[2490]: E0913 00:16:46.834934 2490 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d1d75c534df5d5f68fb52f69ac65e56c2c79024319ca0e563bfbaa673fc45661\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d1d75c534df5d5f68fb52f69ac65e56c2c79024319ca0e563bfbaa673fc45661" Sep 13 00:16:46.835048 kubelet[2490]: E0913 00:16:46.834988 2490 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d1d75c534df5d5f68fb52f69ac65e56c2c79024319ca0e563bfbaa673fc45661"} Sep 13 00:16:46.835048 kubelet[2490]: E0913 00:16:46.835016 2490 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c78523f1-1b2e-44f8-9fd4-6f6c075a99ad\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d1d75c534df5d5f68fb52f69ac65e56c2c79024319ca0e563bfbaa673fc45661\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:16:46.835048 kubelet[2490]: E0913 00:16:46.835039 2490 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c78523f1-1b2e-44f8-9fd4-6f6c075a99ad\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d1d75c534df5d5f68fb52f69ac65e56c2c79024319ca0e563bfbaa673fc45661\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-22r4m" podUID="c78523f1-1b2e-44f8-9fd4-6f6c075a99ad" Sep 13 00:16:48.006272 kubelet[2490]: I0913 00:16:48.005457 2490 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:16:48.006272 kubelet[2490]: E0913 00:16:48.005800 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:48.785870 kubelet[2490]: E0913 00:16:48.785776 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:49.170516 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2022422352.mount: Deactivated successfully. 
Sep 13 00:16:49.512615 containerd[1444]: time="2025-09-13T00:16:49.512540894Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:49.513448 containerd[1444]: time="2025-09-13T00:16:49.513232975Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Sep 13 00:16:49.515321 containerd[1444]: time="2025-09-13T00:16:49.515271058Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:49.518882 containerd[1444]: time="2025-09-13T00:16:49.518846343Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:49.519875 containerd[1444]: time="2025-09-13T00:16:49.519643584Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 3.770231156s" Sep 13 00:16:49.519875 containerd[1444]: time="2025-09-13T00:16:49.519672504Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 13 00:16:49.536163 containerd[1444]: time="2025-09-13T00:16:49.536113247Z" level=info msg="CreateContainer within sandbox \"2ae7fb87ba92d8e1da53515fa498c6679e1ef6468bde802b7738079a0a9c5e63\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 13 00:16:49.578893 containerd[1444]: time="2025-09-13T00:16:49.578841985Z" level=info msg="CreateContainer within sandbox \"2ae7fb87ba92d8e1da53515fa498c6679e1ef6468bde802b7738079a0a9c5e63\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"3edd6fbc7b2544de779e017308a33e603618055c21039e0d0ed0d47323d2c130\"" Sep 13 00:16:49.579590 containerd[1444]: time="2025-09-13T00:16:49.579433786Z" level=info msg="StartContainer for \"3edd6fbc7b2544de779e017308a33e603618055c21039e0d0ed0d47323d2c130\"" Sep 13 00:16:49.638768 systemd[1]: Started cri-containerd-3edd6fbc7b2544de779e017308a33e603618055c21039e0d0ed0d47323d2c130.scope - libcontainer container 3edd6fbc7b2544de779e017308a33e603618055c21039e0d0ed0d47323d2c130. Sep 13 00:16:49.669806 containerd[1444]: time="2025-09-13T00:16:49.669753589Z" level=info msg="StartContainer for \"3edd6fbc7b2544de779e017308a33e603618055c21039e0d0ed0d47323d2c130\" returns successfully" Sep 13 00:16:49.790197 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 13 00:16:49.790318 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Sep 13 00:16:49.894111 kubelet[2490]: I0913 00:16:49.894030 2490 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-pfkkf" podStartSLOduration=1.714477923 podStartE2EDuration="12.894013217s" podCreationTimestamp="2025-09-13 00:16:37 +0000 UTC" firstStartedPulling="2025-09-13 00:16:38.341330492 +0000 UTC m=+20.759037980" lastFinishedPulling="2025-09-13 00:16:49.520865746 +0000 UTC m=+31.938573274" observedRunningTime="2025-09-13 00:16:49.807652578 +0000 UTC m=+32.225360106" watchObservedRunningTime="2025-09-13 00:16:49.894013217 +0000 UTC m=+32.311720745" Sep 13 00:16:49.896905 containerd[1444]: time="2025-09-13T00:16:49.896864860Z" level=info msg="StopPodSandbox for \"61578558f0fd523090e1f7292c78ab7dcd7551b3847bedd499846f518ca7ef01\"" Sep 13 00:16:50.120383 containerd[1444]: 2025-09-13 00:16:50.008 [INFO][3813] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="61578558f0fd523090e1f7292c78ab7dcd7551b3847bedd499846f518ca7ef01" Sep 13 00:16:50.120383 containerd[1444]: 2025-09-13 00:16:50.009 [INFO][3813] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="61578558f0fd523090e1f7292c78ab7dcd7551b3847bedd499846f518ca7ef01" iface="eth0" netns="/var/run/netns/cni-af763055-4aa1-c144-8c5e-0d65b09a19fe" Sep 13 00:16:50.120383 containerd[1444]: 2025-09-13 00:16:50.010 [INFO][3813] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="61578558f0fd523090e1f7292c78ab7dcd7551b3847bedd499846f518ca7ef01" iface="eth0" netns="/var/run/netns/cni-af763055-4aa1-c144-8c5e-0d65b09a19fe" Sep 13 00:16:50.120383 containerd[1444]: 2025-09-13 00:16:50.010 [INFO][3813] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="61578558f0fd523090e1f7292c78ab7dcd7551b3847bedd499846f518ca7ef01" iface="eth0" netns="/var/run/netns/cni-af763055-4aa1-c144-8c5e-0d65b09a19fe" Sep 13 00:16:50.120383 containerd[1444]: 2025-09-13 00:16:50.011 [INFO][3813] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="61578558f0fd523090e1f7292c78ab7dcd7551b3847bedd499846f518ca7ef01" Sep 13 00:16:50.120383 containerd[1444]: 2025-09-13 00:16:50.011 [INFO][3813] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="61578558f0fd523090e1f7292c78ab7dcd7551b3847bedd499846f518ca7ef01" Sep 13 00:16:50.120383 containerd[1444]: 2025-09-13 00:16:50.097 [INFO][3832] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="61578558f0fd523090e1f7292c78ab7dcd7551b3847bedd499846f518ca7ef01" HandleID="k8s-pod-network.61578558f0fd523090e1f7292c78ab7dcd7551b3847bedd499846f518ca7ef01" Workload="localhost-k8s-whisker--85c5c448c9--pnhk4-eth0" Sep 13 00:16:50.120383 containerd[1444]: 2025-09-13 00:16:50.097 [INFO][3832] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:16:50.120383 containerd[1444]: 2025-09-13 00:16:50.097 [INFO][3832] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:16:50.120383 containerd[1444]: 2025-09-13 00:16:50.110 [WARNING][3832] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="61578558f0fd523090e1f7292c78ab7dcd7551b3847bedd499846f518ca7ef01" HandleID="k8s-pod-network.61578558f0fd523090e1f7292c78ab7dcd7551b3847bedd499846f518ca7ef01" Workload="localhost-k8s-whisker--85c5c448c9--pnhk4-eth0" Sep 13 00:16:50.120383 containerd[1444]: 2025-09-13 00:16:50.110 [INFO][3832] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="61578558f0fd523090e1f7292c78ab7dcd7551b3847bedd499846f518ca7ef01" HandleID="k8s-pod-network.61578558f0fd523090e1f7292c78ab7dcd7551b3847bedd499846f518ca7ef01" Workload="localhost-k8s-whisker--85c5c448c9--pnhk4-eth0" Sep 13 00:16:50.120383 containerd[1444]: 2025-09-13 00:16:50.114 [INFO][3832] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:16:50.120383 containerd[1444]: 2025-09-13 00:16:50.117 [INFO][3813] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="61578558f0fd523090e1f7292c78ab7dcd7551b3847bedd499846f518ca7ef01" Sep 13 00:16:50.121203 containerd[1444]: time="2025-09-13T00:16:50.121151157Z" level=info msg="TearDown network for sandbox \"61578558f0fd523090e1f7292c78ab7dcd7551b3847bedd499846f518ca7ef01\" successfully" Sep 13 00:16:50.121247 containerd[1444]: time="2025-09-13T00:16:50.121205557Z" level=info msg="StopPodSandbox for \"61578558f0fd523090e1f7292c78ab7dcd7551b3847bedd499846f518ca7ef01\" returns successfully" Sep 13 00:16:50.170318 systemd[1]: run-netns-cni\x2daf763055\x2d4aa1\x2dc144\x2d8c5e\x2d0d65b09a19fe.mount: Deactivated successfully. Sep 13 00:16:50.192610 kubelet[2490]: I0913 00:16:50.192554 2490 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f7583c8d-0290-45df-bcda-9e97c4955b03-whisker-ca-bundle\") pod \"f7583c8d-0290-45df-bcda-9e97c4955b03\" (UID: \"f7583c8d-0290-45df-bcda-9e97c4955b03\") " Sep 13 00:16:50.192728 kubelet[2490]: I0913 00:16:50.192623 2490 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jvpt\" (UniqueName: \"kubernetes.io/projected/f7583c8d-0290-45df-bcda-9e97c4955b03-kube-api-access-4jvpt\") pod \"f7583c8d-0290-45df-bcda-9e97c4955b03\" (UID: \"f7583c8d-0290-45df-bcda-9e97c4955b03\") " Sep 13 00:16:50.192728 kubelet[2490]: I0913 00:16:50.192672 2490 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f7583c8d-0290-45df-bcda-9e97c4955b03-whisker-backend-key-pair\") pod \"f7583c8d-0290-45df-bcda-9e97c4955b03\" (UID: \"f7583c8d-0290-45df-bcda-9e97c4955b03\") " Sep 13 00:16:50.202352 systemd[1]: var-lib-kubelet-pods-f7583c8d\x2d0290\x2d45df\x2dbcda\x2d9e97c4955b03-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4jvpt.mount: Deactivated successfully. Sep 13 00:16:50.205386 systemd[1]: var-lib-kubelet-pods-f7583c8d\x2d0290\x2d45df\x2dbcda\x2d9e97c4955b03-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 13 00:16:50.205906 kubelet[2490]: I0913 00:16:50.205846 2490 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7583c8d-0290-45df-bcda-9e97c4955b03-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "f7583c8d-0290-45df-bcda-9e97c4955b03" (UID: "f7583c8d-0290-45df-bcda-9e97c4955b03"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 13 00:16:50.206182 kubelet[2490]: I0913 00:16:50.206162 2490 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7583c8d-0290-45df-bcda-9e97c4955b03-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "f7583c8d-0290-45df-bcda-9e97c4955b03" (UID: "f7583c8d-0290-45df-bcda-9e97c4955b03"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 13 00:16:50.206334 kubelet[2490]: I0913 00:16:50.206313 2490 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7583c8d-0290-45df-bcda-9e97c4955b03-kube-api-access-4jvpt" (OuterVolumeSpecName: "kube-api-access-4jvpt") pod "f7583c8d-0290-45df-bcda-9e97c4955b03" (UID: "f7583c8d-0290-45df-bcda-9e97c4955b03"). InnerVolumeSpecName "kube-api-access-4jvpt". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 00:16:50.293314 kubelet[2490]: I0913 00:16:50.293271 2490 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f7583c8d-0290-45df-bcda-9e97c4955b03-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 13 00:16:50.293510 kubelet[2490]: I0913 00:16:50.293478 2490 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f7583c8d-0290-45df-bcda-9e97c4955b03-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 13 00:16:50.293510 kubelet[2490]: I0913 00:16:50.293494 2490 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4jvpt\" (UniqueName: \"kubernetes.io/projected/f7583c8d-0290-45df-bcda-9e97c4955b03-kube-api-access-4jvpt\") on node \"localhost\" DevicePath \"\"" Sep 13 00:16:50.803796 systemd[1]: Removed slice kubepods-besteffort-podf7583c8d_0290_45df_bcda_9e97c4955b03.slice - libcontainer container kubepods-besteffort-podf7583c8d_0290_45df_bcda_9e97c4955b03.slice. Sep 13 00:16:50.902705 systemd[1]: Created slice kubepods-besteffort-pod9a6bfe46_54a2_48fa_9a31_cf601a009c8d.slice - libcontainer container kubepods-besteffort-pod9a6bfe46_54a2_48fa_9a31_cf601a009c8d.slice. 
Sep 13 00:16:51.007832 kubelet[2490]: I0913 00:16:51.007773 2490 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9a6bfe46-54a2-48fa-9a31-cf601a009c8d-whisker-backend-key-pair\") pod \"whisker-6677f9985d-bh4gx\" (UID: \"9a6bfe46-54a2-48fa-9a31-cf601a009c8d\") " pod="calico-system/whisker-6677f9985d-bh4gx" Sep 13 00:16:51.008358 kubelet[2490]: I0913 00:16:51.008271 2490 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmng7\" (UniqueName: \"kubernetes.io/projected/9a6bfe46-54a2-48fa-9a31-cf601a009c8d-kube-api-access-tmng7\") pod \"whisker-6677f9985d-bh4gx\" (UID: \"9a6bfe46-54a2-48fa-9a31-cf601a009c8d\") " pod="calico-system/whisker-6677f9985d-bh4gx" Sep 13 00:16:51.008358 kubelet[2490]: I0913 00:16:51.008317 2490 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a6bfe46-54a2-48fa-9a31-cf601a009c8d-whisker-ca-bundle\") pod \"whisker-6677f9985d-bh4gx\" (UID: \"9a6bfe46-54a2-48fa-9a31-cf601a009c8d\") " pod="calico-system/whisker-6677f9985d-bh4gx" Sep 13 00:16:51.208781 containerd[1444]: time="2025-09-13T00:16:51.208737617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6677f9985d-bh4gx,Uid:9a6bfe46-54a2-48fa-9a31-cf601a009c8d,Namespace:calico-system,Attempt:0,}" Sep 13 00:16:51.372747 systemd-networkd[1380]: cali0214db8c70f: Link UP Sep 13 00:16:51.375932 systemd-networkd[1380]: cali0214db8c70f: Gained carrier Sep 13 00:16:51.400990 containerd[1444]: 2025-09-13 00:16:51.266 [INFO][3931] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:16:51.400990 containerd[1444]: 2025-09-13 00:16:51.287 [INFO][3931] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--6677f9985d--bh4gx-eth0 whisker-6677f9985d- calico-system 9a6bfe46-54a2-48fa-9a31-cf601a009c8d 957 0 2025-09-13 00:16:50 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6677f9985d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-6677f9985d-bh4gx eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali0214db8c70f [] [] }} ContainerID="5fd7f3fc28b90e52e2fdb6422dc9034ef7fd7e17b479e50ca397e64c6002a436" Namespace="calico-system" Pod="whisker-6677f9985d-bh4gx" WorkloadEndpoint="localhost-k8s-whisker--6677f9985d--bh4gx-" Sep 13 00:16:51.400990 containerd[1444]: 2025-09-13 00:16:51.287 [INFO][3931] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5fd7f3fc28b90e52e2fdb6422dc9034ef7fd7e17b479e50ca397e64c6002a436" Namespace="calico-system" Pod="whisker-6677f9985d-bh4gx" WorkloadEndpoint="localhost-k8s-whisker--6677f9985d--bh4gx-eth0" Sep 13 00:16:51.400990 containerd[1444]: 2025-09-13 00:16:51.315 [INFO][3987] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5fd7f3fc28b90e52e2fdb6422dc9034ef7fd7e17b479e50ca397e64c6002a436" HandleID="k8s-pod-network.5fd7f3fc28b90e52e2fdb6422dc9034ef7fd7e17b479e50ca397e64c6002a436" Workload="localhost-k8s-whisker--6677f9985d--bh4gx-eth0" Sep 13 00:16:51.400990 containerd[1444]: 2025-09-13 00:16:51.316 [INFO][3987] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="5fd7f3fc28b90e52e2fdb6422dc9034ef7fd7e17b479e50ca397e64c6002a436" HandleID="k8s-pod-network.5fd7f3fc28b90e52e2fdb6422dc9034ef7fd7e17b479e50ca397e64c6002a436" Workload="localhost-k8s-whisker--6677f9985d--bh4gx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3090), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-6677f9985d-bh4gx", "timestamp":"2025-09-13 00:16:51.315901466 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:16:51.400990 containerd[1444]: 2025-09-13 00:16:51.316 [INFO][3987] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:16:51.400990 containerd[1444]: 2025-09-13 00:16:51.316 [INFO][3987] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:16:51.400990 containerd[1444]: 2025-09-13 00:16:51.316 [INFO][3987] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:16:51.400990 containerd[1444]: 2025-09-13 00:16:51.332 [INFO][3987] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5fd7f3fc28b90e52e2fdb6422dc9034ef7fd7e17b479e50ca397e64c6002a436" host="localhost" Sep 13 00:16:51.400990 containerd[1444]: 2025-09-13 00:16:51.338 [INFO][3987] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:16:51.400990 containerd[1444]: 2025-09-13 00:16:51.343 [INFO][3987] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:16:51.400990 containerd[1444]: 2025-09-13 00:16:51.345 [INFO][3987] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:16:51.400990 containerd[1444]: 2025-09-13 00:16:51.347 [INFO][3987] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:16:51.400990 containerd[1444]: 2025-09-13 00:16:51.347 [INFO][3987] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5fd7f3fc28b90e52e2fdb6422dc9034ef7fd7e17b479e50ca397e64c6002a436" host="localhost" Sep 13 00:16:51.400990 containerd[1444]: 2025-09-13 00:16:51.349 [INFO][3987] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5fd7f3fc28b90e52e2fdb6422dc9034ef7fd7e17b479e50ca397e64c6002a436 Sep 13 00:16:51.400990 containerd[1444]: 2025-09-13 00:16:51.352 [INFO][3987] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5fd7f3fc28b90e52e2fdb6422dc9034ef7fd7e17b479e50ca397e64c6002a436" host="localhost" Sep 13 00:16:51.400990 containerd[1444]: 2025-09-13 00:16:51.357 [INFO][3987] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.5fd7f3fc28b90e52e2fdb6422dc9034ef7fd7e17b479e50ca397e64c6002a436" host="localhost" Sep 13 00:16:51.400990 containerd[1444]: 2025-09-13 00:16:51.357 [INFO][3987] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.5fd7f3fc28b90e52e2fdb6422dc9034ef7fd7e17b479e50ca397e64c6002a436" host="localhost" Sep 13 00:16:51.400990 containerd[1444]: 2025-09-13 00:16:51.357 [INFO][3987] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:16:51.400990 containerd[1444]: 2025-09-13 00:16:51.357 [INFO][3987] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="5fd7f3fc28b90e52e2fdb6422dc9034ef7fd7e17b479e50ca397e64c6002a436" HandleID="k8s-pod-network.5fd7f3fc28b90e52e2fdb6422dc9034ef7fd7e17b479e50ca397e64c6002a436" Workload="localhost-k8s-whisker--6677f9985d--bh4gx-eth0" Sep 13 00:16:51.401605 containerd[1444]: 2025-09-13 00:16:51.361 [INFO][3931] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5fd7f3fc28b90e52e2fdb6422dc9034ef7fd7e17b479e50ca397e64c6002a436" Namespace="calico-system" Pod="whisker-6677f9985d-bh4gx" WorkloadEndpoint="localhost-k8s-whisker--6677f9985d--bh4gx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6677f9985d--bh4gx-eth0", GenerateName:"whisker-6677f9985d-", Namespace:"calico-system", SelfLink:"", UID:"9a6bfe46-54a2-48fa-9a31-cf601a009c8d", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 16, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6677f9985d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-6677f9985d-bh4gx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0214db8c70f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:16:51.401605 containerd[1444]: 2025-09-13 00:16:51.361 [INFO][3931] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="5fd7f3fc28b90e52e2fdb6422dc9034ef7fd7e17b479e50ca397e64c6002a436" Namespace="calico-system" Pod="whisker-6677f9985d-bh4gx" WorkloadEndpoint="localhost-k8s-whisker--6677f9985d--bh4gx-eth0" Sep 13 00:16:51.401605 containerd[1444]: 2025-09-13 00:16:51.361 [INFO][3931] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0214db8c70f ContainerID="5fd7f3fc28b90e52e2fdb6422dc9034ef7fd7e17b479e50ca397e64c6002a436" Namespace="calico-system" Pod="whisker-6677f9985d-bh4gx" WorkloadEndpoint="localhost-k8s-whisker--6677f9985d--bh4gx-eth0" Sep 13 00:16:51.401605 containerd[1444]: 2025-09-13 00:16:51.377 [INFO][3931] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5fd7f3fc28b90e52e2fdb6422dc9034ef7fd7e17b479e50ca397e64c6002a436" Namespace="calico-system" Pod="whisker-6677f9985d-bh4gx" WorkloadEndpoint="localhost-k8s-whisker--6677f9985d--bh4gx-eth0" Sep 13 00:16:51.401605 containerd[1444]: 2025-09-13 00:16:51.379 [INFO][3931] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5fd7f3fc28b90e52e2fdb6422dc9034ef7fd7e17b479e50ca397e64c6002a436" Namespace="calico-system" Pod="whisker-6677f9985d-bh4gx" WorkloadEndpoint="localhost-k8s-whisker--6677f9985d--bh4gx-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6677f9985d--bh4gx-eth0", GenerateName:"whisker-6677f9985d-", Namespace:"calico-system", SelfLink:"", UID:"9a6bfe46-54a2-48fa-9a31-cf601a009c8d", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 16, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6677f9985d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5fd7f3fc28b90e52e2fdb6422dc9034ef7fd7e17b479e50ca397e64c6002a436", Pod:"whisker-6677f9985d-bh4gx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0214db8c70f", MAC:"16:9f:3a:27:58:2a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:16:51.401605 containerd[1444]: 2025-09-13 00:16:51.396 [INFO][3931] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5fd7f3fc28b90e52e2fdb6422dc9034ef7fd7e17b479e50ca397e64c6002a436" Namespace="calico-system" Pod="whisker-6677f9985d-bh4gx" WorkloadEndpoint="localhost-k8s-whisker--6677f9985d--bh4gx-eth0" Sep 13 00:16:51.431193 containerd[1444]: time="2025-09-13T00:16:51.431099244Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:16:51.431193 containerd[1444]: time="2025-09-13T00:16:51.431157884Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:16:51.431193 containerd[1444]: time="2025-09-13T00:16:51.431173444Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:16:51.431454 containerd[1444]: time="2025-09-13T00:16:51.431251885Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:16:51.457755 systemd[1]: Started cri-containerd-5fd7f3fc28b90e52e2fdb6422dc9034ef7fd7e17b479e50ca397e64c6002a436.scope - libcontainer container 5fd7f3fc28b90e52e2fdb6422dc9034ef7fd7e17b479e50ca397e64c6002a436. 
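The ipam/ipam.go sequence above is Calico's per-host block allocation: the node already holds an affinity for the /26 block 192.168.88.128/26, so the plugin takes the host-wide IPAM lock, confirms and loads the block, and claims the next free address (here 192.168.88.129) under a handle derived from the sandbox ID. A compressed sketch of that allocation pattern, assuming a simple in-memory block rather than Calico's datastore:

    package main

    import (
        "fmt"
        "net/netip"
        "sync"
    )

    type block struct {
        mu   sync.Mutex // stands in for the "host-wide IPAM lock" in the log
        base netip.Addr
        size int
        used map[netip.Addr]string // addr -> handle, e.g. k8s-pod-network.<sandboxID>
    }

    func (b *block) assign(handle string) (netip.Addr, error) {
        b.mu.Lock()
        defer b.mu.Unlock()
        a := b.base.Next() // skip the network address itself
        for i := 1; i < b.size; i++ {
            if _, taken := b.used[a]; !taken {
                b.used[a] = handle
                return a, nil
            }
            a = a.Next()
        }
        return netip.Addr{}, fmt.Errorf("block %s/26 exhausted", b.base)
    }

    func main() {
        b := &block{base: netip.MustParseAddr("192.168.88.128"), size: 64, used: map[netip.Addr]string{}}
        for _, pod := range []string{"whisker-6677f9985d-bh4gx", "csi-node-driver-fs48x",
            "goldmane-54d579b49d-22r4m", "coredns-674b8bbfcf-sczp2"} {
            ip, _ := b.assign("k8s-pod-network." + pod)
            fmt.Println(pod, "->", ip)
        }
    }

Consistent with this, the claims in the rest of this log come out sequentially: .129 (whisker), then .130, .131, and .132 for the pods sandboxed later.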
Sep 13 00:16:51.472624 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:16:51.495784 containerd[1444]: time="2025-09-13T00:16:51.495740722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6677f9985d-bh4gx,Uid:9a6bfe46-54a2-48fa-9a31-cf601a009c8d,Namespace:calico-system,Attempt:0,} returns sandbox id \"5fd7f3fc28b90e52e2fdb6422dc9034ef7fd7e17b479e50ca397e64c6002a436\"" Sep 13 00:16:51.499560 containerd[1444]: time="2025-09-13T00:16:51.499531287Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 13 00:16:51.510628 kernel: bpftool[4078]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 13 00:16:51.668281 systemd-networkd[1380]: vxlan.calico: Link UP Sep 13 00:16:51.668288 systemd-networkd[1380]: vxlan.calico: Gained carrier Sep 13 00:16:51.671734 kubelet[2490]: I0913 00:16:51.671691 2490 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7583c8d-0290-45df-bcda-9e97c4955b03" path="/var/lib/kubelet/pods/f7583c8d-0290-45df-bcda-9e97c4955b03/volumes" Sep 13 00:16:52.527148 containerd[1444]: time="2025-09-13T00:16:52.527065284Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:52.528188 containerd[1444]: time="2025-09-13T00:16:52.528154525Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606" Sep 13 00:16:52.529879 containerd[1444]: time="2025-09-13T00:16:52.529585127Z" level=info msg="ImageCreate event name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:52.537424 containerd[1444]: time="2025-09-13T00:16:52.537386735Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:52.538292 containerd[1444]: time="2025-09-13T00:16:52.538262216Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 1.038692369s" Sep 13 00:16:52.538349 containerd[1444]: time="2025-09-13T00:16:52.538300336Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Sep 13 00:16:52.543208 containerd[1444]: time="2025-09-13T00:16:52.543047622Z" level=info msg="CreateContainer within sandbox \"5fd7f3fc28b90e52e2fdb6422dc9034ef7fd7e17b479e50ca397e64c6002a436\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 13 00:16:52.561403 containerd[1444]: time="2025-09-13T00:16:52.561356162Z" level=info msg="CreateContainer within sandbox \"5fd7f3fc28b90e52e2fdb6422dc9034ef7fd7e17b479e50ca397e64c6002a436\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"51fbf90da57010cd14ca1b4279fcd7562472e8755f748efc4dc037fe9264e6bb\"" Sep 13 00:16:52.561949 containerd[1444]: time="2025-09-13T00:16:52.561852203Z" level=info msg="StartContainer for \"51fbf90da57010cd14ca1b4279fcd7562472e8755f748efc4dc037fe9264e6bb\"" Sep 13 
00:16:52.602800 systemd[1]: Started cri-containerd-51fbf90da57010cd14ca1b4279fcd7562472e8755f748efc4dc037fe9264e6bb.scope - libcontainer container 51fbf90da57010cd14ca1b4279fcd7562472e8755f748efc4dc037fe9264e6bb. Sep 13 00:16:52.638722 containerd[1444]: time="2025-09-13T00:16:52.638542770Z" level=info msg="StartContainer for \"51fbf90da57010cd14ca1b4279fcd7562472e8755f748efc4dc037fe9264e6bb\" returns successfully" Sep 13 00:16:52.640332 containerd[1444]: time="2025-09-13T00:16:52.640300972Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 13 00:16:52.711728 systemd-networkd[1380]: vxlan.calico: Gained IPv6LL Sep 13 00:16:52.969296 systemd-networkd[1380]: cali0214db8c70f: Gained IPv6LL Sep 13 00:16:53.918395 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1227188177.mount: Deactivated successfully. Sep 13 00:16:53.941256 containerd[1444]: time="2025-09-13T00:16:53.941202493Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:53.942271 containerd[1444]: time="2025-09-13T00:16:53.942183974Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700" Sep 13 00:16:53.943293 containerd[1444]: time="2025-09-13T00:16:53.943267335Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:53.946718 containerd[1444]: time="2025-09-13T00:16:53.946515539Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:53.947546 containerd[1444]: time="2025-09-13T00:16:53.947519140Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 1.307179768s" Sep 13 00:16:53.947712 containerd[1444]: time="2025-09-13T00:16:53.947554220Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Sep 13 00:16:53.955999 containerd[1444]: time="2025-09-13T00:16:53.955877268Z" level=info msg="CreateContainer within sandbox \"5fd7f3fc28b90e52e2fdb6422dc9034ef7fd7e17b479e50ca397e64c6002a436\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 13 00:16:53.965011 containerd[1444]: time="2025-09-13T00:16:53.964918118Z" level=info msg="CreateContainer within sandbox \"5fd7f3fc28b90e52e2fdb6422dc9034ef7fd7e17b479e50ca397e64c6002a436\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"411a1c2f998c0eb3f9c0eba956b45f004eef7db9b01f5f01db919e10671376de\"" Sep 13 00:16:53.966026 containerd[1444]: time="2025-09-13T00:16:53.965995719Z" level=info msg="StartContainer for \"411a1c2f998c0eb3f9c0eba956b45f004eef7db9b01f5f01db919e10671376de\"" Sep 13 00:16:54.002801 systemd[1]: Started cri-containerd-411a1c2f998c0eb3f9c0eba956b45f004eef7db9b01f5f01db919e10671376de.scope - libcontainer container 
411a1c2f998c0eb3f9c0eba956b45f004eef7db9b01f5f01db919e10671376de. Sep 13 00:16:54.031032 containerd[1444]: time="2025-09-13T00:16:54.030957586Z" level=info msg="StartContainer for \"411a1c2f998c0eb3f9c0eba956b45f004eef7db9b01f5f01db919e10671376de\" returns successfully" Sep 13 00:16:54.822374 kubelet[2490]: I0913 00:16:54.822282 2490 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6677f9985d-bh4gx" podStartSLOduration=2.3710969139999998 podStartE2EDuration="4.822265611s" podCreationTimestamp="2025-09-13 00:16:50 +0000 UTC" firstStartedPulling="2025-09-13 00:16:51.497570604 +0000 UTC m=+33.915278132" lastFinishedPulling="2025-09-13 00:16:53.948739301 +0000 UTC m=+36.366446829" observedRunningTime="2025-09-13 00:16:54.82175513 +0000 UTC m=+37.239462658" watchObservedRunningTime="2025-09-13 00:16:54.822265611 +0000 UTC m=+37.239973099" Sep 13 00:16:57.669783 containerd[1444]: time="2025-09-13T00:16:57.669740775Z" level=info msg="StopPodSandbox for \"72bb7d3d6bce7112110c73bc80c881c9ae46c5d3e9b2c37640418d330431e6a6\"" Sep 13 00:16:57.670492 containerd[1444]: time="2025-09-13T00:16:57.670258336Z" level=info msg="StopPodSandbox for \"d1d75c534df5d5f68fb52f69ac65e56c2c79024319ca0e563bfbaa673fc45661\"" Sep 13 00:16:57.769734 containerd[1444]: 2025-09-13 00:16:57.723 [INFO][4303] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d1d75c534df5d5f68fb52f69ac65e56c2c79024319ca0e563bfbaa673fc45661" Sep 13 00:16:57.769734 containerd[1444]: 2025-09-13 00:16:57.724 [INFO][4303] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d1d75c534df5d5f68fb52f69ac65e56c2c79024319ca0e563bfbaa673fc45661" iface="eth0" netns="/var/run/netns/cni-4fe746e6-2f1b-9020-7d87-59ce75d8405e" Sep 13 00:16:57.769734 containerd[1444]: 2025-09-13 00:16:57.725 [INFO][4303] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d1d75c534df5d5f68fb52f69ac65e56c2c79024319ca0e563bfbaa673fc45661" iface="eth0" netns="/var/run/netns/cni-4fe746e6-2f1b-9020-7d87-59ce75d8405e" Sep 13 00:16:57.769734 containerd[1444]: 2025-09-13 00:16:57.725 [INFO][4303] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d1d75c534df5d5f68fb52f69ac65e56c2c79024319ca0e563bfbaa673fc45661" iface="eth0" netns="/var/run/netns/cni-4fe746e6-2f1b-9020-7d87-59ce75d8405e" Sep 13 00:16:57.769734 containerd[1444]: 2025-09-13 00:16:57.725 [INFO][4303] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d1d75c534df5d5f68fb52f69ac65e56c2c79024319ca0e563bfbaa673fc45661" Sep 13 00:16:57.769734 containerd[1444]: 2025-09-13 00:16:57.725 [INFO][4303] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d1d75c534df5d5f68fb52f69ac65e56c2c79024319ca0e563bfbaa673fc45661" Sep 13 00:16:57.769734 containerd[1444]: 2025-09-13 00:16:57.749 [INFO][4317] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d1d75c534df5d5f68fb52f69ac65e56c2c79024319ca0e563bfbaa673fc45661" HandleID="k8s-pod-network.d1d75c534df5d5f68fb52f69ac65e56c2c79024319ca0e563bfbaa673fc45661" Workload="localhost-k8s-goldmane--54d579b49d--22r4m-eth0" Sep 13 00:16:57.769734 containerd[1444]: 2025-09-13 00:16:57.750 [INFO][4317] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:16:57.769734 containerd[1444]: 2025-09-13 00:16:57.750 [INFO][4317] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:16:57.769734 containerd[1444]: 2025-09-13 00:16:57.762 [WARNING][4317] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d1d75c534df5d5f68fb52f69ac65e56c2c79024319ca0e563bfbaa673fc45661" HandleID="k8s-pod-network.d1d75c534df5d5f68fb52f69ac65e56c2c79024319ca0e563bfbaa673fc45661" Workload="localhost-k8s-goldmane--54d579b49d--22r4m-eth0" Sep 13 00:16:57.769734 containerd[1444]: 2025-09-13 00:16:57.762 [INFO][4317] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d1d75c534df5d5f68fb52f69ac65e56c2c79024319ca0e563bfbaa673fc45661" HandleID="k8s-pod-network.d1d75c534df5d5f68fb52f69ac65e56c2c79024319ca0e563bfbaa673fc45661" Workload="localhost-k8s-goldmane--54d579b49d--22r4m-eth0" Sep 13 00:16:57.769734 containerd[1444]: 2025-09-13 00:16:57.764 [INFO][4317] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:16:57.769734 containerd[1444]: 2025-09-13 00:16:57.768 [INFO][4303] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d1d75c534df5d5f68fb52f69ac65e56c2c79024319ca0e563bfbaa673fc45661" Sep 13 00:16:57.772560 containerd[1444]: time="2025-09-13T00:16:57.769868497Z" level=info msg="TearDown network for sandbox \"d1d75c534df5d5f68fb52f69ac65e56c2c79024319ca0e563bfbaa673fc45661\" successfully" Sep 13 00:16:57.772560 containerd[1444]: time="2025-09-13T00:16:57.769894497Z" level=info msg="StopPodSandbox for \"d1d75c534df5d5f68fb52f69ac65e56c2c79024319ca0e563bfbaa673fc45661\" returns successfully" Sep 13 00:16:57.772560 containerd[1444]: time="2025-09-13T00:16:57.772159259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-22r4m,Uid:c78523f1-1b2e-44f8-9fd4-6f6c075a99ad,Namespace:calico-system,Attempt:1,}" Sep 13 00:16:57.771861 systemd[1]: run-netns-cni\x2d4fe746e6\x2d2f1b\x2d9020\x2d7d87\x2d59ce75d8405e.mount: Deactivated successfully. Sep 13 00:16:57.782086 containerd[1444]: 2025-09-13 00:16:57.721 [INFO][4298] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="72bb7d3d6bce7112110c73bc80c881c9ae46c5d3e9b2c37640418d330431e6a6" Sep 13 00:16:57.782086 containerd[1444]: 2025-09-13 00:16:57.721 [INFO][4298] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="72bb7d3d6bce7112110c73bc80c881c9ae46c5d3e9b2c37640418d330431e6a6" iface="eth0" netns="/var/run/netns/cni-c935f22c-df37-884c-e441-0d47c5f96760" Sep 13 00:16:57.782086 containerd[1444]: 2025-09-13 00:16:57.721 [INFO][4298] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="72bb7d3d6bce7112110c73bc80c881c9ae46c5d3e9b2c37640418d330431e6a6" iface="eth0" netns="/var/run/netns/cni-c935f22c-df37-884c-e441-0d47c5f96760" Sep 13 00:16:57.782086 containerd[1444]: 2025-09-13 00:16:57.722 [INFO][4298] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="72bb7d3d6bce7112110c73bc80c881c9ae46c5d3e9b2c37640418d330431e6a6" iface="eth0" netns="/var/run/netns/cni-c935f22c-df37-884c-e441-0d47c5f96760" Sep 13 00:16:57.782086 containerd[1444]: 2025-09-13 00:16:57.722 [INFO][4298] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="72bb7d3d6bce7112110c73bc80c881c9ae46c5d3e9b2c37640418d330431e6a6" Sep 13 00:16:57.782086 containerd[1444]: 2025-09-13 00:16:57.722 [INFO][4298] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="72bb7d3d6bce7112110c73bc80c881c9ae46c5d3e9b2c37640418d330431e6a6" Sep 13 00:16:57.782086 containerd[1444]: 2025-09-13 00:16:57.750 [INFO][4316] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="72bb7d3d6bce7112110c73bc80c881c9ae46c5d3e9b2c37640418d330431e6a6" HandleID="k8s-pod-network.72bb7d3d6bce7112110c73bc80c881c9ae46c5d3e9b2c37640418d330431e6a6" Workload="localhost-k8s-csi--node--driver--fs48x-eth0" Sep 13 00:16:57.782086 containerd[1444]: 2025-09-13 00:16:57.750 [INFO][4316] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:16:57.782086 containerd[1444]: 2025-09-13 00:16:57.764 [INFO][4316] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:16:57.782086 containerd[1444]: 2025-09-13 00:16:57.776 [WARNING][4316] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="72bb7d3d6bce7112110c73bc80c881c9ae46c5d3e9b2c37640418d330431e6a6" HandleID="k8s-pod-network.72bb7d3d6bce7112110c73bc80c881c9ae46c5d3e9b2c37640418d330431e6a6" Workload="localhost-k8s-csi--node--driver--fs48x-eth0" Sep 13 00:16:57.782086 containerd[1444]: 2025-09-13 00:16:57.776 [INFO][4316] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="72bb7d3d6bce7112110c73bc80c881c9ae46c5d3e9b2c37640418d330431e6a6" HandleID="k8s-pod-network.72bb7d3d6bce7112110c73bc80c881c9ae46c5d3e9b2c37640418d330431e6a6" Workload="localhost-k8s-csi--node--driver--fs48x-eth0" Sep 13 00:16:57.782086 containerd[1444]: 2025-09-13 00:16:57.777 [INFO][4316] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:16:57.782086 containerd[1444]: 2025-09-13 00:16:57.780 [INFO][4298] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="72bb7d3d6bce7112110c73bc80c881c9ae46c5d3e9b2c37640418d330431e6a6" Sep 13 00:16:57.782086 containerd[1444]: time="2025-09-13T00:16:57.782037147Z" level=info msg="TearDown network for sandbox \"72bb7d3d6bce7112110c73bc80c881c9ae46c5d3e9b2c37640418d330431e6a6\" successfully" Sep 13 00:16:57.782086 containerd[1444]: time="2025-09-13T00:16:57.782081227Z" level=info msg="StopPodSandbox for \"72bb7d3d6bce7112110c73bc80c881c9ae46c5d3e9b2c37640418d330431e6a6\" returns successfully" Sep 13 00:16:57.782934 containerd[1444]: time="2025-09-13T00:16:57.782717268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fs48x,Uid:0b49a5c7-d8ed-4263-b267-b04f7372f88c,Namespace:calico-system,Attempt:1,}" Sep 13 00:16:57.785550 systemd[1]: run-netns-cni\x2dc935f22c\x2ddf37\x2d884c\x2de441\x2d0d47c5f96760.mount: Deactivated successfully. 
Sep 13 00:16:57.929135 systemd-networkd[1380]: cali9c7a8e4bb20: Link UP Sep 13 00:16:57.930029 systemd-networkd[1380]: cali9c7a8e4bb20: Gained carrier Sep 13 00:16:57.947737 containerd[1444]: 2025-09-13 00:16:57.864 [INFO][4333] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--fs48x-eth0 csi-node-driver- calico-system 0b49a5c7-d8ed-4263-b267-b04f7372f88c 995 0 2025-09-13 00:16:38 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-fs48x eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali9c7a8e4bb20 [] [] }} ContainerID="7fe730565fc50e613efc2b53124308e6db9888465473d6de76981595e78067e5" Namespace="calico-system" Pod="csi-node-driver-fs48x" WorkloadEndpoint="localhost-k8s-csi--node--driver--fs48x-" Sep 13 00:16:57.947737 containerd[1444]: 2025-09-13 00:16:57.865 [INFO][4333] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7fe730565fc50e613efc2b53124308e6db9888465473d6de76981595e78067e5" Namespace="calico-system" Pod="csi-node-driver-fs48x" WorkloadEndpoint="localhost-k8s-csi--node--driver--fs48x-eth0" Sep 13 00:16:57.947737 containerd[1444]: 2025-09-13 00:16:57.890 [INFO][4360] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7fe730565fc50e613efc2b53124308e6db9888465473d6de76981595e78067e5" HandleID="k8s-pod-network.7fe730565fc50e613efc2b53124308e6db9888465473d6de76981595e78067e5" Workload="localhost-k8s-csi--node--driver--fs48x-eth0" Sep 13 00:16:57.947737 containerd[1444]: 2025-09-13 00:16:57.890 [INFO][4360] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7fe730565fc50e613efc2b53124308e6db9888465473d6de76981595e78067e5" HandleID="k8s-pod-network.7fe730565fc50e613efc2b53124308e6db9888465473d6de76981595e78067e5" Workload="localhost-k8s-csi--node--driver--fs48x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000117460), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-fs48x", "timestamp":"2025-09-13 00:16:57.890727996 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:16:57.947737 containerd[1444]: 2025-09-13 00:16:57.890 [INFO][4360] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:16:57.947737 containerd[1444]: 2025-09-13 00:16:57.891 [INFO][4360] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:16:57.947737 containerd[1444]: 2025-09-13 00:16:57.891 [INFO][4360] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:16:57.947737 containerd[1444]: 2025-09-13 00:16:57.900 [INFO][4360] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7fe730565fc50e613efc2b53124308e6db9888465473d6de76981595e78067e5" host="localhost" Sep 13 00:16:57.947737 containerd[1444]: 2025-09-13 00:16:57.906 [INFO][4360] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:16:57.947737 containerd[1444]: 2025-09-13 00:16:57.910 [INFO][4360] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:16:57.947737 containerd[1444]: 2025-09-13 00:16:57.912 [INFO][4360] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:16:57.947737 containerd[1444]: 2025-09-13 00:16:57.914 [INFO][4360] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:16:57.947737 containerd[1444]: 2025-09-13 00:16:57.914 [INFO][4360] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7fe730565fc50e613efc2b53124308e6db9888465473d6de76981595e78067e5" host="localhost" Sep 13 00:16:57.947737 containerd[1444]: 2025-09-13 00:16:57.916 [INFO][4360] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7fe730565fc50e613efc2b53124308e6db9888465473d6de76981595e78067e5 Sep 13 00:16:57.947737 containerd[1444]: 2025-09-13 00:16:57.919 [INFO][4360] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7fe730565fc50e613efc2b53124308e6db9888465473d6de76981595e78067e5" host="localhost" Sep 13 00:16:57.947737 containerd[1444]: 2025-09-13 00:16:57.924 [INFO][4360] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.7fe730565fc50e613efc2b53124308e6db9888465473d6de76981595e78067e5" host="localhost" Sep 13 00:16:57.947737 containerd[1444]: 2025-09-13 00:16:57.924 [INFO][4360] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.7fe730565fc50e613efc2b53124308e6db9888465473d6de76981595e78067e5" host="localhost" Sep 13 00:16:57.947737 containerd[1444]: 2025-09-13 00:16:57.924 [INFO][4360] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:16:57.947737 containerd[1444]: 2025-09-13 00:16:57.924 [INFO][4360] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="7fe730565fc50e613efc2b53124308e6db9888465473d6de76981595e78067e5" HandleID="k8s-pod-network.7fe730565fc50e613efc2b53124308e6db9888465473d6de76981595e78067e5" Workload="localhost-k8s-csi--node--driver--fs48x-eth0" Sep 13 00:16:57.948259 containerd[1444]: 2025-09-13 00:16:57.926 [INFO][4333] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7fe730565fc50e613efc2b53124308e6db9888465473d6de76981595e78067e5" Namespace="calico-system" Pod="csi-node-driver-fs48x" WorkloadEndpoint="localhost-k8s-csi--node--driver--fs48x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--fs48x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0b49a5c7-d8ed-4263-b267-b04f7372f88c", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 16, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-fs48x", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9c7a8e4bb20", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:16:57.948259 containerd[1444]: 2025-09-13 00:16:57.926 [INFO][4333] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="7fe730565fc50e613efc2b53124308e6db9888465473d6de76981595e78067e5" Namespace="calico-system" Pod="csi-node-driver-fs48x" WorkloadEndpoint="localhost-k8s-csi--node--driver--fs48x-eth0" Sep 13 00:16:57.948259 containerd[1444]: 2025-09-13 00:16:57.926 [INFO][4333] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9c7a8e4bb20 ContainerID="7fe730565fc50e613efc2b53124308e6db9888465473d6de76981595e78067e5" Namespace="calico-system" Pod="csi-node-driver-fs48x" WorkloadEndpoint="localhost-k8s-csi--node--driver--fs48x-eth0" Sep 13 00:16:57.948259 containerd[1444]: 2025-09-13 00:16:57.932 [INFO][4333] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7fe730565fc50e613efc2b53124308e6db9888465473d6de76981595e78067e5" Namespace="calico-system" Pod="csi-node-driver-fs48x" WorkloadEndpoint="localhost-k8s-csi--node--driver--fs48x-eth0" Sep 13 00:16:57.948259 containerd[1444]: 2025-09-13 00:16:57.932 [INFO][4333] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7fe730565fc50e613efc2b53124308e6db9888465473d6de76981595e78067e5" Namespace="calico-system" Pod="csi-node-driver-fs48x" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--fs48x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--fs48x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0b49a5c7-d8ed-4263-b267-b04f7372f88c", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 16, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7fe730565fc50e613efc2b53124308e6db9888465473d6de76981595e78067e5", Pod:"csi-node-driver-fs48x", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9c7a8e4bb20", MAC:"92:ed:05:f1:15:d7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:16:57.948259 containerd[1444]: 2025-09-13 00:16:57.944 [INFO][4333] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7fe730565fc50e613efc2b53124308e6db9888465473d6de76981595e78067e5" Namespace="calico-system" Pod="csi-node-driver-fs48x" WorkloadEndpoint="localhost-k8s-csi--node--driver--fs48x-eth0" Sep 13 00:16:57.963792 containerd[1444]: time="2025-09-13T00:16:57.963708696Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:16:57.964174 containerd[1444]: time="2025-09-13T00:16:57.964129816Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:16:57.964174 containerd[1444]: time="2025-09-13T00:16:57.964149856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:16:57.964262 containerd[1444]: time="2025-09-13T00:16:57.964236376Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:16:57.984800 systemd[1]: Started cri-containerd-7fe730565fc50e613efc2b53124308e6db9888465473d6de76981595e78067e5.scope - libcontainer container 7fe730565fc50e613efc2b53124308e6db9888465473d6de76981595e78067e5. 
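Stepping back to the pod_startup_latency_tracker entry for whisker-6677f9985d-bh4gx above, the numbers decompose as podStartSLOduration = podStartE2EDuration - (lastFinishedPulling - firstStartedPulling): the SLO figure is end-to-end startup with image-pull time subtracted (the logged 2.3710969139999998s is 2.371096914s under float formatting). Reproducing the arithmetic from the logged timestamps, reduced to offsets within the 00:16 minute:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        e2e := 4822265611 * time.Nanosecond                   // podStartE2EDuration: observedRunningTime - podCreationTimestamp
        pull := (53948739301 - 51497570604) * time.Nanosecond // lastFinishedPulling - firstStartedPulling
        fmt.Println(e2e - pull)                               // 2.371096914s == podStartSLOduration
    }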
Sep 13 00:16:57.995045 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:16:58.006613 containerd[1444]: time="2025-09-13T00:16:58.006513970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fs48x,Uid:0b49a5c7-d8ed-4263-b267-b04f7372f88c,Namespace:calico-system,Attempt:1,} returns sandbox id \"7fe730565fc50e613efc2b53124308e6db9888465473d6de76981595e78067e5\"" Sep 13 00:16:58.010012 containerd[1444]: time="2025-09-13T00:16:58.009980293Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 13 00:16:58.032467 systemd-networkd[1380]: cali16afb906159: Link UP Sep 13 00:16:58.033143 systemd-networkd[1380]: cali16afb906159: Gained carrier Sep 13 00:16:58.047671 containerd[1444]: 2025-09-13 00:16:57.877 [INFO][4343] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--22r4m-eth0 goldmane-54d579b49d- calico-system c78523f1-1b2e-44f8-9fd4-6f6c075a99ad 996 0 2025-09-13 00:16:37 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-22r4m eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali16afb906159 [] [] }} ContainerID="5f15513d95ad8550fcb083b05d4a8653f94a421f739ec19edb1ae3fe436843a4" Namespace="calico-system" Pod="goldmane-54d579b49d-22r4m" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--22r4m-" Sep 13 00:16:58.047671 containerd[1444]: 2025-09-13 00:16:57.877 [INFO][4343] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5f15513d95ad8550fcb083b05d4a8653f94a421f739ec19edb1ae3fe436843a4" Namespace="calico-system" Pod="goldmane-54d579b49d-22r4m" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--22r4m-eth0" Sep 13 00:16:58.047671 containerd[1444]: 2025-09-13 00:16:57.907 [INFO][4368] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5f15513d95ad8550fcb083b05d4a8653f94a421f739ec19edb1ae3fe436843a4" HandleID="k8s-pod-network.5f15513d95ad8550fcb083b05d4a8653f94a421f739ec19edb1ae3fe436843a4" Workload="localhost-k8s-goldmane--54d579b49d--22r4m-eth0" Sep 13 00:16:58.047671 containerd[1444]: 2025-09-13 00:16:57.907 [INFO][4368] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5f15513d95ad8550fcb083b05d4a8653f94a421f739ec19edb1ae3fe436843a4" HandleID="k8s-pod-network.5f15513d95ad8550fcb083b05d4a8653f94a421f739ec19edb1ae3fe436843a4" Workload="localhost-k8s-goldmane--54d579b49d--22r4m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003ac510), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-22r4m", "timestamp":"2025-09-13 00:16:57.90766933 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:16:58.047671 containerd[1444]: 2025-09-13 00:16:57.908 [INFO][4368] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:16:58.047671 containerd[1444]: 2025-09-13 00:16:57.924 [INFO][4368] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:16:58.047671 containerd[1444]: 2025-09-13 00:16:57.924 [INFO][4368] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:16:58.047671 containerd[1444]: 2025-09-13 00:16:58.000 [INFO][4368] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5f15513d95ad8550fcb083b05d4a8653f94a421f739ec19edb1ae3fe436843a4" host="localhost" Sep 13 00:16:58.047671 containerd[1444]: 2025-09-13 00:16:58.007 [INFO][4368] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:16:58.047671 containerd[1444]: 2025-09-13 00:16:58.012 [INFO][4368] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:16:58.047671 containerd[1444]: 2025-09-13 00:16:58.014 [INFO][4368] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:16:58.047671 containerd[1444]: 2025-09-13 00:16:58.016 [INFO][4368] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:16:58.047671 containerd[1444]: 2025-09-13 00:16:58.016 [INFO][4368] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5f15513d95ad8550fcb083b05d4a8653f94a421f739ec19edb1ae3fe436843a4" host="localhost" Sep 13 00:16:58.047671 containerd[1444]: 2025-09-13 00:16:58.018 [INFO][4368] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5f15513d95ad8550fcb083b05d4a8653f94a421f739ec19edb1ae3fe436843a4 Sep 13 00:16:58.047671 containerd[1444]: 2025-09-13 00:16:58.021 [INFO][4368] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5f15513d95ad8550fcb083b05d4a8653f94a421f739ec19edb1ae3fe436843a4" host="localhost" Sep 13 00:16:58.047671 containerd[1444]: 2025-09-13 00:16:58.026 [INFO][4368] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.5f15513d95ad8550fcb083b05d4a8653f94a421f739ec19edb1ae3fe436843a4" host="localhost" Sep 13 00:16:58.047671 containerd[1444]: 2025-09-13 00:16:58.026 [INFO][4368] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.5f15513d95ad8550fcb083b05d4a8653f94a421f739ec19edb1ae3fe436843a4" host="localhost" Sep 13 00:16:58.047671 containerd[1444]: 2025-09-13 00:16:58.026 [INFO][4368] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:16:58.047671 containerd[1444]: 2025-09-13 00:16:58.026 [INFO][4368] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="5f15513d95ad8550fcb083b05d4a8653f94a421f739ec19edb1ae3fe436843a4" HandleID="k8s-pod-network.5f15513d95ad8550fcb083b05d4a8653f94a421f739ec19edb1ae3fe436843a4" Workload="localhost-k8s-goldmane--54d579b49d--22r4m-eth0" Sep 13 00:16:58.048342 containerd[1444]: 2025-09-13 00:16:58.029 [INFO][4343] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5f15513d95ad8550fcb083b05d4a8653f94a421f739ec19edb1ae3fe436843a4" Namespace="calico-system" Pod="goldmane-54d579b49d-22r4m" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--22r4m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--22r4m-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"c78523f1-1b2e-44f8-9fd4-6f6c075a99ad", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 16, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-22r4m", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali16afb906159", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:16:58.048342 containerd[1444]: 2025-09-13 00:16:58.029 [INFO][4343] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="5f15513d95ad8550fcb083b05d4a8653f94a421f739ec19edb1ae3fe436843a4" Namespace="calico-system" Pod="goldmane-54d579b49d-22r4m" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--22r4m-eth0" Sep 13 00:16:58.048342 containerd[1444]: 2025-09-13 00:16:58.029 [INFO][4343] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali16afb906159 ContainerID="5f15513d95ad8550fcb083b05d4a8653f94a421f739ec19edb1ae3fe436843a4" Namespace="calico-system" Pod="goldmane-54d579b49d-22r4m" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--22r4m-eth0" Sep 13 00:16:58.048342 containerd[1444]: 2025-09-13 00:16:58.032 [INFO][4343] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5f15513d95ad8550fcb083b05d4a8653f94a421f739ec19edb1ae3fe436843a4" Namespace="calico-system" Pod="goldmane-54d579b49d-22r4m" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--22r4m-eth0" Sep 13 00:16:58.048342 containerd[1444]: 2025-09-13 00:16:58.035 [INFO][4343] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5f15513d95ad8550fcb083b05d4a8653f94a421f739ec19edb1ae3fe436843a4" Namespace="calico-system" Pod="goldmane-54d579b49d-22r4m" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--22r4m-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--22r4m-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"c78523f1-1b2e-44f8-9fd4-6f6c075a99ad", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 16, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5f15513d95ad8550fcb083b05d4a8653f94a421f739ec19edb1ae3fe436843a4", Pod:"goldmane-54d579b49d-22r4m", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali16afb906159", MAC:"3a:8f:4a:fc:1b:16", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:16:58.048342 containerd[1444]: 2025-09-13 00:16:58.044 [INFO][4343] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5f15513d95ad8550fcb083b05d4a8653f94a421f739ec19edb1ae3fe436843a4" Namespace="calico-system" Pod="goldmane-54d579b49d-22r4m" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--22r4m-eth0" Sep 13 00:16:58.063879 containerd[1444]: time="2025-09-13T00:16:58.063686894Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:16:58.063879 containerd[1444]: time="2025-09-13T00:16:58.063738694Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:16:58.063879 containerd[1444]: time="2025-09-13T00:16:58.063750134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:16:58.063879 containerd[1444]: time="2025-09-13T00:16:58.063831134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:16:58.081790 systemd[1]: Started cri-containerd-5f15513d95ad8550fcb083b05d4a8653f94a421f739ec19edb1ae3fe436843a4.scope - libcontainer container 5f15513d95ad8550fcb083b05d4a8653f94a421f739ec19edb1ae3fe436843a4. 
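The goldmane endpoint above repeats the two-step write also visible for whisker and csi-node-driver: k8s.go 418 first populates the WorkloadEndpoint with the IP, profiles, and host-side interface name but an empty ContainerID and MAC; k8s.go 446 then patches in the MAC (here 3a:8f:4a:fc:1b:16) and the active container ID; and k8s.go 532 writes the finished endpoint to the datastore. A compressed sketch of that populate-then-finalize pattern, with illustrative types:

    package main

    import "fmt"

    type workloadEndpoint struct {
        Pod, InterfaceName, ContainerID, MAC string
        IPNetworks, Profiles                 []string
    }

    // populate builds the endpoint before the sandbox is live: IP, profiles, and
    // veth name are known; ContainerID and MAC are still empty.
    func populate(pod, ip, iface string, profiles []string) workloadEndpoint {
        return workloadEndpoint{Pod: pod, InterfaceName: iface, IPNetworks: []string{ip}, Profiles: profiles}
    }

    // finalize binds the endpoint to the running sandbox.
    func finalize(ep *workloadEndpoint, containerID, mac string) {
        ep.ContainerID = containerID
        ep.MAC = mac
    }

    func main() {
        ep := populate("goldmane-54d579b49d-22r4m", "192.168.88.131/32", "cali16afb906159",
            []string{"kns.calico-system", "ksa.calico-system.goldmane"})
        finalize(&ep, "5f15513d95ad8550...", "3a:8f:4a:fc:1b:16")
        fmt.Printf("write to datastore: %+v\n", ep)
    }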
Sep 13 00:16:58.093236 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:16:58.121937 containerd[1444]: time="2025-09-13T00:16:58.121895139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-22r4m,Uid:c78523f1-1b2e-44f8-9fd4-6f6c075a99ad,Namespace:calico-system,Attempt:1,} returns sandbox id \"5f15513d95ad8550fcb083b05d4a8653f94a421f739ec19edb1ae3fe436843a4\"" Sep 13 00:16:58.669878 containerd[1444]: time="2025-09-13T00:16:58.669828718Z" level=info msg="StopPodSandbox for \"6f045093ea64e7ab6523efd22b71b84a995a0b8aa3bb0d7c2b4957db7a20f022\"" Sep 13 00:16:58.670933 containerd[1444]: time="2025-09-13T00:16:58.670303479Z" level=info msg="StopPodSandbox for \"2790ce029e84719cd3b1faf406cefb51cef139ee1c33ef00950cc85c9ef9ccbb\"" Sep 13 00:16:58.767770 containerd[1444]: 2025-09-13 00:16:58.714 [INFO][4505] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6f045093ea64e7ab6523efd22b71b84a995a0b8aa3bb0d7c2b4957db7a20f022" Sep 13 00:16:58.767770 containerd[1444]: 2025-09-13 00:16:58.714 [INFO][4505] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6f045093ea64e7ab6523efd22b71b84a995a0b8aa3bb0d7c2b4957db7a20f022" iface="eth0" netns="/var/run/netns/cni-e365d5fd-3f4c-87c8-c77f-03127ab5934b" Sep 13 00:16:58.767770 containerd[1444]: 2025-09-13 00:16:58.715 [INFO][4505] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6f045093ea64e7ab6523efd22b71b84a995a0b8aa3bb0d7c2b4957db7a20f022" iface="eth0" netns="/var/run/netns/cni-e365d5fd-3f4c-87c8-c77f-03127ab5934b" Sep 13 00:16:58.767770 containerd[1444]: 2025-09-13 00:16:58.715 [INFO][4505] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6f045093ea64e7ab6523efd22b71b84a995a0b8aa3bb0d7c2b4957db7a20f022" iface="eth0" netns="/var/run/netns/cni-e365d5fd-3f4c-87c8-c77f-03127ab5934b" Sep 13 00:16:58.767770 containerd[1444]: 2025-09-13 00:16:58.715 [INFO][4505] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6f045093ea64e7ab6523efd22b71b84a995a0b8aa3bb0d7c2b4957db7a20f022" Sep 13 00:16:58.767770 containerd[1444]: 2025-09-13 00:16:58.715 [INFO][4505] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6f045093ea64e7ab6523efd22b71b84a995a0b8aa3bb0d7c2b4957db7a20f022" Sep 13 00:16:58.767770 containerd[1444]: 2025-09-13 00:16:58.748 [INFO][4522] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6f045093ea64e7ab6523efd22b71b84a995a0b8aa3bb0d7c2b4957db7a20f022" HandleID="k8s-pod-network.6f045093ea64e7ab6523efd22b71b84a995a0b8aa3bb0d7c2b4957db7a20f022" Workload="localhost-k8s-calico--apiserver--859b9fb76c--tctwm-eth0" Sep 13 00:16:58.767770 containerd[1444]: 2025-09-13 00:16:58.748 [INFO][4522] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:16:58.767770 containerd[1444]: 2025-09-13 00:16:58.748 [INFO][4522] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:16:58.767770 containerd[1444]: 2025-09-13 00:16:58.761 [WARNING][4522] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6f045093ea64e7ab6523efd22b71b84a995a0b8aa3bb0d7c2b4957db7a20f022" HandleID="k8s-pod-network.6f045093ea64e7ab6523efd22b71b84a995a0b8aa3bb0d7c2b4957db7a20f022" Workload="localhost-k8s-calico--apiserver--859b9fb76c--tctwm-eth0" Sep 13 00:16:58.767770 containerd[1444]: 2025-09-13 00:16:58.761 [INFO][4522] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6f045093ea64e7ab6523efd22b71b84a995a0b8aa3bb0d7c2b4957db7a20f022" HandleID="k8s-pod-network.6f045093ea64e7ab6523efd22b71b84a995a0b8aa3bb0d7c2b4957db7a20f022" Workload="localhost-k8s-calico--apiserver--859b9fb76c--tctwm-eth0" Sep 13 00:16:58.767770 containerd[1444]: 2025-09-13 00:16:58.763 [INFO][4522] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:16:58.767770 containerd[1444]: 2025-09-13 00:16:58.765 [INFO][4505] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6f045093ea64e7ab6523efd22b71b84a995a0b8aa3bb0d7c2b4957db7a20f022" Sep 13 00:16:58.768524 containerd[1444]: time="2025-09-13T00:16:58.767877273Z" level=info msg="TearDown network for sandbox \"6f045093ea64e7ab6523efd22b71b84a995a0b8aa3bb0d7c2b4957db7a20f022\" successfully" Sep 13 00:16:58.768524 containerd[1444]: time="2025-09-13T00:16:58.767903074Z" level=info msg="StopPodSandbox for \"6f045093ea64e7ab6523efd22b71b84a995a0b8aa3bb0d7c2b4957db7a20f022\" returns successfully" Sep 13 00:16:58.769784 containerd[1444]: time="2025-09-13T00:16:58.769756395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-859b9fb76c-tctwm,Uid:c8e6805e-32c3-4cf6-8f89-9d8311bf375d,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:16:58.773283 systemd[1]: run-netns-cni\x2de365d5fd\x2d3f4c\x2d87c8\x2dc77f\x2d03127ab5934b.mount: Deactivated successfully. Sep 13 00:16:58.782236 containerd[1444]: 2025-09-13 00:16:58.729 [INFO][4510] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2790ce029e84719cd3b1faf406cefb51cef139ee1c33ef00950cc85c9ef9ccbb" Sep 13 00:16:58.782236 containerd[1444]: 2025-09-13 00:16:58.729 [INFO][4510] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2790ce029e84719cd3b1faf406cefb51cef139ee1c33ef00950cc85c9ef9ccbb" iface="eth0" netns="/var/run/netns/cni-06254677-812d-c0b2-649d-96a9fa8c86f5" Sep 13 00:16:58.782236 containerd[1444]: 2025-09-13 00:16:58.730 [INFO][4510] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2790ce029e84719cd3b1faf406cefb51cef139ee1c33ef00950cc85c9ef9ccbb" iface="eth0" netns="/var/run/netns/cni-06254677-812d-c0b2-649d-96a9fa8c86f5" Sep 13 00:16:58.782236 containerd[1444]: 2025-09-13 00:16:58.733 [INFO][4510] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="2790ce029e84719cd3b1faf406cefb51cef139ee1c33ef00950cc85c9ef9ccbb" iface="eth0" netns="/var/run/netns/cni-06254677-812d-c0b2-649d-96a9fa8c86f5" Sep 13 00:16:58.782236 containerd[1444]: 2025-09-13 00:16:58.733 [INFO][4510] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2790ce029e84719cd3b1faf406cefb51cef139ee1c33ef00950cc85c9ef9ccbb" Sep 13 00:16:58.782236 containerd[1444]: 2025-09-13 00:16:58.733 [INFO][4510] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2790ce029e84719cd3b1faf406cefb51cef139ee1c33ef00950cc85c9ef9ccbb" Sep 13 00:16:58.782236 containerd[1444]: 2025-09-13 00:16:58.757 [INFO][4529] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2790ce029e84719cd3b1faf406cefb51cef139ee1c33ef00950cc85c9ef9ccbb" HandleID="k8s-pod-network.2790ce029e84719cd3b1faf406cefb51cef139ee1c33ef00950cc85c9ef9ccbb" Workload="localhost-k8s-coredns--674b8bbfcf--sczp2-eth0" Sep 13 00:16:58.782236 containerd[1444]: 2025-09-13 00:16:58.757 [INFO][4529] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:16:58.782236 containerd[1444]: 2025-09-13 00:16:58.763 [INFO][4529] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:16:58.782236 containerd[1444]: 2025-09-13 00:16:58.776 [WARNING][4529] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2790ce029e84719cd3b1faf406cefb51cef139ee1c33ef00950cc85c9ef9ccbb" HandleID="k8s-pod-network.2790ce029e84719cd3b1faf406cefb51cef139ee1c33ef00950cc85c9ef9ccbb" Workload="localhost-k8s-coredns--674b8bbfcf--sczp2-eth0" Sep 13 00:16:58.782236 containerd[1444]: 2025-09-13 00:16:58.776 [INFO][4529] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2790ce029e84719cd3b1faf406cefb51cef139ee1c33ef00950cc85c9ef9ccbb" HandleID="k8s-pod-network.2790ce029e84719cd3b1faf406cefb51cef139ee1c33ef00950cc85c9ef9ccbb" Workload="localhost-k8s-coredns--674b8bbfcf--sczp2-eth0" Sep 13 00:16:58.782236 containerd[1444]: 2025-09-13 00:16:58.777 [INFO][4529] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:16:58.782236 containerd[1444]: 2025-09-13 00:16:58.779 [INFO][4510] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2790ce029e84719cd3b1faf406cefb51cef139ee1c33ef00950cc85c9ef9ccbb" Sep 13 00:16:58.782726 containerd[1444]: time="2025-09-13T00:16:58.782359205Z" level=info msg="TearDown network for sandbox \"2790ce029e84719cd3b1faf406cefb51cef139ee1c33ef00950cc85c9ef9ccbb\" successfully" Sep 13 00:16:58.782726 containerd[1444]: time="2025-09-13T00:16:58.782452205Z" level=info msg="StopPodSandbox for \"2790ce029e84719cd3b1faf406cefb51cef139ee1c33ef00950cc85c9ef9ccbb\" returns successfully" Sep 13 00:16:58.785753 kubelet[2490]: E0913 00:16:58.784852 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:58.786591 containerd[1444]: time="2025-09-13T00:16:58.786458928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-sczp2,Uid:95502567-42ee-47eb-b6a6-72d10242f778,Namespace:kube-system,Attempt:1,}" Sep 13 00:16:58.787210 systemd[1]: run-netns-cni\x2d06254677\x2d812d\x2dc0b2\x2d649d\x2d96a9fa8c86f5.mount: Deactivated successfully. 
Sep 13 00:16:58.909323 systemd-networkd[1380]: calia42bbcf6463: Link UP Sep 13 00:16:58.909535 systemd-networkd[1380]: calia42bbcf6463: Gained carrier Sep 13 00:16:58.927774 containerd[1444]: 2025-09-13 00:16:58.828 [INFO][4549] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--sczp2-eth0 coredns-674b8bbfcf- kube-system 95502567-42ee-47eb-b6a6-72d10242f778 1010 0 2025-09-13 00:16:23 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-sczp2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia42bbcf6463 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="552863bb3112d550909c809b2193e979bc46d0d9f4acd2b78177014c14a75937" Namespace="kube-system" Pod="coredns-674b8bbfcf-sczp2" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--sczp2-" Sep 13 00:16:58.927774 containerd[1444]: 2025-09-13 00:16:58.828 [INFO][4549] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="552863bb3112d550909c809b2193e979bc46d0d9f4acd2b78177014c14a75937" Namespace="kube-system" Pod="coredns-674b8bbfcf-sczp2" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--sczp2-eth0" Sep 13 00:16:58.927774 containerd[1444]: 2025-09-13 00:16:58.866 [INFO][4568] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="552863bb3112d550909c809b2193e979bc46d0d9f4acd2b78177014c14a75937" HandleID="k8s-pod-network.552863bb3112d550909c809b2193e979bc46d0d9f4acd2b78177014c14a75937" Workload="localhost-k8s-coredns--674b8bbfcf--sczp2-eth0" Sep 13 00:16:58.927774 containerd[1444]: 2025-09-13 00:16:58.866 [INFO][4568] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="552863bb3112d550909c809b2193e979bc46d0d9f4acd2b78177014c14a75937" HandleID="k8s-pod-network.552863bb3112d550909c809b2193e979bc46d0d9f4acd2b78177014c14a75937" Workload="localhost-k8s-coredns--674b8bbfcf--sczp2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c2940), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-sczp2", "timestamp":"2025-09-13 00:16:58.866330709 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:16:58.927774 containerd[1444]: 2025-09-13 00:16:58.866 [INFO][4568] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:16:58.927774 containerd[1444]: 2025-09-13 00:16:58.866 [INFO][4568] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
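The Workload= field in these entries ("localhost-k8s-coredns--674b8bbfcf--sczp2-eth0") looks like node, orchestrator, pod name, and interface joined by single hyphens, with hyphens inside each segment doubled so the separators stay unambiguous. A sketch of that encoding, inferred from the names in this log rather than taken from libcalico-go:

```python
# Sketch of the apparent WorkloadEndpoint naming scheme (an inference from
# the Workload= fields above, not libcalico-go's actual implementation).
def wep_name(node: str, pod: str, iface: str) -> str:
    esc = lambda s: s.replace("-", "--")  # double inner hyphens
    return f"{esc(node)}-k8s-{esc(pod)}-{esc(iface)}"

print(wep_name("localhost", "coredns-674b8bbfcf-sczp2", "eth0"))
# localhost-k8s-coredns--674b8bbfcf--sczp2-eth0 -- matches the field above
```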
Sep 13 00:16:58.927774 containerd[1444]: 2025-09-13 00:16:58.866 [INFO][4568] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:16:58.927774 containerd[1444]: 2025-09-13 00:16:58.877 [INFO][4568] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.552863bb3112d550909c809b2193e979bc46d0d9f4acd2b78177014c14a75937" host="localhost" Sep 13 00:16:58.927774 containerd[1444]: 2025-09-13 00:16:58.882 [INFO][4568] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:16:58.927774 containerd[1444]: 2025-09-13 00:16:58.886 [INFO][4568] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:16:58.927774 containerd[1444]: 2025-09-13 00:16:58.888 [INFO][4568] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:16:58.927774 containerd[1444]: 2025-09-13 00:16:58.890 [INFO][4568] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:16:58.927774 containerd[1444]: 2025-09-13 00:16:58.890 [INFO][4568] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.552863bb3112d550909c809b2193e979bc46d0d9f4acd2b78177014c14a75937" host="localhost" Sep 13 00:16:58.927774 containerd[1444]: 2025-09-13 00:16:58.892 [INFO][4568] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.552863bb3112d550909c809b2193e979bc46d0d9f4acd2b78177014c14a75937 Sep 13 00:16:58.927774 containerd[1444]: 2025-09-13 00:16:58.896 [INFO][4568] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.552863bb3112d550909c809b2193e979bc46d0d9f4acd2b78177014c14a75937" host="localhost" Sep 13 00:16:58.927774 containerd[1444]: 2025-09-13 00:16:58.901 [INFO][4568] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.552863bb3112d550909c809b2193e979bc46d0d9f4acd2b78177014c14a75937" host="localhost" Sep 13 00:16:58.927774 containerd[1444]: 2025-09-13 00:16:58.901 [INFO][4568] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.552863bb3112d550909c809b2193e979bc46d0d9f4acd2b78177014c14a75937" host="localhost" Sep 13 00:16:58.927774 containerd[1444]: 2025-09-13 00:16:58.901 [INFO][4568] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
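The IPAM walk above confirms this host's affinity for block 192.168.88.128/26 and claims 192.168.88.132 from it. A quick stdlib check that the claimed address sits inside that block:

```python
# A /26 leaves this node 64 addresses (.128-.191) to hand out to local pods.
import ipaddress

block = ipaddress.ip_network("192.168.88.128/26")
claimed = ipaddress.ip_address("192.168.88.132")
assert claimed in block
print(block.num_addresses)       # 64
print(list(block.hosts())[:4])   # .129-.132, the low end of the usable range
```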
Sep 13 00:16:58.927774 containerd[1444]: 2025-09-13 00:16:58.901 [INFO][4568] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="552863bb3112d550909c809b2193e979bc46d0d9f4acd2b78177014c14a75937" HandleID="k8s-pod-network.552863bb3112d550909c809b2193e979bc46d0d9f4acd2b78177014c14a75937" Workload="localhost-k8s-coredns--674b8bbfcf--sczp2-eth0" Sep 13 00:16:58.928495 containerd[1444]: 2025-09-13 00:16:58.905 [INFO][4549] cni-plugin/k8s.go 418: Populated endpoint ContainerID="552863bb3112d550909c809b2193e979bc46d0d9f4acd2b78177014c14a75937" Namespace="kube-system" Pod="coredns-674b8bbfcf-sczp2" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--sczp2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--sczp2-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"95502567-42ee-47eb-b6a6-72d10242f778", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 16, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-sczp2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia42bbcf6463", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:16:58.928495 containerd[1444]: 2025-09-13 00:16:58.906 [INFO][4549] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="552863bb3112d550909c809b2193e979bc46d0d9f4acd2b78177014c14a75937" Namespace="kube-system" Pod="coredns-674b8bbfcf-sczp2" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--sczp2-eth0" Sep 13 00:16:58.928495 containerd[1444]: 2025-09-13 00:16:58.906 [INFO][4549] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia42bbcf6463 ContainerID="552863bb3112d550909c809b2193e979bc46d0d9f4acd2b78177014c14a75937" Namespace="kube-system" Pod="coredns-674b8bbfcf-sczp2" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--sczp2-eth0" Sep 13 00:16:58.928495 containerd[1444]: 2025-09-13 00:16:58.910 [INFO][4549] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="552863bb3112d550909c809b2193e979bc46d0d9f4acd2b78177014c14a75937" Namespace="kube-system" Pod="coredns-674b8bbfcf-sczp2" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--sczp2-eth0" Sep 13 00:16:58.928495 
containerd[1444]: 2025-09-13 00:16:58.911 [INFO][4549] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="552863bb3112d550909c809b2193e979bc46d0d9f4acd2b78177014c14a75937" Namespace="kube-system" Pod="coredns-674b8bbfcf-sczp2" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--sczp2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--sczp2-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"95502567-42ee-47eb-b6a6-72d10242f778", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 16, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"552863bb3112d550909c809b2193e979bc46d0d9f4acd2b78177014c14a75937", Pod:"coredns-674b8bbfcf-sczp2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia42bbcf6463", MAC:"86:6d:0b:a6:1e:11", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:16:58.928495 containerd[1444]: 2025-09-13 00:16:58.925 [INFO][4549] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="552863bb3112d550909c809b2193e979bc46d0d9f4acd2b78177014c14a75937" Namespace="kube-system" Pod="coredns-674b8bbfcf-sczp2" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--sczp2-eth0" Sep 13 00:16:58.945282 containerd[1444]: time="2025-09-13T00:16:58.945153729Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:16:58.945282 containerd[1444]: time="2025-09-13T00:16:58.945204889Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:16:58.945282 containerd[1444]: time="2025-09-13T00:16:58.945220409Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:16:58.945548 containerd[1444]: time="2025-09-13T00:16:58.945295289Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:16:58.973920 systemd[1]: Started cri-containerd-552863bb3112d550909c809b2193e979bc46d0d9f4acd2b78177014c14a75937.scope - libcontainer container 552863bb3112d550909c809b2193e979bc46d0d9f4acd2b78177014c14a75937. Sep 13 00:16:58.983815 systemd-networkd[1380]: cali9c7a8e4bb20: Gained IPv6LL Sep 13 00:16:58.989008 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:16:58.997413 containerd[1444]: time="2025-09-13T00:16:58.997343209Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:58.998254 containerd[1444]: time="2025-09-13T00:16:58.998219850Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489" Sep 13 00:16:59.003140 containerd[1444]: time="2025-09-13T00:16:59.002963213Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:59.008318 containerd[1444]: time="2025-09-13T00:16:59.008260937Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:16:59.009244 containerd[1444]: time="2025-09-13T00:16:59.009191938Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 999.172965ms" Sep 13 00:16:59.009244 containerd[1444]: time="2025-09-13T00:16:59.009231738Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\"" Sep 13 00:16:59.010880 containerd[1444]: time="2025-09-13T00:16:59.010844459Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 13 00:16:59.015060 containerd[1444]: time="2025-09-13T00:16:59.014998382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-sczp2,Uid:95502567-42ee-47eb-b6a6-72d10242f778,Namespace:kube-system,Attempt:1,} returns sandbox id \"552863bb3112d550909c809b2193e979bc46d0d9f4acd2b78177014c14a75937\"" Sep 13 00:16:59.015933 kubelet[2490]: E0913 00:16:59.015756 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:59.016436 containerd[1444]: time="2025-09-13T00:16:59.016400543Z" level=info msg="CreateContainer within sandbox \"7fe730565fc50e613efc2b53124308e6db9888465473d6de76981595e78067e5\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 13 00:16:59.022093 systemd-networkd[1380]: cali3b2bdcbd070: Link UP Sep 13 00:16:59.022922 systemd-networkd[1380]: cali3b2bdcbd070: Gained carrier Sep 13 00:16:59.023224 containerd[1444]: time="2025-09-13T00:16:59.023163668Z" level=info msg="CreateContainer within sandbox \"552863bb3112d550909c809b2193e979bc46d0d9f4acd2b78177014c14a75937\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:16:59.040722 containerd[1444]: 
time="2025-09-13T00:16:59.040650961Z" level=info msg="CreateContainer within sandbox \"7fe730565fc50e613efc2b53124308e6db9888465473d6de76981595e78067e5\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"19aa30b93cc6ebd32da5c0fd4277f58a319992b80d13eb053dbc9868d92b790d\"" Sep 13 00:16:59.041371 containerd[1444]: 2025-09-13 00:16:58.833 [INFO][4539] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--859b9fb76c--tctwm-eth0 calico-apiserver-859b9fb76c- calico-apiserver c8e6805e-32c3-4cf6-8f89-9d8311bf375d 1009 0 2025-09-13 00:16:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:859b9fb76c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-859b9fb76c-tctwm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3b2bdcbd070 [] [] }} ContainerID="6ea8db0aee61cf1b23d675ffaf58f48c494fa9d9b102eae8769caac090de2712" Namespace="calico-apiserver" Pod="calico-apiserver-859b9fb76c-tctwm" WorkloadEndpoint="localhost-k8s-calico--apiserver--859b9fb76c--tctwm-" Sep 13 00:16:59.041371 containerd[1444]: 2025-09-13 00:16:58.833 [INFO][4539] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6ea8db0aee61cf1b23d675ffaf58f48c494fa9d9b102eae8769caac090de2712" Namespace="calico-apiserver" Pod="calico-apiserver-859b9fb76c-tctwm" WorkloadEndpoint="localhost-k8s-calico--apiserver--859b9fb76c--tctwm-eth0" Sep 13 00:16:59.041371 containerd[1444]: 2025-09-13 00:16:58.870 [INFO][4574] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6ea8db0aee61cf1b23d675ffaf58f48c494fa9d9b102eae8769caac090de2712" HandleID="k8s-pod-network.6ea8db0aee61cf1b23d675ffaf58f48c494fa9d9b102eae8769caac090de2712" Workload="localhost-k8s-calico--apiserver--859b9fb76c--tctwm-eth0" Sep 13 00:16:59.041371 containerd[1444]: 2025-09-13 00:16:58.870 [INFO][4574] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6ea8db0aee61cf1b23d675ffaf58f48c494fa9d9b102eae8769caac090de2712" HandleID="k8s-pod-network.6ea8db0aee61cf1b23d675ffaf58f48c494fa9d9b102eae8769caac090de2712" Workload="localhost-k8s-calico--apiserver--859b9fb76c--tctwm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137cb0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-859b9fb76c-tctwm", "timestamp":"2025-09-13 00:16:58.870125152 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:16:59.041371 containerd[1444]: 2025-09-13 00:16:58.870 [INFO][4574] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:16:59.041371 containerd[1444]: 2025-09-13 00:16:58.901 [INFO][4574] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:16:59.041371 containerd[1444]: 2025-09-13 00:16:58.901 [INFO][4574] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:16:59.041371 containerd[1444]: 2025-09-13 00:16:58.978 [INFO][4574] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6ea8db0aee61cf1b23d675ffaf58f48c494fa9d9b102eae8769caac090de2712" host="localhost" Sep 13 00:16:59.041371 containerd[1444]: 2025-09-13 00:16:58.985 [INFO][4574] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:16:59.041371 containerd[1444]: 2025-09-13 00:16:58.991 [INFO][4574] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:16:59.041371 containerd[1444]: 2025-09-13 00:16:58.993 [INFO][4574] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:16:59.041371 containerd[1444]: 2025-09-13 00:16:58.995 [INFO][4574] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:16:59.041371 containerd[1444]: 2025-09-13 00:16:58.995 [INFO][4574] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6ea8db0aee61cf1b23d675ffaf58f48c494fa9d9b102eae8769caac090de2712" host="localhost" Sep 13 00:16:59.041371 containerd[1444]: 2025-09-13 00:16:59.000 [INFO][4574] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6ea8db0aee61cf1b23d675ffaf58f48c494fa9d9b102eae8769caac090de2712 Sep 13 00:16:59.041371 containerd[1444]: 2025-09-13 00:16:59.005 [INFO][4574] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6ea8db0aee61cf1b23d675ffaf58f48c494fa9d9b102eae8769caac090de2712" host="localhost" Sep 13 00:16:59.041371 containerd[1444]: 2025-09-13 00:16:59.013 [INFO][4574] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.6ea8db0aee61cf1b23d675ffaf58f48c494fa9d9b102eae8769caac090de2712" host="localhost" Sep 13 00:16:59.041371 containerd[1444]: 2025-09-13 00:16:59.015 [INFO][4574] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.6ea8db0aee61cf1b23d675ffaf58f48c494fa9d9b102eae8769caac090de2712" host="localhost" Sep 13 00:16:59.041371 containerd[1444]: 2025-09-13 00:16:59.015 [INFO][4574] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
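The "Writing block in order to claim IPs" step reads the block, marks the chosen ordinal allocated, and writes the block back; if another writer got there first, the write fails and the claim retries from a fresh read. A toy compare-and-swap with that shape (Calico's real datastore revisions and retry loop differ in detail):

```python
class Block:
    def __init__(self) -> None:
        self.revision = 0
        self.allocated: set[int] = set()

def claim(block: Block, seen_revision: int, ordinal: int) -> bool:
    if block.revision != seen_revision:  # someone wrote the block since we read it
        return False                     # caller re-reads the block and retries
    block.allocated.add(ordinal)
    block.revision += 1
    return True

b = Block()
rev = b.revision
assert claim(b, rev, 5)       # ordinal 5 of 192.168.88.128/26 is .133, claimed above
assert not claim(b, rev, 6)   # stale revision -> retry with a fresh read
```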
Sep 13 00:16:59.041371 containerd[1444]: 2025-09-13 00:16:59.015 [INFO][4574] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="6ea8db0aee61cf1b23d675ffaf58f48c494fa9d9b102eae8769caac090de2712" HandleID="k8s-pod-network.6ea8db0aee61cf1b23d675ffaf58f48c494fa9d9b102eae8769caac090de2712" Workload="localhost-k8s-calico--apiserver--859b9fb76c--tctwm-eth0" Sep 13 00:16:59.042884 containerd[1444]: 2025-09-13 00:16:59.020 [INFO][4539] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6ea8db0aee61cf1b23d675ffaf58f48c494fa9d9b102eae8769caac090de2712" Namespace="calico-apiserver" Pod="calico-apiserver-859b9fb76c-tctwm" WorkloadEndpoint="localhost-k8s-calico--apiserver--859b9fb76c--tctwm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--859b9fb76c--tctwm-eth0", GenerateName:"calico-apiserver-859b9fb76c-", Namespace:"calico-apiserver", SelfLink:"", UID:"c8e6805e-32c3-4cf6-8f89-9d8311bf375d", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 16, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"859b9fb76c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-859b9fb76c-tctwm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3b2bdcbd070", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:16:59.042884 containerd[1444]: 2025-09-13 00:16:59.020 [INFO][4539] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="6ea8db0aee61cf1b23d675ffaf58f48c494fa9d9b102eae8769caac090de2712" Namespace="calico-apiserver" Pod="calico-apiserver-859b9fb76c-tctwm" WorkloadEndpoint="localhost-k8s-calico--apiserver--859b9fb76c--tctwm-eth0" Sep 13 00:16:59.042884 containerd[1444]: 2025-09-13 00:16:59.020 [INFO][4539] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3b2bdcbd070 ContainerID="6ea8db0aee61cf1b23d675ffaf58f48c494fa9d9b102eae8769caac090de2712" Namespace="calico-apiserver" Pod="calico-apiserver-859b9fb76c-tctwm" WorkloadEndpoint="localhost-k8s-calico--apiserver--859b9fb76c--tctwm-eth0" Sep 13 00:16:59.042884 containerd[1444]: 2025-09-13 00:16:59.023 [INFO][4539] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6ea8db0aee61cf1b23d675ffaf58f48c494fa9d9b102eae8769caac090de2712" Namespace="calico-apiserver" Pod="calico-apiserver-859b9fb76c-tctwm" WorkloadEndpoint="localhost-k8s-calico--apiserver--859b9fb76c--tctwm-eth0" Sep 13 00:16:59.042884 containerd[1444]: 2025-09-13 00:16:59.025 [INFO][4539] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="6ea8db0aee61cf1b23d675ffaf58f48c494fa9d9b102eae8769caac090de2712" Namespace="calico-apiserver" Pod="calico-apiserver-859b9fb76c-tctwm" WorkloadEndpoint="localhost-k8s-calico--apiserver--859b9fb76c--tctwm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--859b9fb76c--tctwm-eth0", GenerateName:"calico-apiserver-859b9fb76c-", Namespace:"calico-apiserver", SelfLink:"", UID:"c8e6805e-32c3-4cf6-8f89-9d8311bf375d", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 16, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"859b9fb76c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6ea8db0aee61cf1b23d675ffaf58f48c494fa9d9b102eae8769caac090de2712", Pod:"calico-apiserver-859b9fb76c-tctwm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3b2bdcbd070", MAC:"46:39:53:8b:31:a5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:16:59.042884 containerd[1444]: 2025-09-13 00:16:59.037 [INFO][4539] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6ea8db0aee61cf1b23d675ffaf58f48c494fa9d9b102eae8769caac090de2712" Namespace="calico-apiserver" Pod="calico-apiserver-859b9fb76c-tctwm" WorkloadEndpoint="localhost-k8s-calico--apiserver--859b9fb76c--tctwm-eth0" Sep 13 00:16:59.042884 containerd[1444]: time="2025-09-13T00:16:59.042441602Z" level=info msg="StartContainer for \"19aa30b93cc6ebd32da5c0fd4277f58a319992b80d13eb053dbc9868d92b790d\"" Sep 13 00:16:59.055129 containerd[1444]: time="2025-09-13T00:16:59.054576371Z" level=info msg="CreateContainer within sandbox \"552863bb3112d550909c809b2193e979bc46d0d9f4acd2b78177014c14a75937\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"036a8455a307db789403f1506151c93b5c08940dbdde8758d665d959707a3cc6\"" Sep 13 00:16:59.055860 containerd[1444]: time="2025-09-13T00:16:59.055806331Z" level=info msg="StartContainer for \"036a8455a307db789403f1506151c93b5c08940dbdde8758d665d959707a3cc6\"" Sep 13 00:16:59.063477 containerd[1444]: time="2025-09-13T00:16:59.063050337Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:16:59.063477 containerd[1444]: time="2025-09-13T00:16:59.063383417Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:16:59.063477 containerd[1444]: time="2025-09-13T00:16:59.063424937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:16:59.063636 containerd[1444]: time="2025-09-13T00:16:59.063537257Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:16:59.069788 systemd[1]: Started cri-containerd-19aa30b93cc6ebd32da5c0fd4277f58a319992b80d13eb053dbc9868d92b790d.scope - libcontainer container 19aa30b93cc6ebd32da5c0fd4277f58a319992b80d13eb053dbc9868d92b790d. Sep 13 00:16:59.083759 systemd[1]: Started cri-containerd-6ea8db0aee61cf1b23d675ffaf58f48c494fa9d9b102eae8769caac090de2712.scope - libcontainer container 6ea8db0aee61cf1b23d675ffaf58f48c494fa9d9b102eae8769caac090de2712. Sep 13 00:16:59.087058 systemd[1]: Started cri-containerd-036a8455a307db789403f1506151c93b5c08940dbdde8758d665d959707a3cc6.scope - libcontainer container 036a8455a307db789403f1506151c93b5c08940dbdde8758d665d959707a3cc6. Sep 13 00:16:59.102757 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:16:59.139274 containerd[1444]: time="2025-09-13T00:16:59.139199591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-859b9fb76c-tctwm,Uid:c8e6805e-32c3-4cf6-8f89-9d8311bf375d,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"6ea8db0aee61cf1b23d675ffaf58f48c494fa9d9b102eae8769caac090de2712\"" Sep 13 00:16:59.139653 containerd[1444]: time="2025-09-13T00:16:59.139276751Z" level=info msg="StartContainer for \"19aa30b93cc6ebd32da5c0fd4277f58a319992b80d13eb053dbc9868d92b790d\" returns successfully" Sep 13 00:16:59.139653 containerd[1444]: time="2025-09-13T00:16:59.139226791Z" level=info msg="StartContainer for \"036a8455a307db789403f1506151c93b5c08940dbdde8758d665d959707a3cc6\" returns successfully" Sep 13 00:16:59.559734 systemd-networkd[1380]: cali16afb906159: Gained IPv6LL Sep 13 00:16:59.670344 containerd[1444]: time="2025-09-13T00:16:59.670286973Z" level=info msg="StopPodSandbox for \"5f98a97fe507fc5b0a47df8b5f016508e0f6a34fe089e993814fe3cc835fd60a\"" Sep 13 00:16:59.750706 containerd[1444]: 2025-09-13 00:16:59.713 [INFO][4778] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5f98a97fe507fc5b0a47df8b5f016508e0f6a34fe089e993814fe3cc835fd60a" Sep 13 00:16:59.750706 containerd[1444]: 2025-09-13 00:16:59.714 [INFO][4778] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5f98a97fe507fc5b0a47df8b5f016508e0f6a34fe089e993814fe3cc835fd60a" iface="eth0" netns="/var/run/netns/cni-0cac8fb3-afd3-06c7-44c4-d6f69c1c1c4f" Sep 13 00:16:59.750706 containerd[1444]: 2025-09-13 00:16:59.714 [INFO][4778] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5f98a97fe507fc5b0a47df8b5f016508e0f6a34fe089e993814fe3cc835fd60a" iface="eth0" netns="/var/run/netns/cni-0cac8fb3-afd3-06c7-44c4-d6f69c1c1c4f" Sep 13 00:16:59.750706 containerd[1444]: 2025-09-13 00:16:59.714 [INFO][4778] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="5f98a97fe507fc5b0a47df8b5f016508e0f6a34fe089e993814fe3cc835fd60a" iface="eth0" netns="/var/run/netns/cni-0cac8fb3-afd3-06c7-44c4-d6f69c1c1c4f" Sep 13 00:16:59.750706 containerd[1444]: 2025-09-13 00:16:59.714 [INFO][4778] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5f98a97fe507fc5b0a47df8b5f016508e0f6a34fe089e993814fe3cc835fd60a" Sep 13 00:16:59.750706 containerd[1444]: 2025-09-13 00:16:59.714 [INFO][4778] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5f98a97fe507fc5b0a47df8b5f016508e0f6a34fe089e993814fe3cc835fd60a" Sep 13 00:16:59.750706 containerd[1444]: 2025-09-13 00:16:59.736 [INFO][4787] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5f98a97fe507fc5b0a47df8b5f016508e0f6a34fe089e993814fe3cc835fd60a" HandleID="k8s-pod-network.5f98a97fe507fc5b0a47df8b5f016508e0f6a34fe089e993814fe3cc835fd60a" Workload="localhost-k8s-calico--apiserver--859b9fb76c--hnqvt-eth0" Sep 13 00:16:59.750706 containerd[1444]: 2025-09-13 00:16:59.736 [INFO][4787] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:16:59.750706 containerd[1444]: 2025-09-13 00:16:59.736 [INFO][4787] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:16:59.750706 containerd[1444]: 2025-09-13 00:16:59.745 [WARNING][4787] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5f98a97fe507fc5b0a47df8b5f016508e0f6a34fe089e993814fe3cc835fd60a" HandleID="k8s-pod-network.5f98a97fe507fc5b0a47df8b5f016508e0f6a34fe089e993814fe3cc835fd60a" Workload="localhost-k8s-calico--apiserver--859b9fb76c--hnqvt-eth0" Sep 13 00:16:59.750706 containerd[1444]: 2025-09-13 00:16:59.745 [INFO][4787] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5f98a97fe507fc5b0a47df8b5f016508e0f6a34fe089e993814fe3cc835fd60a" HandleID="k8s-pod-network.5f98a97fe507fc5b0a47df8b5f016508e0f6a34fe089e993814fe3cc835fd60a" Workload="localhost-k8s-calico--apiserver--859b9fb76c--hnqvt-eth0" Sep 13 00:16:59.750706 containerd[1444]: 2025-09-13 00:16:59.746 [INFO][4787] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:16:59.750706 containerd[1444]: 2025-09-13 00:16:59.748 [INFO][4778] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5f98a97fe507fc5b0a47df8b5f016508e0f6a34fe089e993814fe3cc835fd60a" Sep 13 00:16:59.751270 containerd[1444]: time="2025-09-13T00:16:59.750852711Z" level=info msg="TearDown network for sandbox \"5f98a97fe507fc5b0a47df8b5f016508e0f6a34fe089e993814fe3cc835fd60a\" successfully" Sep 13 00:16:59.751270 containerd[1444]: time="2025-09-13T00:16:59.750883511Z" level=info msg="StopPodSandbox for \"5f98a97fe507fc5b0a47df8b5f016508e0f6a34fe089e993814fe3cc835fd60a\" returns successfully" Sep 13 00:16:59.752265 containerd[1444]: time="2025-09-13T00:16:59.751894111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-859b9fb76c-hnqvt,Uid:91ec7ae0-fffa-447c-b970-cf0f2591c90d,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:16:59.776473 systemd[1]: run-netns-cni\x2d0cac8fb3\x2dafd3\x2d06c7\x2d44c4\x2dd6f69c1c1c4f.mount: Deactivated successfully. 
Sep 13 00:16:59.833195 kubelet[2490]: E0913 00:16:59.832278 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:16:59.849536 kubelet[2490]: I0913 00:16:59.848272 2490 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-sczp2" podStartSLOduration=36.84825566 podStartE2EDuration="36.84825566s" podCreationTimestamp="2025-09-13 00:16:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:16:59.8476717 +0000 UTC m=+42.265379228" watchObservedRunningTime="2025-09-13 00:16:59.84825566 +0000 UTC m=+42.265963148" Sep 13 00:16:59.881303 systemd-networkd[1380]: cali7c24b383a16: Link UP Sep 13 00:16:59.882123 systemd-networkd[1380]: cali7c24b383a16: Gained carrier Sep 13 00:16:59.895755 containerd[1444]: 2025-09-13 00:16:59.802 [INFO][4801] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--859b9fb76c--hnqvt-eth0 calico-apiserver-859b9fb76c- calico-apiserver 91ec7ae0-fffa-447c-b970-cf0f2591c90d 1030 0 2025-09-13 00:16:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:859b9fb76c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-859b9fb76c-hnqvt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7c24b383a16 [] [] }} ContainerID="1f630dcccce7b0ba35bf91308813256abe57dfdecc67d61c19ed2990d6fc2935" Namespace="calico-apiserver" Pod="calico-apiserver-859b9fb76c-hnqvt" WorkloadEndpoint="localhost-k8s-calico--apiserver--859b9fb76c--hnqvt-" Sep 13 00:16:59.895755 containerd[1444]: 2025-09-13 00:16:59.802 [INFO][4801] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1f630dcccce7b0ba35bf91308813256abe57dfdecc67d61c19ed2990d6fc2935" Namespace="calico-apiserver" Pod="calico-apiserver-859b9fb76c-hnqvt" WorkloadEndpoint="localhost-k8s-calico--apiserver--859b9fb76c--hnqvt-eth0" Sep 13 00:16:59.895755 containerd[1444]: 2025-09-13 00:16:59.829 [INFO][4810] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1f630dcccce7b0ba35bf91308813256abe57dfdecc67d61c19ed2990d6fc2935" HandleID="k8s-pod-network.1f630dcccce7b0ba35bf91308813256abe57dfdecc67d61c19ed2990d6fc2935" Workload="localhost-k8s-calico--apiserver--859b9fb76c--hnqvt-eth0" Sep 13 00:16:59.895755 containerd[1444]: 2025-09-13 00:16:59.829 [INFO][4810] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1f630dcccce7b0ba35bf91308813256abe57dfdecc67d61c19ed2990d6fc2935" HandleID="k8s-pod-network.1f630dcccce7b0ba35bf91308813256abe57dfdecc67d61c19ed2990d6fc2935" Workload="localhost-k8s-calico--apiserver--859b9fb76c--hnqvt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001376f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-859b9fb76c-hnqvt", "timestamp":"2025-09-13 00:16:59.829158727 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:16:59.895755 
containerd[1444]: 2025-09-13 00:16:59.829 [INFO][4810] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:16:59.895755 containerd[1444]: 2025-09-13 00:16:59.829 [INFO][4810] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:16:59.895755 containerd[1444]: 2025-09-13 00:16:59.829 [INFO][4810] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:16:59.895755 containerd[1444]: 2025-09-13 00:16:59.841 [INFO][4810] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1f630dcccce7b0ba35bf91308813256abe57dfdecc67d61c19ed2990d6fc2935" host="localhost" Sep 13 00:16:59.895755 containerd[1444]: 2025-09-13 00:16:59.845 [INFO][4810] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:16:59.895755 containerd[1444]: 2025-09-13 00:16:59.854 [INFO][4810] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:16:59.895755 containerd[1444]: 2025-09-13 00:16:59.856 [INFO][4810] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:16:59.895755 containerd[1444]: 2025-09-13 00:16:59.858 [INFO][4810] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:16:59.895755 containerd[1444]: 2025-09-13 00:16:59.858 [INFO][4810] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1f630dcccce7b0ba35bf91308813256abe57dfdecc67d61c19ed2990d6fc2935" host="localhost" Sep 13 00:16:59.895755 containerd[1444]: 2025-09-13 00:16:59.861 [INFO][4810] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1f630dcccce7b0ba35bf91308813256abe57dfdecc67d61c19ed2990d6fc2935 Sep 13 00:16:59.895755 containerd[1444]: 2025-09-13 00:16:59.866 [INFO][4810] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1f630dcccce7b0ba35bf91308813256abe57dfdecc67d61c19ed2990d6fc2935" host="localhost" Sep 13 00:16:59.895755 containerd[1444]: 2025-09-13 00:16:59.874 [INFO][4810] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.1f630dcccce7b0ba35bf91308813256abe57dfdecc67d61c19ed2990d6fc2935" host="localhost" Sep 13 00:16:59.895755 containerd[1444]: 2025-09-13 00:16:59.874 [INFO][4810] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.1f630dcccce7b0ba35bf91308813256abe57dfdecc67d61c19ed2990d6fc2935" host="localhost" Sep 13 00:16:59.895755 containerd[1444]: 2025-09-13 00:16:59.874 [INFO][4810] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
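Claims in this log land on consecutive ordinals of the /26: .132, .133, now .134, and .135 further below. Ordinal-to-address is plain offset arithmetic from the block's network address:

```python
import ipaddress

block = ipaddress.ip_network("192.168.88.128/26")
for ordinal in (4, 5, 6, 7):
    print(ordinal, block.network_address + ordinal)
# 4 -> 192.168.88.132, 5 -> .133, 6 -> .134, 7 -> .135
```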
Sep 13 00:16:59.895755 containerd[1444]: 2025-09-13 00:16:59.874 [INFO][4810] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="1f630dcccce7b0ba35bf91308813256abe57dfdecc67d61c19ed2990d6fc2935" HandleID="k8s-pod-network.1f630dcccce7b0ba35bf91308813256abe57dfdecc67d61c19ed2990d6fc2935" Workload="localhost-k8s-calico--apiserver--859b9fb76c--hnqvt-eth0" Sep 13 00:16:59.896772 containerd[1444]: 2025-09-13 00:16:59.878 [INFO][4801] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1f630dcccce7b0ba35bf91308813256abe57dfdecc67d61c19ed2990d6fc2935" Namespace="calico-apiserver" Pod="calico-apiserver-859b9fb76c-hnqvt" WorkloadEndpoint="localhost-k8s-calico--apiserver--859b9fb76c--hnqvt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--859b9fb76c--hnqvt-eth0", GenerateName:"calico-apiserver-859b9fb76c-", Namespace:"calico-apiserver", SelfLink:"", UID:"91ec7ae0-fffa-447c-b970-cf0f2591c90d", ResourceVersion:"1030", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 16, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"859b9fb76c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-859b9fb76c-hnqvt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7c24b383a16", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:16:59.896772 containerd[1444]: 2025-09-13 00:16:59.878 [INFO][4801] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="1f630dcccce7b0ba35bf91308813256abe57dfdecc67d61c19ed2990d6fc2935" Namespace="calico-apiserver" Pod="calico-apiserver-859b9fb76c-hnqvt" WorkloadEndpoint="localhost-k8s-calico--apiserver--859b9fb76c--hnqvt-eth0" Sep 13 00:16:59.896772 containerd[1444]: 2025-09-13 00:16:59.878 [INFO][4801] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7c24b383a16 ContainerID="1f630dcccce7b0ba35bf91308813256abe57dfdecc67d61c19ed2990d6fc2935" Namespace="calico-apiserver" Pod="calico-apiserver-859b9fb76c-hnqvt" WorkloadEndpoint="localhost-k8s-calico--apiserver--859b9fb76c--hnqvt-eth0" Sep 13 00:16:59.896772 containerd[1444]: 2025-09-13 00:16:59.882 [INFO][4801] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1f630dcccce7b0ba35bf91308813256abe57dfdecc67d61c19ed2990d6fc2935" Namespace="calico-apiserver" Pod="calico-apiserver-859b9fb76c-hnqvt" WorkloadEndpoint="localhost-k8s-calico--apiserver--859b9fb76c--hnqvt-eth0" Sep 13 00:16:59.896772 containerd[1444]: 2025-09-13 00:16:59.882 [INFO][4801] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="1f630dcccce7b0ba35bf91308813256abe57dfdecc67d61c19ed2990d6fc2935" Namespace="calico-apiserver" Pod="calico-apiserver-859b9fb76c-hnqvt" WorkloadEndpoint="localhost-k8s-calico--apiserver--859b9fb76c--hnqvt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--859b9fb76c--hnqvt-eth0", GenerateName:"calico-apiserver-859b9fb76c-", Namespace:"calico-apiserver", SelfLink:"", UID:"91ec7ae0-fffa-447c-b970-cf0f2591c90d", ResourceVersion:"1030", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 16, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"859b9fb76c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1f630dcccce7b0ba35bf91308813256abe57dfdecc67d61c19ed2990d6fc2935", Pod:"calico-apiserver-859b9fb76c-hnqvt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7c24b383a16", MAC:"2a:4d:92:54:79:98", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:16:59.896772 containerd[1444]: 2025-09-13 00:16:59.893 [INFO][4801] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1f630dcccce7b0ba35bf91308813256abe57dfdecc67d61c19ed2990d6fc2935" Namespace="calico-apiserver" Pod="calico-apiserver-859b9fb76c-hnqvt" WorkloadEndpoint="localhost-k8s-calico--apiserver--859b9fb76c--hnqvt-eth0" Sep 13 00:16:59.916502 containerd[1444]: time="2025-09-13T00:16:59.916402669Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:16:59.916502 containerd[1444]: time="2025-09-13T00:16:59.916467229Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:16:59.916502 containerd[1444]: time="2025-09-13T00:16:59.916481949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:16:59.917357 containerd[1444]: time="2025-09-13T00:16:59.917199630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:16:59.940780 systemd[1]: Started cri-containerd-1f630dcccce7b0ba35bf91308813256abe57dfdecc67d61c19ed2990d6fc2935.scope - libcontainer container 1f630dcccce7b0ba35bf91308813256abe57dfdecc67d61c19ed2990d6fc2935. 
Sep 13 00:16:59.951986 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:16:59.973808 containerd[1444]: time="2025-09-13T00:16:59.973768231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-859b9fb76c-hnqvt,Uid:91ec7ae0-fffa-447c-b970-cf0f2591c90d,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"1f630dcccce7b0ba35bf91308813256abe57dfdecc67d61c19ed2990d6fc2935\"" Sep 13 00:17:00.264109 systemd-networkd[1380]: calia42bbcf6463: Gained IPv6LL Sep 13 00:17:00.468931 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4116970624.mount: Deactivated successfully. Sep 13 00:17:00.520733 systemd-networkd[1380]: cali3b2bdcbd070: Gained IPv6LL Sep 13 00:17:00.669997 containerd[1444]: time="2025-09-13T00:17:00.669915581Z" level=info msg="StopPodSandbox for \"48be175f15263d25946c5fafd5d93ea0ba82b221a8a78f549856a0774ad11b96\"" Sep 13 00:17:00.670271 containerd[1444]: time="2025-09-13T00:17:00.670044021Z" level=info msg="StopPodSandbox for \"b46982dd6853b5e80e42bf4510f8168884956bedb7fb50987d00163f9bd6678a\"" Sep 13 00:17:00.772480 containerd[1444]: 2025-09-13 00:17:00.728 [INFO][4903] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="48be175f15263d25946c5fafd5d93ea0ba82b221a8a78f549856a0774ad11b96" Sep 13 00:17:00.772480 containerd[1444]: 2025-09-13 00:17:00.729 [INFO][4903] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="48be175f15263d25946c5fafd5d93ea0ba82b221a8a78f549856a0774ad11b96" iface="eth0" netns="/var/run/netns/cni-75c36574-d24a-be33-ec55-aec1a8261e2f" Sep 13 00:17:00.772480 containerd[1444]: 2025-09-13 00:17:00.730 [INFO][4903] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="48be175f15263d25946c5fafd5d93ea0ba82b221a8a78f549856a0774ad11b96" iface="eth0" netns="/var/run/netns/cni-75c36574-d24a-be33-ec55-aec1a8261e2f" Sep 13 00:17:00.772480 containerd[1444]: 2025-09-13 00:17:00.731 [INFO][4903] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="48be175f15263d25946c5fafd5d93ea0ba82b221a8a78f549856a0774ad11b96" iface="eth0" netns="/var/run/netns/cni-75c36574-d24a-be33-ec55-aec1a8261e2f" Sep 13 00:17:00.772480 containerd[1444]: 2025-09-13 00:17:00.732 [INFO][4903] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="48be175f15263d25946c5fafd5d93ea0ba82b221a8a78f549856a0774ad11b96" Sep 13 00:17:00.772480 containerd[1444]: 2025-09-13 00:17:00.732 [INFO][4903] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="48be175f15263d25946c5fafd5d93ea0ba82b221a8a78f549856a0774ad11b96" Sep 13 00:17:00.772480 containerd[1444]: 2025-09-13 00:17:00.754 [INFO][4915] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="48be175f15263d25946c5fafd5d93ea0ba82b221a8a78f549856a0774ad11b96" HandleID="k8s-pod-network.48be175f15263d25946c5fafd5d93ea0ba82b221a8a78f549856a0774ad11b96" Workload="localhost-k8s-coredns--674b8bbfcf--qj2mh-eth0" Sep 13 00:17:00.772480 containerd[1444]: 2025-09-13 00:17:00.754 [INFO][4915] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:17:00.772480 containerd[1444]: 2025-09-13 00:17:00.754 [INFO][4915] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:17:00.772480 containerd[1444]: 2025-09-13 00:17:00.763 [WARNING][4915] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="48be175f15263d25946c5fafd5d93ea0ba82b221a8a78f549856a0774ad11b96" HandleID="k8s-pod-network.48be175f15263d25946c5fafd5d93ea0ba82b221a8a78f549856a0774ad11b96" Workload="localhost-k8s-coredns--674b8bbfcf--qj2mh-eth0" Sep 13 00:17:00.772480 containerd[1444]: 2025-09-13 00:17:00.763 [INFO][4915] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="48be175f15263d25946c5fafd5d93ea0ba82b221a8a78f549856a0774ad11b96" HandleID="k8s-pod-network.48be175f15263d25946c5fafd5d93ea0ba82b221a8a78f549856a0774ad11b96" Workload="localhost-k8s-coredns--674b8bbfcf--qj2mh-eth0" Sep 13 00:17:00.772480 containerd[1444]: 2025-09-13 00:17:00.765 [INFO][4915] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:17:00.772480 containerd[1444]: 2025-09-13 00:17:00.767 [INFO][4903] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="48be175f15263d25946c5fafd5d93ea0ba82b221a8a78f549856a0774ad11b96" Sep 13 00:17:00.772480 containerd[1444]: time="2025-09-13T00:17:00.772152849Z" level=info msg="TearDown network for sandbox \"48be175f15263d25946c5fafd5d93ea0ba82b221a8a78f549856a0774ad11b96\" successfully" Sep 13 00:17:00.772480 containerd[1444]: time="2025-09-13T00:17:00.772180609Z" level=info msg="StopPodSandbox for \"48be175f15263d25946c5fafd5d93ea0ba82b221a8a78f549856a0774ad11b96\" returns successfully" Sep 13 00:17:00.773460 kubelet[2490]: E0913 00:17:00.772471 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:00.774514 systemd[1]: run-netns-cni\x2d75c36574\x2dd24a\x2dbe33\x2dec55\x2daec1a8261e2f.mount: Deactivated successfully. Sep 13 00:17:00.776923 containerd[1444]: time="2025-09-13T00:17:00.775581972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qj2mh,Uid:da7a220d-d282-4f5a-9d7b-1fe40051e284,Namespace:kube-system,Attempt:1,}" Sep 13 00:17:00.794403 containerd[1444]: 2025-09-13 00:17:00.732 [INFO][4898] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b46982dd6853b5e80e42bf4510f8168884956bedb7fb50987d00163f9bd6678a" Sep 13 00:17:00.794403 containerd[1444]: 2025-09-13 00:17:00.732 [INFO][4898] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b46982dd6853b5e80e42bf4510f8168884956bedb7fb50987d00163f9bd6678a" iface="eth0" netns="/var/run/netns/cni-0be5f6b4-0e50-c19f-d88d-3bcc8aaab60d" Sep 13 00:17:00.794403 containerd[1444]: 2025-09-13 00:17:00.733 [INFO][4898] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b46982dd6853b5e80e42bf4510f8168884956bedb7fb50987d00163f9bd6678a" iface="eth0" netns="/var/run/netns/cni-0be5f6b4-0e50-c19f-d88d-3bcc8aaab60d" Sep 13 00:17:00.794403 containerd[1444]: 2025-09-13 00:17:00.733 [INFO][4898] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b46982dd6853b5e80e42bf4510f8168884956bedb7fb50987d00163f9bd6678a" iface="eth0" netns="/var/run/netns/cni-0be5f6b4-0e50-c19f-d88d-3bcc8aaab60d" Sep 13 00:17:00.794403 containerd[1444]: 2025-09-13 00:17:00.733 [INFO][4898] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b46982dd6853b5e80e42bf4510f8168884956bedb7fb50987d00163f9bd6678a" Sep 13 00:17:00.794403 containerd[1444]: 2025-09-13 00:17:00.733 [INFO][4898] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b46982dd6853b5e80e42bf4510f8168884956bedb7fb50987d00163f9bd6678a" Sep 13 00:17:00.794403 containerd[1444]: 2025-09-13 00:17:00.775 [INFO][4917] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b46982dd6853b5e80e42bf4510f8168884956bedb7fb50987d00163f9bd6678a" HandleID="k8s-pod-network.b46982dd6853b5e80e42bf4510f8168884956bedb7fb50987d00163f9bd6678a" Workload="localhost-k8s-calico--kube--controllers--fd6458858--dk8sk-eth0" Sep 13 00:17:00.794403 containerd[1444]: 2025-09-13 00:17:00.775 [INFO][4917] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:17:00.794403 containerd[1444]: 2025-09-13 00:17:00.776 [INFO][4917] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:17:00.794403 containerd[1444]: 2025-09-13 00:17:00.787 [WARNING][4917] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b46982dd6853b5e80e42bf4510f8168884956bedb7fb50987d00163f9bd6678a" HandleID="k8s-pod-network.b46982dd6853b5e80e42bf4510f8168884956bedb7fb50987d00163f9bd6678a" Workload="localhost-k8s-calico--kube--controllers--fd6458858--dk8sk-eth0" Sep 13 00:17:00.794403 containerd[1444]: 2025-09-13 00:17:00.787 [INFO][4917] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b46982dd6853b5e80e42bf4510f8168884956bedb7fb50987d00163f9bd6678a" HandleID="k8s-pod-network.b46982dd6853b5e80e42bf4510f8168884956bedb7fb50987d00163f9bd6678a" Workload="localhost-k8s-calico--kube--controllers--fd6458858--dk8sk-eth0" Sep 13 00:17:00.794403 containerd[1444]: 2025-09-13 00:17:00.789 [INFO][4917] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:17:00.794403 containerd[1444]: 2025-09-13 00:17:00.791 [INFO][4898] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b46982dd6853b5e80e42bf4510f8168884956bedb7fb50987d00163f9bd6678a" Sep 13 00:17:00.795828 containerd[1444]: time="2025-09-13T00:17:00.794636344Z" level=info msg="TearDown network for sandbox \"b46982dd6853b5e80e42bf4510f8168884956bedb7fb50987d00163f9bd6678a\" successfully" Sep 13 00:17:00.795828 containerd[1444]: time="2025-09-13T00:17:00.794663545Z" level=info msg="StopPodSandbox for \"b46982dd6853b5e80e42bf4510f8168884956bedb7fb50987d00163f9bd6678a\" returns successfully" Sep 13 00:17:00.796983 systemd[1]: run-netns-cni\x2d0be5f6b4\x2d0e50\x2dc19f\x2dd88d\x2d3bcc8aaab60d.mount: Deactivated successfully. 
Sep 13 00:17:00.797743 containerd[1444]: time="2025-09-13T00:17:00.797198866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-fd6458858-dk8sk,Uid:68701d44-98db-4175-b680-f21da2b19c48,Namespace:calico-system,Attempt:1,}" Sep 13 00:17:00.841034 kubelet[2490]: E0913 00:17:00.840998 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:00.905438 containerd[1444]: time="2025-09-13T00:17:00.904921899Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:17:00.906971 containerd[1444]: time="2025-09-13T00:17:00.906897740Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332" Sep 13 00:17:00.908017 containerd[1444]: time="2025-09-13T00:17:00.907887741Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:17:00.910624 containerd[1444]: time="2025-09-13T00:17:00.910584583Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:17:00.912514 containerd[1444]: time="2025-09-13T00:17:00.912471584Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 1.901589165s" Sep 13 00:17:00.912613 containerd[1444]: time="2025-09-13T00:17:00.912518064Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\"" Sep 13 00:17:00.914547 containerd[1444]: time="2025-09-13T00:17:00.914002705Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:17:00.934900 containerd[1444]: time="2025-09-13T00:17:00.934838559Z" level=info msg="CreateContainer within sandbox \"5f15513d95ad8550fcb083b05d4a8653f94a421f739ec19edb1ae3fe436843a4\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 13 00:17:00.990417 containerd[1444]: time="2025-09-13T00:17:00.990366636Z" level=info msg="CreateContainer within sandbox \"5f15513d95ad8550fcb083b05d4a8653f94a421f739ec19edb1ae3fe436843a4\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"896c8ec7edb65aed88c1552383a611c8407652b15845d7258823ba7374114da1\"" Sep 13 00:17:00.991506 containerd[1444]: time="2025-09-13T00:17:00.991109237Z" level=info msg="StartContainer for \"896c8ec7edb65aed88c1552383a611c8407652b15845d7258823ba7374114da1\"" Sep 13 00:17:01.029792 systemd[1]: Started cri-containerd-896c8ec7edb65aed88c1552383a611c8407652b15845d7258823ba7374114da1.scope - libcontainer container 896c8ec7edb65aed88c1552383a611c8407652b15845d7258823ba7374114da1. 
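From the goldmane pull above: 61,845,332 bytes read over a 1.901589165 s pull works out to roughly 31 MiB/s:

```python
bytes_read = 61_845_332
seconds = 1.901589165
print(f"{bytes_read / seconds / 2**20:.1f} MiB/s")  # ~31.0 MiB/s
```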
Sep 13 00:17:01.030818 systemd-networkd[1380]: calidef2df53869: Link UP Sep 13 00:17:01.031034 systemd-networkd[1380]: calidef2df53869: Gained carrier Sep 13 00:17:01.046763 containerd[1444]: 2025-09-13 00:17:00.942 [INFO][4937] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--qj2mh-eth0 coredns-674b8bbfcf- kube-system da7a220d-d282-4f5a-9d7b-1fe40051e284 1047 0 2025-09-13 00:16:23 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-qj2mh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calidef2df53869 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="f723fc0f611c8d0a1c03a5963f3f921529d994be4b99527a28a851911f60330a" Namespace="kube-system" Pod="coredns-674b8bbfcf-qj2mh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qj2mh-" Sep 13 00:17:01.046763 containerd[1444]: 2025-09-13 00:17:00.942 [INFO][4937] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f723fc0f611c8d0a1c03a5963f3f921529d994be4b99527a28a851911f60330a" Namespace="kube-system" Pod="coredns-674b8bbfcf-qj2mh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qj2mh-eth0" Sep 13 00:17:01.046763 containerd[1444]: 2025-09-13 00:17:00.965 [INFO][4966] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f723fc0f611c8d0a1c03a5963f3f921529d994be4b99527a28a851911f60330a" HandleID="k8s-pod-network.f723fc0f611c8d0a1c03a5963f3f921529d994be4b99527a28a851911f60330a" Workload="localhost-k8s-coredns--674b8bbfcf--qj2mh-eth0" Sep 13 00:17:01.046763 containerd[1444]: 2025-09-13 00:17:00.969 [INFO][4966] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f723fc0f611c8d0a1c03a5963f3f921529d994be4b99527a28a851911f60330a" HandleID="k8s-pod-network.f723fc0f611c8d0a1c03a5963f3f921529d994be4b99527a28a851911f60330a" Workload="localhost-k8s-coredns--674b8bbfcf--qj2mh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c32a0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-qj2mh", "timestamp":"2025-09-13 00:17:00.96575574 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:17:01.046763 containerd[1444]: 2025-09-13 00:17:00.969 [INFO][4966] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:17:01.046763 containerd[1444]: 2025-09-13 00:17:00.969 [INFO][4966] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:17:01.046763 containerd[1444]: 2025-09-13 00:17:00.969 [INFO][4966] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:17:01.046763 containerd[1444]: 2025-09-13 00:17:00.978 [INFO][4966] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f723fc0f611c8d0a1c03a5963f3f921529d994be4b99527a28a851911f60330a" host="localhost" Sep 13 00:17:01.046763 containerd[1444]: 2025-09-13 00:17:00.982 [INFO][4966] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:17:01.046763 containerd[1444]: 2025-09-13 00:17:00.989 [INFO][4966] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:17:01.046763 containerd[1444]: 2025-09-13 00:17:00.991 [INFO][4966] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:17:01.046763 containerd[1444]: 2025-09-13 00:17:00.994 [INFO][4966] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:17:01.046763 containerd[1444]: 2025-09-13 00:17:00.994 [INFO][4966] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f723fc0f611c8d0a1c03a5963f3f921529d994be4b99527a28a851911f60330a" host="localhost" Sep 13 00:17:01.046763 containerd[1444]: 2025-09-13 00:17:00.998 [INFO][4966] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f723fc0f611c8d0a1c03a5963f3f921529d994be4b99527a28a851911f60330a Sep 13 00:17:01.046763 containerd[1444]: 2025-09-13 00:17:01.008 [INFO][4966] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f723fc0f611c8d0a1c03a5963f3f921529d994be4b99527a28a851911f60330a" host="localhost" Sep 13 00:17:01.046763 containerd[1444]: 2025-09-13 00:17:01.016 [INFO][4966] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.f723fc0f611c8d0a1c03a5963f3f921529d994be4b99527a28a851911f60330a" host="localhost" Sep 13 00:17:01.046763 containerd[1444]: 2025-09-13 00:17:01.022 [INFO][4966] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.f723fc0f611c8d0a1c03a5963f3f921529d994be4b99527a28a851911f60330a" host="localhost" Sep 13 00:17:01.046763 containerd[1444]: 2025-09-13 00:17:01.022 [INFO][4966] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
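The numbered ipam.go steps traced above — look up the host's affinities, load the affine 192.168.88.128/26 block, claim an address, write the block back, all between "Acquired" and "Released host-wide IPAM lock" — reduce to a small algorithm. A minimal, self-contained Go model of that flow, with hypothetical types rather than Calico's real ones:

    // ipam_sketch.go — a toy model of the assignment flow the ipam.go log
    // lines above trace; not Calico's actual code.
    package main

    import (
    	"fmt"
    	"net"
    	"sync"
    )

    type block struct {
    	cidr     *net.IPNet      // e.g. 192.168.88.128/26
    	assigned map[string]bool // addresses already handed out
    }

    var hostLock sync.Mutex // stands in for the "host-wide IPAM lock"

    // assign mirrors the logged steps: lock, scan the host-affine block for
    // the next free address, record the claim, unlock.
    func assign(b *block, handle string) (net.IP, error) {
    	hostLock.Lock()         // "Acquired host-wide IPAM lock."
    	defer hostLock.Unlock() // "Released host-wide IPAM lock."

    	ip := b.cidr.IP.Mask(b.cidr.Mask)
    	for ; b.cidr.Contains(ip); ip = next(ip) {
    		if !b.assigned[ip.String()] {
    			b.assigned[ip.String()] = true // "Writing block in order to claim IPs"
    			fmt.Printf("claimed %s for handle %s\n", ip, handle)
    			return ip, nil
    		}
    	}
    	return nil, fmt.Errorf("block %s exhausted", b.cidr)
    }

    // next returns the numerically following IP address.
    func next(ip net.IP) net.IP {
    	out := make(net.IP, len(ip))
    	copy(out, ip)
    	for i := len(out) - 1; i >= 0; i-- {
    		out[i]++
    		if out[i] != 0 {
    			break
    		}
    	}
    	return out
    }

    func main() {
    	_, cidr, _ := net.ParseCIDR("192.168.88.128/26")
    	b := &block{cidr: cidr, assigned: map[string]bool{}}
    	// pretend .128–.134 are taken, as in the log where .135 is the next claim
    	for i := 128; i <= 134; i++ {
    		b.assigned[fmt.Sprintf("192.168.88.%d", i)] = true
    	}
    	assign(b, "k8s-pod-network.f723fc0f…")
    }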
Sep 13 00:17:01.046763 containerd[1444]: 2025-09-13 00:17:01.022 [INFO][4966] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="f723fc0f611c8d0a1c03a5963f3f921529d994be4b99527a28a851911f60330a" HandleID="k8s-pod-network.f723fc0f611c8d0a1c03a5963f3f921529d994be4b99527a28a851911f60330a" Workload="localhost-k8s-coredns--674b8bbfcf--qj2mh-eth0" Sep 13 00:17:01.047360 containerd[1444]: 2025-09-13 00:17:01.027 [INFO][4937] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f723fc0f611c8d0a1c03a5963f3f921529d994be4b99527a28a851911f60330a" Namespace="kube-system" Pod="coredns-674b8bbfcf-qj2mh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qj2mh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--qj2mh-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"da7a220d-d282-4f5a-9d7b-1fe40051e284", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 16, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-qj2mh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidef2df53869", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:17:01.047360 containerd[1444]: 2025-09-13 00:17:01.027 [INFO][4937] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="f723fc0f611c8d0a1c03a5963f3f921529d994be4b99527a28a851911f60330a" Namespace="kube-system" Pod="coredns-674b8bbfcf-qj2mh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qj2mh-eth0" Sep 13 00:17:01.047360 containerd[1444]: 2025-09-13 00:17:01.027 [INFO][4937] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidef2df53869 ContainerID="f723fc0f611c8d0a1c03a5963f3f921529d994be4b99527a28a851911f60330a" Namespace="kube-system" Pod="coredns-674b8bbfcf-qj2mh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qj2mh-eth0" Sep 13 00:17:01.047360 containerd[1444]: 2025-09-13 00:17:01.031 [INFO][4937] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f723fc0f611c8d0a1c03a5963f3f921529d994be4b99527a28a851911f60330a" Namespace="kube-system" Pod="coredns-674b8bbfcf-qj2mh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qj2mh-eth0" Sep 13 00:17:01.047360 
containerd[1444]: 2025-09-13 00:17:01.032 [INFO][4937] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f723fc0f611c8d0a1c03a5963f3f921529d994be4b99527a28a851911f60330a" Namespace="kube-system" Pod="coredns-674b8bbfcf-qj2mh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qj2mh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--qj2mh-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"da7a220d-d282-4f5a-9d7b-1fe40051e284", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 16, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f723fc0f611c8d0a1c03a5963f3f921529d994be4b99527a28a851911f60330a", Pod:"coredns-674b8bbfcf-qj2mh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidef2df53869", MAC:"ea:da:84:f5:d4:94", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:17:01.047360 containerd[1444]: 2025-09-13 00:17:01.041 [INFO][4937] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f723fc0f611c8d0a1c03a5963f3f921529d994be4b99527a28a851911f60330a" Namespace="kube-system" Pod="coredns-674b8bbfcf-qj2mh" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--qj2mh-eth0" Sep 13 00:17:01.067341 containerd[1444]: time="2025-09-13T00:17:01.067066405Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:17:01.067341 containerd[1444]: time="2025-09-13T00:17:01.067234925Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:17:01.067341 containerd[1444]: time="2025-09-13T00:17:01.067247645Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:17:01.067955 containerd[1444]: time="2025-09-13T00:17:01.067800766Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:17:01.069872 containerd[1444]: time="2025-09-13T00:17:01.069830887Z" level=info msg="StartContainer for \"896c8ec7edb65aed88c1552383a611c8407652b15845d7258823ba7374114da1\" returns successfully" Sep 13 00:17:01.106998 systemd[1]: Started cri-containerd-f723fc0f611c8d0a1c03a5963f3f921529d994be4b99527a28a851911f60330a.scope - libcontainer container f723fc0f611c8d0a1c03a5963f3f921529d994be4b99527a28a851911f60330a. Sep 13 00:17:01.125773 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:17:01.127632 systemd-networkd[1380]: calia9d34eff079: Link UP Sep 13 00:17:01.128140 systemd-networkd[1380]: calia9d34eff079: Gained carrier Sep 13 00:17:01.152987 containerd[1444]: 2025-09-13 00:17:00.945 [INFO][4946] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--fd6458858--dk8sk-eth0 calico-kube-controllers-fd6458858- calico-system 68701d44-98db-4175-b680-f21da2b19c48 1048 0 2025-09-13 00:16:38 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:fd6458858 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-fd6458858-dk8sk eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia9d34eff079 [] [] }} ContainerID="1020e36dbd60caf926d432a588d129cfe36dcc8a183375db37db834a7313280c" Namespace="calico-system" Pod="calico-kube-controllers-fd6458858-dk8sk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--fd6458858--dk8sk-" Sep 13 00:17:01.152987 containerd[1444]: 2025-09-13 00:17:00.945 [INFO][4946] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1020e36dbd60caf926d432a588d129cfe36dcc8a183375db37db834a7313280c" Namespace="calico-system" Pod="calico-kube-controllers-fd6458858-dk8sk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--fd6458858--dk8sk-eth0" Sep 13 00:17:01.152987 containerd[1444]: 2025-09-13 00:17:01.011 [INFO][4974] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1020e36dbd60caf926d432a588d129cfe36dcc8a183375db37db834a7313280c" HandleID="k8s-pod-network.1020e36dbd60caf926d432a588d129cfe36dcc8a183375db37db834a7313280c" Workload="localhost-k8s-calico--kube--controllers--fd6458858--dk8sk-eth0" Sep 13 00:17:01.152987 containerd[1444]: 2025-09-13 00:17:01.011 [INFO][4974] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1020e36dbd60caf926d432a588d129cfe36dcc8a183375db37db834a7313280c" HandleID="k8s-pod-network.1020e36dbd60caf926d432a588d129cfe36dcc8a183375db37db834a7313280c" Workload="localhost-k8s-calico--kube--controllers--fd6458858--dk8sk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3600), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-fd6458858-dk8sk", "timestamp":"2025-09-13 00:17:01.01173273 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:17:01.152987 containerd[1444]: 2025-09-13 00:17:01.011 [INFO][4974] ipam/ipam_plugin.go 353: About to acquire host-wide 
IPAM lock. Sep 13 00:17:01.152987 containerd[1444]: 2025-09-13 00:17:01.023 [INFO][4974] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:17:01.152987 containerd[1444]: 2025-09-13 00:17:01.023 [INFO][4974] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:17:01.152987 containerd[1444]: 2025-09-13 00:17:01.080 [INFO][4974] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1020e36dbd60caf926d432a588d129cfe36dcc8a183375db37db834a7313280c" host="localhost" Sep 13 00:17:01.152987 containerd[1444]: 2025-09-13 00:17:01.085 [INFO][4974] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:17:01.152987 containerd[1444]: 2025-09-13 00:17:01.093 [INFO][4974] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:17:01.152987 containerd[1444]: 2025-09-13 00:17:01.095 [INFO][4974] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:17:01.152987 containerd[1444]: 2025-09-13 00:17:01.098 [INFO][4974] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:17:01.152987 containerd[1444]: 2025-09-13 00:17:01.098 [INFO][4974] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1020e36dbd60caf926d432a588d129cfe36dcc8a183375db37db834a7313280c" host="localhost" Sep 13 00:17:01.152987 containerd[1444]: 2025-09-13 00:17:01.100 [INFO][4974] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1020e36dbd60caf926d432a588d129cfe36dcc8a183375db37db834a7313280c Sep 13 00:17:01.152987 containerd[1444]: 2025-09-13 00:17:01.106 [INFO][4974] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1020e36dbd60caf926d432a588d129cfe36dcc8a183375db37db834a7313280c" host="localhost" Sep 13 00:17:01.152987 containerd[1444]: 2025-09-13 00:17:01.116 [INFO][4974] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.1020e36dbd60caf926d432a588d129cfe36dcc8a183375db37db834a7313280c" host="localhost" Sep 13 00:17:01.152987 containerd[1444]: 2025-09-13 00:17:01.116 [INFO][4974] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.1020e36dbd60caf926d432a588d129cfe36dcc8a183375db37db834a7313280c" host="localhost" Sep 13 00:17:01.152987 containerd[1444]: 2025-09-13 00:17:01.116 [INFO][4974] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
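Both assignments come out of the same host-affine block: 192.168.88.128/26 spans 2^(32-26) = 64 addresses, .128 through .191, and the two handles above received the next free ones — .135 for coredns-674b8bbfcf-qj2mh and .136 for calico-kube-controllers-fd6458858-dk8sk. Earlier pods on this node hold lower addresses in the block, e.g. .130 and .133 in the endpoint dumps further down.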
Sep 13 00:17:01.152987 containerd[1444]: 2025-09-13 00:17:01.116 [INFO][4974] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="1020e36dbd60caf926d432a588d129cfe36dcc8a183375db37db834a7313280c" HandleID="k8s-pod-network.1020e36dbd60caf926d432a588d129cfe36dcc8a183375db37db834a7313280c" Workload="localhost-k8s-calico--kube--controllers--fd6458858--dk8sk-eth0" Sep 13 00:17:01.153542 containerd[1444]: 2025-09-13 00:17:01.123 [INFO][4946] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1020e36dbd60caf926d432a588d129cfe36dcc8a183375db37db834a7313280c" Namespace="calico-system" Pod="calico-kube-controllers-fd6458858-dk8sk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--fd6458858--dk8sk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--fd6458858--dk8sk-eth0", GenerateName:"calico-kube-controllers-fd6458858-", Namespace:"calico-system", SelfLink:"", UID:"68701d44-98db-4175-b680-f21da2b19c48", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 16, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"fd6458858", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-fd6458858-dk8sk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia9d34eff079", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:17:01.153542 containerd[1444]: 2025-09-13 00:17:01.124 [INFO][4946] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="1020e36dbd60caf926d432a588d129cfe36dcc8a183375db37db834a7313280c" Namespace="calico-system" Pod="calico-kube-controllers-fd6458858-dk8sk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--fd6458858--dk8sk-eth0" Sep 13 00:17:01.153542 containerd[1444]: 2025-09-13 00:17:01.124 [INFO][4946] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia9d34eff079 ContainerID="1020e36dbd60caf926d432a588d129cfe36dcc8a183375db37db834a7313280c" Namespace="calico-system" Pod="calico-kube-controllers-fd6458858-dk8sk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--fd6458858--dk8sk-eth0" Sep 13 00:17:01.153542 containerd[1444]: 2025-09-13 00:17:01.128 [INFO][4946] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1020e36dbd60caf926d432a588d129cfe36dcc8a183375db37db834a7313280c" Namespace="calico-system" Pod="calico-kube-controllers-fd6458858-dk8sk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--fd6458858--dk8sk-eth0" Sep 13 00:17:01.153542 containerd[1444]: 2025-09-13 00:17:01.129 [INFO][4946] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="1020e36dbd60caf926d432a588d129cfe36dcc8a183375db37db834a7313280c" Namespace="calico-system" Pod="calico-kube-controllers-fd6458858-dk8sk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--fd6458858--dk8sk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--fd6458858--dk8sk-eth0", GenerateName:"calico-kube-controllers-fd6458858-", Namespace:"calico-system", SelfLink:"", UID:"68701d44-98db-4175-b680-f21da2b19c48", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 16, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"fd6458858", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1020e36dbd60caf926d432a588d129cfe36dcc8a183375db37db834a7313280c", Pod:"calico-kube-controllers-fd6458858-dk8sk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia9d34eff079", MAC:"2a:10:2d:1a:b5:47", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:17:01.153542 containerd[1444]: 2025-09-13 00:17:01.144 [INFO][4946] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1020e36dbd60caf926d432a588d129cfe36dcc8a183375db37db834a7313280c" Namespace="calico-system" Pod="calico-kube-controllers-fd6458858-dk8sk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--fd6458858--dk8sk-eth0" Sep 13 00:17:01.155195 containerd[1444]: time="2025-09-13T00:17:01.155159181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qj2mh,Uid:da7a220d-d282-4f5a-9d7b-1fe40051e284,Namespace:kube-system,Attempt:1,} returns sandbox id \"f723fc0f611c8d0a1c03a5963f3f921529d994be4b99527a28a851911f60330a\"" Sep 13 00:17:01.156241 kubelet[2490]: E0913 00:17:01.156046 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:01.159808 systemd-networkd[1380]: cali7c24b383a16: Gained IPv6LL Sep 13 00:17:01.161274 containerd[1444]: time="2025-09-13T00:17:01.160349264Z" level=info msg="CreateContainer within sandbox \"f723fc0f611c8d0a1c03a5963f3f921529d994be4b99527a28a851911f60330a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:17:01.174391 containerd[1444]: time="2025-09-13T00:17:01.174297433Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:17:01.174391 containerd[1444]: time="2025-09-13T00:17:01.174352153Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:17:01.174391 containerd[1444]: time="2025-09-13T00:17:01.174367153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:17:01.174812 containerd[1444]: time="2025-09-13T00:17:01.174720313Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:17:01.181109 containerd[1444]: time="2025-09-13T00:17:01.181070797Z" level=info msg="CreateContainer within sandbox \"f723fc0f611c8d0a1c03a5963f3f921529d994be4b99527a28a851911f60330a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c6ca798ed752505a1d053721f2d041ab8d4ebff1cb1a8aa7d5eb62b8b9069498\"" Sep 13 00:17:01.181565 containerd[1444]: time="2025-09-13T00:17:01.181543957Z" level=info msg="StartContainer for \"c6ca798ed752505a1d053721f2d041ab8d4ebff1cb1a8aa7d5eb62b8b9069498\"" Sep 13 00:17:01.198755 systemd[1]: Started cri-containerd-1020e36dbd60caf926d432a588d129cfe36dcc8a183375db37db834a7313280c.scope - libcontainer container 1020e36dbd60caf926d432a588d129cfe36dcc8a183375db37db834a7313280c. Sep 13 00:17:01.205997 systemd[1]: Started cri-containerd-c6ca798ed752505a1d053721f2d041ab8d4ebff1cb1a8aa7d5eb62b8b9069498.scope - libcontainer container c6ca798ed752505a1d053721f2d041ab8d4ebff1cb1a8aa7d5eb62b8b9069498. Sep 13 00:17:01.213199 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:17:01.251871 containerd[1444]: time="2025-09-13T00:17:01.251827802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-fd6458858-dk8sk,Uid:68701d44-98db-4175-b680-f21da2b19c48,Namespace:calico-system,Attempt:1,} returns sandbox id \"1020e36dbd60caf926d432a588d129cfe36dcc8a183375db37db834a7313280c\"" Sep 13 00:17:01.252087 containerd[1444]: time="2025-09-13T00:17:01.251981722Z" level=info msg="StartContainer for \"c6ca798ed752505a1d053721f2d041ab8d4ebff1cb1a8aa7d5eb62b8b9069498\" returns successfully" Sep 13 00:17:01.878836 kubelet[2490]: E0913 00:17:01.878426 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:01.878836 kubelet[2490]: E0913 00:17:01.878563 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:01.895210 kubelet[2490]: I0913 00:17:01.894520 2490 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-22r4m" podStartSLOduration=22.103956322 podStartE2EDuration="24.894504647s" podCreationTimestamp="2025-09-13 00:16:37 +0000 UTC" firstStartedPulling="2025-09-13 00:16:58.12315666 +0000 UTC m=+40.540864188" lastFinishedPulling="2025-09-13 00:17:00.913704985 +0000 UTC m=+43.331412513" observedRunningTime="2025-09-13 00:17:01.893977687 +0000 UTC m=+44.311685215" watchObservedRunningTime="2025-09-13 00:17:01.894504647 +0000 UTC m=+44.312212135" Sep 13 00:17:01.919089 kubelet[2490]: I0913 00:17:01.918706 2490 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-qj2mh" podStartSLOduration=38.918685423 podStartE2EDuration="38.918685423s" podCreationTimestamp="2025-09-13 00:16:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:17:01.918051142 +0000 UTC m=+44.335758670" watchObservedRunningTime="2025-09-13 00:17:01.918685423 +0000 UTC m=+44.336392951" Sep 13 00:17:02.377382 containerd[1444]: time="2025-09-13T00:17:02.377330697Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:17:02.379355 containerd[1444]: time="2025-09-13T00:17:02.379325818Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807" Sep 13 00:17:02.380529 containerd[1444]: time="2025-09-13T00:17:02.380500059Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:17:02.383199 containerd[1444]: time="2025-09-13T00:17:02.383165701Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:17:02.397217 containerd[1444]: time="2025-09-13T00:17:02.397178869Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 1.483118124s" Sep 13 00:17:02.397217 containerd[1444]: time="2025-09-13T00:17:02.397218709Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 13 00:17:02.398939 containerd[1444]: time="2025-09-13T00:17:02.398746430Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 13 00:17:02.402062 containerd[1444]: time="2025-09-13T00:17:02.402029032Z" level=info msg="CreateContainer within sandbox \"6ea8db0aee61cf1b23d675ffaf58f48c494fa9d9b102eae8769caac090de2712\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:17:02.416217 containerd[1444]: time="2025-09-13T00:17:02.416178080Z" level=info msg="CreateContainer within sandbox \"6ea8db0aee61cf1b23d675ffaf58f48c494fa9d9b102eae8769caac090de2712\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b96c571c67d68ffec37db81ae611a059aa51f3889d9004867155dec9c8c606bd\"" Sep 13 00:17:02.416770 containerd[1444]: time="2025-09-13T00:17:02.416742001Z" level=info msg="StartContainer for \"b96c571c67d68ffec37db81ae611a059aa51f3889d9004867155dec9c8c606bd\"" Sep 13 00:17:02.449763 systemd[1]: Started cri-containerd-b96c571c67d68ffec37db81ae611a059aa51f3889d9004867155dec9c8c606bd.scope - libcontainer container b96c571c67d68ffec37db81ae611a059aa51f3889d9004867155dec9c8c606bd. 
Sep 13 00:17:02.484508 containerd[1444]: time="2025-09-13T00:17:02.484459281Z" level=info msg="StartContainer for \"b96c571c67d68ffec37db81ae611a059aa51f3889d9004867155dec9c8c606bd\" returns successfully" Sep 13 00:17:02.884989 kubelet[2490]: E0913 00:17:02.882825 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:02.884989 kubelet[2490]: I0913 00:17:02.883125 2490 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:17:02.890425 systemd-networkd[1380]: calidef2df53869: Gained IPv6LL Sep 13 00:17:03.082069 systemd-networkd[1380]: calia9d34eff079: Gained IPv6LL Sep 13 00:17:03.884816 kubelet[2490]: I0913 00:17:03.884245 2490 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:17:03.885417 kubelet[2490]: E0913 00:17:03.885375 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:04.057273 containerd[1444]: time="2025-09-13T00:17:04.057215410Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:17:04.059010 containerd[1444]: time="2025-09-13T00:17:04.058890131Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208" Sep 13 00:17:04.060038 containerd[1444]: time="2025-09-13T00:17:04.059995892Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:17:04.064059 containerd[1444]: time="2025-09-13T00:17:04.063995094Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:17:04.064748 containerd[1444]: time="2025-09-13T00:17:04.064721254Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 1.665944384s" Sep 13 00:17:04.064793 containerd[1444]: time="2025-09-13T00:17:04.064755374Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\"" Sep 13 00:17:04.066664 containerd[1444]: time="2025-09-13T00:17:04.066630335Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:17:04.070220 containerd[1444]: time="2025-09-13T00:17:04.070185657Z" level=info msg="CreateContainer within sandbox \"7fe730565fc50e613efc2b53124308e6db9888465473d6de76981595e78067e5\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 13 00:17:04.102116 containerd[1444]: time="2025-09-13T00:17:04.102036394Z" level=info msg="CreateContainer within sandbox \"7fe730565fc50e613efc2b53124308e6db9888465473d6de76981595e78067e5\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} 
returns container id \"7f881ffbc053843fe8bd7388902021e7466b2c2a3c361707cda8911d85922c60\"" Sep 13 00:17:04.102898 containerd[1444]: time="2025-09-13T00:17:04.102795754Z" level=info msg="StartContainer for \"7f881ffbc053843fe8bd7388902021e7466b2c2a3c361707cda8911d85922c60\"" Sep 13 00:17:04.138778 systemd[1]: Started cri-containerd-7f881ffbc053843fe8bd7388902021e7466b2c2a3c361707cda8911d85922c60.scope - libcontainer container 7f881ffbc053843fe8bd7388902021e7466b2c2a3c361707cda8911d85922c60. Sep 13 00:17:04.175687 containerd[1444]: time="2025-09-13T00:17:04.175645792Z" level=info msg="StartContainer for \"7f881ffbc053843fe8bd7388902021e7466b2c2a3c361707cda8911d85922c60\" returns successfully" Sep 13 00:17:04.326539 containerd[1444]: time="2025-09-13T00:17:04.326481910Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:17:04.329576 containerd[1444]: time="2025-09-13T00:17:04.328497271Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 13 00:17:04.332047 containerd[1444]: time="2025-09-13T00:17:04.330646952Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 263.978977ms" Sep 13 00:17:04.332047 containerd[1444]: time="2025-09-13T00:17:04.330676632Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 13 00:17:04.332330 containerd[1444]: time="2025-09-13T00:17:04.332311633Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 13 00:17:04.336245 containerd[1444]: time="2025-09-13T00:17:04.336215355Z" level=info msg="CreateContainer within sandbox \"1f630dcccce7b0ba35bf91308813256abe57dfdecc67d61c19ed2990d6fc2935\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:17:04.354679 containerd[1444]: time="2025-09-13T00:17:04.354580925Z" level=info msg="CreateContainer within sandbox \"1f630dcccce7b0ba35bf91308813256abe57dfdecc67d61c19ed2990d6fc2935\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"698238a90bd52a427ade28273888f6a06fb055c7236f1cd24c058a9ecd5570db\"" Sep 13 00:17:04.355415 containerd[1444]: time="2025-09-13T00:17:04.355385285Z" level=info msg="StartContainer for \"698238a90bd52a427ade28273888f6a06fb055c7236f1cd24c058a9ecd5570db\"" Sep 13 00:17:04.379783 systemd[1]: Started cri-containerd-698238a90bd52a427ade28273888f6a06fb055c7236f1cd24c058a9ecd5570db.scope - libcontainer container 698238a90bd52a427ade28273888f6a06fb055c7236f1cd24c058a9ecd5570db. 
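Note the contrast between the two apiserver pulls: the first (at 00:17:02) read 44,530,807 bytes in 1.483118124s, while this second PullImage of the same tag returned in 263.978977ms with only 77 bytes read and an ImageUpdate event rather than ImageCreate — consistent with every layer already being present locally, so only the registry manifest had to be re-resolved.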
Sep 13 00:17:04.424250 containerd[1444]: time="2025-09-13T00:17:04.423948641Z" level=info msg="StartContainer for \"698238a90bd52a427ade28273888f6a06fb055c7236f1cd24c058a9ecd5570db\" returns successfully" Sep 13 00:17:04.748772 kubelet[2490]: I0913 00:17:04.748723 2490 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 13 00:17:04.748921 kubelet[2490]: I0913 00:17:04.748788 2490 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 13 00:17:04.908724 kubelet[2490]: I0913 00:17:04.908634 2490 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-fs48x" podStartSLOduration=20.85129789 podStartE2EDuration="26.908613933s" podCreationTimestamp="2025-09-13 00:16:38 +0000 UTC" firstStartedPulling="2025-09-13 00:16:58.008360772 +0000 UTC m=+40.426068300" lastFinishedPulling="2025-09-13 00:17:04.065676815 +0000 UTC m=+46.483384343" observedRunningTime="2025-09-13 00:17:04.903743651 +0000 UTC m=+47.321451179" watchObservedRunningTime="2025-09-13 00:17:04.908613933 +0000 UTC m=+47.326321461" Sep 13 00:17:04.909154 kubelet[2490]: I0913 00:17:04.908814 2490 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-859b9fb76c-tctwm" podStartSLOduration=27.652729177 podStartE2EDuration="30.908808733s" podCreationTimestamp="2025-09-13 00:16:34 +0000 UTC" firstStartedPulling="2025-09-13 00:16:59.141922953 +0000 UTC m=+41.559630481" lastFinishedPulling="2025-09-13 00:17:02.398002509 +0000 UTC m=+44.815710037" observedRunningTime="2025-09-13 00:17:02.908222291 +0000 UTC m=+45.325929819" watchObservedRunningTime="2025-09-13 00:17:04.908808733 +0000 UTC m=+47.326516261" Sep 13 00:17:04.919879 kubelet[2490]: I0913 00:17:04.919813 2490 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-859b9fb76c-hnqvt" podStartSLOduration=26.563403098 podStartE2EDuration="30.919797379s" podCreationTimestamp="2025-09-13 00:16:34 +0000 UTC" firstStartedPulling="2025-09-13 00:16:59.974994992 +0000 UTC m=+42.392702520" lastFinishedPulling="2025-09-13 00:17:04.331389313 +0000 UTC m=+46.749096801" observedRunningTime="2025-09-13 00:17:04.919785139 +0000 UTC m=+47.337492707" watchObservedRunningTime="2025-09-13 00:17:04.919797379 +0000 UTC m=+47.337504907" Sep 13 00:17:05.565054 systemd[1]: Started sshd@7-10.0.0.130:22-10.0.0.1:44396.service - OpenSSH per-connection server daemon (10.0.0.1:44396). Sep 13 00:17:05.638017 sshd[5312]: Accepted publickey for core from 10.0.0.1 port 44396 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw Sep 13 00:17:05.639878 sshd[5312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:17:05.644334 systemd-logind[1419]: New session 8 of user core. Sep 13 00:17:05.649740 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 13 00:17:05.909308 kubelet[2490]: I0913 00:17:05.907365 2490 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:17:05.989539 sshd[5312]: pam_unix(sshd:session): session closed for user core Sep 13 00:17:05.993692 systemd[1]: sshd@7-10.0.0.130:22-10.0.0.1:44396.service: Deactivated successfully. Sep 13 00:17:05.997212 systemd[1]: session-8.scope: Deactivated successfully. 
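The csi_plugin lines above are the kubelet half of the CSI registration handshake: Calico's node-driver-registrar serves a small gRPC registration API on a socket under /var/lib/kubelet/plugins_registry/, kubelet calls GetInfo to validate the driver, then registers it and acknowledges via NotifyRegistrationStatus. A minimal sketch of the registrar side, assuming the k8s.io/kubelet pluginregistration/v1 API; the registration socket path and logging are illustrative, and depending on the generated API version you may need to embed an Unimplemented server type:

    // registrar_sketch.go — a sketch of the registration server that produces
    // the "Trying to validate a new CSI Driver" / "Register new plugin" lines.
    package main

    import (
    	"context"
    	"log"
    	"net"

    	"google.golang.org/grpc"
    	registerapi "k8s.io/kubelet/pkg/apis/pluginregistration/v1"
    )

    type server struct{}

    // GetInfo is what kubelet calls to validate the new CSI driver.
    func (s server) GetInfo(ctx context.Context, _ *registerapi.InfoRequest) (*registerapi.PluginInfo, error) {
    	return &registerapi.PluginInfo{
    		Type:              registerapi.CSIPlugin,
    		Name:              "csi.tigera.io",
    		Endpoint:          "/var/lib/kubelet/plugins/csi.tigera.io/csi.sock",
    		SupportedVersions: []string{"1.0.0"},
    	}, nil
    }

    // NotifyRegistrationStatus is kubelet's ack after registering the plugin.
    func (s server) NotifyRegistrationStatus(ctx context.Context, st *registerapi.RegistrationStatus) (*registerapi.RegistrationStatusResponse, error) {
    	log.Printf("registration ok=%v err=%q", st.PluginRegistered, st.Error)
    	return &registerapi.RegistrationStatusResponse{}, nil
    }

    func main() {
    	lis, err := net.Listen("unix", "/var/lib/kubelet/plugins_registry/csi.tigera.io-reg.sock")
    	if err != nil {
    		log.Fatal(err)
    	}
    	g := grpc.NewServer()
    	registerapi.RegisterRegistrationServer(g, server{})
    	log.Fatal(g.Serve(lis))
    }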
Sep 13 00:17:05.999770 systemd-logind[1419]: Session 8 logged out. Waiting for processes to exit. Sep 13 00:17:06.002666 systemd-logind[1419]: Removed session 8. Sep 13 00:17:06.135435 containerd[1444]: time="2025-09-13T00:17:06.135385610Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:17:06.136366 containerd[1444]: time="2025-09-13T00:17:06.135990850Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957" Sep 13 00:17:06.137241 containerd[1444]: time="2025-09-13T00:17:06.137197291Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:17:06.139773 containerd[1444]: time="2025-09-13T00:17:06.139701772Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:17:06.140450 containerd[1444]: time="2025-09-13T00:17:06.140325972Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 1.807925619s" Sep 13 00:17:06.140450 containerd[1444]: time="2025-09-13T00:17:06.140360252Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\"" Sep 13 00:17:06.155997 containerd[1444]: time="2025-09-13T00:17:06.155948619Z" level=info msg="CreateContainer within sandbox \"1020e36dbd60caf926d432a588d129cfe36dcc8a183375db37db834a7313280c\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 13 00:17:06.167734 containerd[1444]: time="2025-09-13T00:17:06.167634385Z" level=info msg="CreateContainer within sandbox \"1020e36dbd60caf926d432a588d129cfe36dcc8a183375db37db834a7313280c\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"153c4b6f33855cea2bd669d3b3aea12736baa553b99102cabba6e69939402df0\"" Sep 13 00:17:06.168620 containerd[1444]: time="2025-09-13T00:17:06.168424265Z" level=info msg="StartContainer for \"153c4b6f33855cea2bd669d3b3aea12736baa553b99102cabba6e69939402df0\"" Sep 13 00:17:06.201784 systemd[1]: Started cri-containerd-153c4b6f33855cea2bd669d3b3aea12736baa553b99102cabba6e69939402df0.scope - libcontainer container 153c4b6f33855cea2bd669d3b3aea12736baa553b99102cabba6e69939402df0. 
Sep 13 00:17:06.244204 containerd[1444]: time="2025-09-13T00:17:06.242205499Z" level=info msg="StartContainer for \"153c4b6f33855cea2bd669d3b3aea12736baa553b99102cabba6e69939402df0\" returns successfully" Sep 13 00:17:06.925403 kubelet[2490]: I0913 00:17:06.925327 2490 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-fd6458858-dk8sk" podStartSLOduration=24.038936041 podStartE2EDuration="28.925310971s" podCreationTimestamp="2025-09-13 00:16:38 +0000 UTC" firstStartedPulling="2025-09-13 00:17:01.254601643 +0000 UTC m=+43.672309171" lastFinishedPulling="2025-09-13 00:17:06.140976613 +0000 UTC m=+48.558684101" observedRunningTime="2025-09-13 00:17:06.924350491 +0000 UTC m=+49.342057979" watchObservedRunningTime="2025-09-13 00:17:06.925310971 +0000 UTC m=+49.343018499" Sep 13 00:17:11.004392 systemd[1]: Started sshd@8-10.0.0.130:22-10.0.0.1:50988.service - OpenSSH per-connection server daemon (10.0.0.1:50988). Sep 13 00:17:11.069218 sshd[5405]: Accepted publickey for core from 10.0.0.1 port 50988 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw Sep 13 00:17:11.072179 sshd[5405]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:17:11.078312 systemd-logind[1419]: New session 9 of user core. Sep 13 00:17:11.086094 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 13 00:17:11.352431 sshd[5405]: pam_unix(sshd:session): session closed for user core Sep 13 00:17:11.357260 systemd[1]: sshd@8-10.0.0.130:22-10.0.0.1:50988.service: Deactivated successfully. Sep 13 00:17:11.359015 systemd[1]: session-9.scope: Deactivated successfully. Sep 13 00:17:11.360288 systemd-logind[1419]: Session 9 logged out. Waiting for processes to exit. Sep 13 00:17:11.361173 systemd-logind[1419]: Removed session 9. Sep 13 00:17:13.782795 kubelet[2490]: I0913 00:17:13.782743 2490 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:17:16.363995 systemd[1]: Started sshd@9-10.0.0.130:22-10.0.0.1:50998.service - OpenSSH per-connection server daemon (10.0.0.1:50998). Sep 13 00:17:16.452780 sshd[5470]: Accepted publickey for core from 10.0.0.1 port 50998 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw Sep 13 00:17:16.454139 sshd[5470]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:17:16.458586 systemd-logind[1419]: New session 10 of user core. Sep 13 00:17:16.464770 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 13 00:17:16.728186 sshd[5470]: pam_unix(sshd:session): session closed for user core Sep 13 00:17:16.737425 systemd[1]: sshd@9-10.0.0.130:22-10.0.0.1:50998.service: Deactivated successfully. Sep 13 00:17:16.739207 systemd[1]: session-10.scope: Deactivated successfully. Sep 13 00:17:16.740441 systemd-logind[1419]: Session 10 logged out. Waiting for processes to exit. Sep 13 00:17:16.746956 systemd[1]: Started sshd@10-10.0.0.130:22-10.0.0.1:51004.service - OpenSSH per-connection server daemon (10.0.0.1:51004). Sep 13 00:17:16.748286 systemd-logind[1419]: Removed session 10. Sep 13 00:17:16.781516 sshd[5485]: Accepted publickey for core from 10.0.0.1 port 51004 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw Sep 13 00:17:16.783248 sshd[5485]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:17:16.787267 systemd-logind[1419]: New session 11 of user core. Sep 13 00:17:16.798829 systemd[1]: Started session-11.scope - Session 11 of User core. 
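The StopPodSandbox/RemovePodSandbox sequence that follows is kubelet garbage-collecting the old csi-node-driver-fs48x sandbox 72bb7d3d…: the pod's WorkloadEndpoint now records its replacement sandbox 7fe730565f…, so the Calico plugin logs the CNI_CONTAINERID-mismatch warnings below, leaves the live endpoint in place, and the IPAM release finds no address under the stale handle and ignores it — the benign outcome those WARNING lines describe.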
Sep 13 00:17:17.007075 sshd[5485]: pam_unix(sshd:session): session closed for user core Sep 13 00:17:17.019263 systemd[1]: sshd@10-10.0.0.130:22-10.0.0.1:51004.service: Deactivated successfully. Sep 13 00:17:17.024113 systemd[1]: session-11.scope: Deactivated successfully. Sep 13 00:17:17.029063 systemd-logind[1419]: Session 11 logged out. Waiting for processes to exit. Sep 13 00:17:17.034921 systemd[1]: Started sshd@11-10.0.0.130:22-10.0.0.1:51016.service - OpenSSH per-connection server daemon (10.0.0.1:51016). Sep 13 00:17:17.040152 systemd-logind[1419]: Removed session 11. Sep 13 00:17:17.068862 sshd[5498]: Accepted publickey for core from 10.0.0.1 port 51016 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw Sep 13 00:17:17.070410 sshd[5498]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:17:17.074583 systemd-logind[1419]: New session 12 of user core. Sep 13 00:17:17.086798 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 13 00:17:17.241007 sshd[5498]: pam_unix(sshd:session): session closed for user core Sep 13 00:17:17.245076 systemd[1]: sshd@11-10.0.0.130:22-10.0.0.1:51016.service: Deactivated successfully. Sep 13 00:17:17.247036 systemd[1]: session-12.scope: Deactivated successfully. Sep 13 00:17:17.247694 systemd-logind[1419]: Session 12 logged out. Waiting for processes to exit. Sep 13 00:17:17.248790 systemd-logind[1419]: Removed session 12. Sep 13 00:17:17.664019 containerd[1444]: time="2025-09-13T00:17:17.663939655Z" level=info msg="StopPodSandbox for \"72bb7d3d6bce7112110c73bc80c881c9ae46c5d3e9b2c37640418d330431e6a6\"" Sep 13 00:17:17.741780 containerd[1444]: 2025-09-13 00:17:17.708 [WARNING][5524] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="72bb7d3d6bce7112110c73bc80c881c9ae46c5d3e9b2c37640418d330431e6a6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--fs48x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0b49a5c7-d8ed-4263-b267-b04f7372f88c", ResourceVersion:"1097", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 16, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7fe730565fc50e613efc2b53124308e6db9888465473d6de76981595e78067e5", Pod:"csi-node-driver-fs48x", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9c7a8e4bb20", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:17:17.741780 containerd[1444]: 2025-09-13 00:17:17.709 [INFO][5524] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="72bb7d3d6bce7112110c73bc80c881c9ae46c5d3e9b2c37640418d330431e6a6" Sep 13 00:17:17.741780 containerd[1444]: 2025-09-13 00:17:17.709 [INFO][5524] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="72bb7d3d6bce7112110c73bc80c881c9ae46c5d3e9b2c37640418d330431e6a6" iface="eth0" netns="" Sep 13 00:17:17.741780 containerd[1444]: 2025-09-13 00:17:17.709 [INFO][5524] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="72bb7d3d6bce7112110c73bc80c881c9ae46c5d3e9b2c37640418d330431e6a6" Sep 13 00:17:17.741780 containerd[1444]: 2025-09-13 00:17:17.709 [INFO][5524] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="72bb7d3d6bce7112110c73bc80c881c9ae46c5d3e9b2c37640418d330431e6a6" Sep 13 00:17:17.741780 containerd[1444]: 2025-09-13 00:17:17.728 [INFO][5534] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="72bb7d3d6bce7112110c73bc80c881c9ae46c5d3e9b2c37640418d330431e6a6" HandleID="k8s-pod-network.72bb7d3d6bce7112110c73bc80c881c9ae46c5d3e9b2c37640418d330431e6a6" Workload="localhost-k8s-csi--node--driver--fs48x-eth0" Sep 13 00:17:17.741780 containerd[1444]: 2025-09-13 00:17:17.728 [INFO][5534] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:17:17.741780 containerd[1444]: 2025-09-13 00:17:17.728 [INFO][5534] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:17:17.741780 containerd[1444]: 2025-09-13 00:17:17.737 [WARNING][5534] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="72bb7d3d6bce7112110c73bc80c881c9ae46c5d3e9b2c37640418d330431e6a6" HandleID="k8s-pod-network.72bb7d3d6bce7112110c73bc80c881c9ae46c5d3e9b2c37640418d330431e6a6" Workload="localhost-k8s-csi--node--driver--fs48x-eth0" Sep 13 00:17:17.741780 containerd[1444]: 2025-09-13 00:17:17.737 [INFO][5534] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="72bb7d3d6bce7112110c73bc80c881c9ae46c5d3e9b2c37640418d330431e6a6" HandleID="k8s-pod-network.72bb7d3d6bce7112110c73bc80c881c9ae46c5d3e9b2c37640418d330431e6a6" Workload="localhost-k8s-csi--node--driver--fs48x-eth0" Sep 13 00:17:17.741780 containerd[1444]: 2025-09-13 00:17:17.738 [INFO][5534] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:17:17.741780 containerd[1444]: 2025-09-13 00:17:17.740 [INFO][5524] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="72bb7d3d6bce7112110c73bc80c881c9ae46c5d3e9b2c37640418d330431e6a6" Sep 13 00:17:17.742458 containerd[1444]: time="2025-09-13T00:17:17.742236633Z" level=info msg="TearDown network for sandbox \"72bb7d3d6bce7112110c73bc80c881c9ae46c5d3e9b2c37640418d330431e6a6\" successfully" Sep 13 00:17:17.742458 containerd[1444]: time="2025-09-13T00:17:17.742269473Z" level=info msg="StopPodSandbox for \"72bb7d3d6bce7112110c73bc80c881c9ae46c5d3e9b2c37640418d330431e6a6\" returns successfully" Sep 13 00:17:17.743160 containerd[1444]: time="2025-09-13T00:17:17.743112153Z" level=info msg="RemovePodSandbox for \"72bb7d3d6bce7112110c73bc80c881c9ae46c5d3e9b2c37640418d330431e6a6\"" Sep 13 00:17:17.750472 containerd[1444]: time="2025-09-13T00:17:17.750421395Z" level=info msg="Forcibly stopping sandbox \"72bb7d3d6bce7112110c73bc80c881c9ae46c5d3e9b2c37640418d330431e6a6\"" Sep 13 00:17:17.826588 containerd[1444]: 2025-09-13 00:17:17.784 [WARNING][5551] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="72bb7d3d6bce7112110c73bc80c881c9ae46c5d3e9b2c37640418d330431e6a6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--fs48x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0b49a5c7-d8ed-4263-b267-b04f7372f88c", ResourceVersion:"1097", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 16, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7fe730565fc50e613efc2b53124308e6db9888465473d6de76981595e78067e5", Pod:"csi-node-driver-fs48x", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9c7a8e4bb20", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:17:17.826588 containerd[1444]: 2025-09-13 00:17:17.785 [INFO][5551] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="72bb7d3d6bce7112110c73bc80c881c9ae46c5d3e9b2c37640418d330431e6a6" Sep 13 00:17:17.826588 containerd[1444]: 2025-09-13 00:17:17.785 [INFO][5551] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="72bb7d3d6bce7112110c73bc80c881c9ae46c5d3e9b2c37640418d330431e6a6" iface="eth0" netns="" Sep 13 00:17:17.826588 containerd[1444]: 2025-09-13 00:17:17.785 [INFO][5551] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="72bb7d3d6bce7112110c73bc80c881c9ae46c5d3e9b2c37640418d330431e6a6" Sep 13 00:17:17.826588 containerd[1444]: 2025-09-13 00:17:17.785 [INFO][5551] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="72bb7d3d6bce7112110c73bc80c881c9ae46c5d3e9b2c37640418d330431e6a6" Sep 13 00:17:17.826588 containerd[1444]: 2025-09-13 00:17:17.804 [INFO][5560] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="72bb7d3d6bce7112110c73bc80c881c9ae46c5d3e9b2c37640418d330431e6a6" HandleID="k8s-pod-network.72bb7d3d6bce7112110c73bc80c881c9ae46c5d3e9b2c37640418d330431e6a6" Workload="localhost-k8s-csi--node--driver--fs48x-eth0" Sep 13 00:17:17.826588 containerd[1444]: 2025-09-13 00:17:17.804 [INFO][5560] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:17:17.826588 containerd[1444]: 2025-09-13 00:17:17.804 [INFO][5560] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:17:17.826588 containerd[1444]: 2025-09-13 00:17:17.819 [WARNING][5560] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="72bb7d3d6bce7112110c73bc80c881c9ae46c5d3e9b2c37640418d330431e6a6" HandleID="k8s-pod-network.72bb7d3d6bce7112110c73bc80c881c9ae46c5d3e9b2c37640418d330431e6a6" Workload="localhost-k8s-csi--node--driver--fs48x-eth0" Sep 13 00:17:17.826588 containerd[1444]: 2025-09-13 00:17:17.819 [INFO][5560] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="72bb7d3d6bce7112110c73bc80c881c9ae46c5d3e9b2c37640418d330431e6a6" HandleID="k8s-pod-network.72bb7d3d6bce7112110c73bc80c881c9ae46c5d3e9b2c37640418d330431e6a6" Workload="localhost-k8s-csi--node--driver--fs48x-eth0" Sep 13 00:17:17.826588 containerd[1444]: 2025-09-13 00:17:17.822 [INFO][5560] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:17:17.826588 containerd[1444]: 2025-09-13 00:17:17.825 [INFO][5551] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="72bb7d3d6bce7112110c73bc80c881c9ae46c5d3e9b2c37640418d330431e6a6" Sep 13 00:17:17.827093 containerd[1444]: time="2025-09-13T00:17:17.826813812Z" level=info msg="TearDown network for sandbox \"72bb7d3d6bce7112110c73bc80c881c9ae46c5d3e9b2c37640418d330431e6a6\" successfully" Sep 13 00:17:17.863625 containerd[1444]: time="2025-09-13T00:17:17.863558860Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"72bb7d3d6bce7112110c73bc80c881c9ae46c5d3e9b2c37640418d330431e6a6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:17:17.863768 containerd[1444]: time="2025-09-13T00:17:17.863664700Z" level=info msg="RemovePodSandbox \"72bb7d3d6bce7112110c73bc80c881c9ae46c5d3e9b2c37640418d330431e6a6\" returns successfully" Sep 13 00:17:17.864161 containerd[1444]: time="2025-09-13T00:17:17.864132980Z" level=info msg="StopPodSandbox for \"6f045093ea64e7ab6523efd22b71b84a995a0b8aa3bb0d7c2b4957db7a20f022\"" Sep 13 00:17:17.931953 containerd[1444]: 2025-09-13 00:17:17.897 [WARNING][5579] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6f045093ea64e7ab6523efd22b71b84a995a0b8aa3bb0d7c2b4957db7a20f022" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--859b9fb76c--tctwm-eth0", GenerateName:"calico-apiserver-859b9fb76c-", Namespace:"calico-apiserver", SelfLink:"", UID:"c8e6805e-32c3-4cf6-8f89-9d8311bf375d", ResourceVersion:"1082", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 16, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"859b9fb76c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6ea8db0aee61cf1b23d675ffaf58f48c494fa9d9b102eae8769caac090de2712", Pod:"calico-apiserver-859b9fb76c-tctwm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3b2bdcbd070", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:17:17.931953 containerd[1444]: 2025-09-13 00:17:17.898 [INFO][5579] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6f045093ea64e7ab6523efd22b71b84a995a0b8aa3bb0d7c2b4957db7a20f022" Sep 13 00:17:17.931953 containerd[1444]: 2025-09-13 00:17:17.898 [INFO][5579] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6f045093ea64e7ab6523efd22b71b84a995a0b8aa3bb0d7c2b4957db7a20f022" iface="eth0" netns="" Sep 13 00:17:17.931953 containerd[1444]: 2025-09-13 00:17:17.898 [INFO][5579] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6f045093ea64e7ab6523efd22b71b84a995a0b8aa3bb0d7c2b4957db7a20f022" Sep 13 00:17:17.931953 containerd[1444]: 2025-09-13 00:17:17.898 [INFO][5579] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6f045093ea64e7ab6523efd22b71b84a995a0b8aa3bb0d7c2b4957db7a20f022" Sep 13 00:17:17.931953 containerd[1444]: 2025-09-13 00:17:17.917 [INFO][5588] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6f045093ea64e7ab6523efd22b71b84a995a0b8aa3bb0d7c2b4957db7a20f022" HandleID="k8s-pod-network.6f045093ea64e7ab6523efd22b71b84a995a0b8aa3bb0d7c2b4957db7a20f022" Workload="localhost-k8s-calico--apiserver--859b9fb76c--tctwm-eth0" Sep 13 00:17:17.931953 containerd[1444]: 2025-09-13 00:17:17.917 [INFO][5588] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:17:17.931953 containerd[1444]: 2025-09-13 00:17:17.917 [INFO][5588] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:17:17.931953 containerd[1444]: 2025-09-13 00:17:17.926 [WARNING][5588] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6f045093ea64e7ab6523efd22b71b84a995a0b8aa3bb0d7c2b4957db7a20f022" HandleID="k8s-pod-network.6f045093ea64e7ab6523efd22b71b84a995a0b8aa3bb0d7c2b4957db7a20f022" Workload="localhost-k8s-calico--apiserver--859b9fb76c--tctwm-eth0" Sep 13 00:17:17.931953 containerd[1444]: 2025-09-13 00:17:17.926 [INFO][5588] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6f045093ea64e7ab6523efd22b71b84a995a0b8aa3bb0d7c2b4957db7a20f022" HandleID="k8s-pod-network.6f045093ea64e7ab6523efd22b71b84a995a0b8aa3bb0d7c2b4957db7a20f022" Workload="localhost-k8s-calico--apiserver--859b9fb76c--tctwm-eth0" Sep 13 00:17:17.931953 containerd[1444]: 2025-09-13 00:17:17.927 [INFO][5588] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:17:17.931953 containerd[1444]: 2025-09-13 00:17:17.930 [INFO][5579] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6f045093ea64e7ab6523efd22b71b84a995a0b8aa3bb0d7c2b4957db7a20f022" Sep 13 00:17:17.932318 containerd[1444]: time="2025-09-13T00:17:17.931942715Z" level=info msg="TearDown network for sandbox \"6f045093ea64e7ab6523efd22b71b84a995a0b8aa3bb0d7c2b4957db7a20f022\" successfully" Sep 13 00:17:17.932318 containerd[1444]: time="2025-09-13T00:17:17.931980315Z" level=info msg="StopPodSandbox for \"6f045093ea64e7ab6523efd22b71b84a995a0b8aa3bb0d7c2b4957db7a20f022\" returns successfully" Sep 13 00:17:17.933207 containerd[1444]: time="2025-09-13T00:17:17.932801636Z" level=info msg="RemovePodSandbox for \"6f045093ea64e7ab6523efd22b71b84a995a0b8aa3bb0d7c2b4957db7a20f022\"" Sep 13 00:17:17.933207 containerd[1444]: time="2025-09-13T00:17:17.932836396Z" level=info msg="Forcibly stopping sandbox \"6f045093ea64e7ab6523efd22b71b84a995a0b8aa3bb0d7c2b4957db7a20f022\"" Sep 13 00:17:18.000896 containerd[1444]: 2025-09-13 00:17:17.969 [WARNING][5606] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6f045093ea64e7ab6523efd22b71b84a995a0b8aa3bb0d7c2b4957db7a20f022" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--859b9fb76c--tctwm-eth0", GenerateName:"calico-apiserver-859b9fb76c-", Namespace:"calico-apiserver", SelfLink:"", UID:"c8e6805e-32c3-4cf6-8f89-9d8311bf375d", ResourceVersion:"1082", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 16, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"859b9fb76c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6ea8db0aee61cf1b23d675ffaf58f48c494fa9d9b102eae8769caac090de2712", Pod:"calico-apiserver-859b9fb76c-tctwm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3b2bdcbd070", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:17:18.000896 containerd[1444]: 2025-09-13 00:17:17.969 [INFO][5606] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6f045093ea64e7ab6523efd22b71b84a995a0b8aa3bb0d7c2b4957db7a20f022" Sep 13 00:17:18.000896 containerd[1444]: 2025-09-13 00:17:17.969 [INFO][5606] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6f045093ea64e7ab6523efd22b71b84a995a0b8aa3bb0d7c2b4957db7a20f022" iface="eth0" netns="" Sep 13 00:17:18.000896 containerd[1444]: 2025-09-13 00:17:17.969 [INFO][5606] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6f045093ea64e7ab6523efd22b71b84a995a0b8aa3bb0d7c2b4957db7a20f022" Sep 13 00:17:18.000896 containerd[1444]: 2025-09-13 00:17:17.969 [INFO][5606] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6f045093ea64e7ab6523efd22b71b84a995a0b8aa3bb0d7c2b4957db7a20f022" Sep 13 00:17:18.000896 containerd[1444]: 2025-09-13 00:17:17.986 [INFO][5614] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6f045093ea64e7ab6523efd22b71b84a995a0b8aa3bb0d7c2b4957db7a20f022" HandleID="k8s-pod-network.6f045093ea64e7ab6523efd22b71b84a995a0b8aa3bb0d7c2b4957db7a20f022" Workload="localhost-k8s-calico--apiserver--859b9fb76c--tctwm-eth0" Sep 13 00:17:18.000896 containerd[1444]: 2025-09-13 00:17:17.987 [INFO][5614] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:17:18.000896 containerd[1444]: 2025-09-13 00:17:17.987 [INFO][5614] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:17:18.000896 containerd[1444]: 2025-09-13 00:17:17.996 [WARNING][5614] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6f045093ea64e7ab6523efd22b71b84a995a0b8aa3bb0d7c2b4957db7a20f022" HandleID="k8s-pod-network.6f045093ea64e7ab6523efd22b71b84a995a0b8aa3bb0d7c2b4957db7a20f022" Workload="localhost-k8s-calico--apiserver--859b9fb76c--tctwm-eth0" Sep 13 00:17:18.000896 containerd[1444]: 2025-09-13 00:17:17.996 [INFO][5614] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6f045093ea64e7ab6523efd22b71b84a995a0b8aa3bb0d7c2b4957db7a20f022" HandleID="k8s-pod-network.6f045093ea64e7ab6523efd22b71b84a995a0b8aa3bb0d7c2b4957db7a20f022" Workload="localhost-k8s-calico--apiserver--859b9fb76c--tctwm-eth0" Sep 13 00:17:18.000896 containerd[1444]: 2025-09-13 00:17:17.997 [INFO][5614] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:17:18.000896 containerd[1444]: 2025-09-13 00:17:17.999 [INFO][5606] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6f045093ea64e7ab6523efd22b71b84a995a0b8aa3bb0d7c2b4957db7a20f022" Sep 13 00:17:18.001294 containerd[1444]: time="2025-09-13T00:17:18.000942011Z" level=info msg="TearDown network for sandbox \"6f045093ea64e7ab6523efd22b71b84a995a0b8aa3bb0d7c2b4957db7a20f022\" successfully" Sep 13 00:17:18.006523 containerd[1444]: time="2025-09-13T00:17:18.006471892Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6f045093ea64e7ab6523efd22b71b84a995a0b8aa3bb0d7c2b4957db7a20f022\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:17:18.006634 containerd[1444]: time="2025-09-13T00:17:18.006551892Z" level=info msg="RemovePodSandbox \"6f045093ea64e7ab6523efd22b71b84a995a0b8aa3bb0d7c2b4957db7a20f022\" returns successfully" Sep 13 00:17:18.007043 containerd[1444]: time="2025-09-13T00:17:18.007014612Z" level=info msg="StopPodSandbox for \"d1d75c534df5d5f68fb52f69ac65e56c2c79024319ca0e563bfbaa673fc45661\"" Sep 13 00:17:18.079023 containerd[1444]: 2025-09-13 00:17:18.042 [WARNING][5632] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d1d75c534df5d5f68fb52f69ac65e56c2c79024319ca0e563bfbaa673fc45661" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--22r4m-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"c78523f1-1b2e-44f8-9fd4-6f6c075a99ad", ResourceVersion:"1069", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 16, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5f15513d95ad8550fcb083b05d4a8653f94a421f739ec19edb1ae3fe436843a4", Pod:"goldmane-54d579b49d-22r4m", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali16afb906159", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:17:18.079023 containerd[1444]: 2025-09-13 00:17:18.042 [INFO][5632] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d1d75c534df5d5f68fb52f69ac65e56c2c79024319ca0e563bfbaa673fc45661" Sep 13 00:17:18.079023 containerd[1444]: 2025-09-13 00:17:18.043 [INFO][5632] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d1d75c534df5d5f68fb52f69ac65e56c2c79024319ca0e563bfbaa673fc45661" iface="eth0" netns="" Sep 13 00:17:18.079023 containerd[1444]: 2025-09-13 00:17:18.043 [INFO][5632] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d1d75c534df5d5f68fb52f69ac65e56c2c79024319ca0e563bfbaa673fc45661" Sep 13 00:17:18.079023 containerd[1444]: 2025-09-13 00:17:18.043 [INFO][5632] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d1d75c534df5d5f68fb52f69ac65e56c2c79024319ca0e563bfbaa673fc45661" Sep 13 00:17:18.079023 containerd[1444]: 2025-09-13 00:17:18.062 [INFO][5641] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d1d75c534df5d5f68fb52f69ac65e56c2c79024319ca0e563bfbaa673fc45661" HandleID="k8s-pod-network.d1d75c534df5d5f68fb52f69ac65e56c2c79024319ca0e563bfbaa673fc45661" Workload="localhost-k8s-goldmane--54d579b49d--22r4m-eth0" Sep 13 00:17:18.079023 containerd[1444]: 2025-09-13 00:17:18.062 [INFO][5641] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:17:18.079023 containerd[1444]: 2025-09-13 00:17:18.062 [INFO][5641] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:17:18.079023 containerd[1444]: 2025-09-13 00:17:18.072 [WARNING][5641] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d1d75c534df5d5f68fb52f69ac65e56c2c79024319ca0e563bfbaa673fc45661" HandleID="k8s-pod-network.d1d75c534df5d5f68fb52f69ac65e56c2c79024319ca0e563bfbaa673fc45661" Workload="localhost-k8s-goldmane--54d579b49d--22r4m-eth0" Sep 13 00:17:18.079023 containerd[1444]: 2025-09-13 00:17:18.072 [INFO][5641] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d1d75c534df5d5f68fb52f69ac65e56c2c79024319ca0e563bfbaa673fc45661" HandleID="k8s-pod-network.d1d75c534df5d5f68fb52f69ac65e56c2c79024319ca0e563bfbaa673fc45661" Workload="localhost-k8s-goldmane--54d579b49d--22r4m-eth0" Sep 13 00:17:18.079023 containerd[1444]: 2025-09-13 00:17:18.073 [INFO][5641] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:17:18.079023 containerd[1444]: 2025-09-13 00:17:18.077 [INFO][5632] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d1d75c534df5d5f68fb52f69ac65e56c2c79024319ca0e563bfbaa673fc45661" Sep 13 00:17:18.079871 containerd[1444]: time="2025-09-13T00:17:18.079068107Z" level=info msg="TearDown network for sandbox \"d1d75c534df5d5f68fb52f69ac65e56c2c79024319ca0e563bfbaa673fc45661\" successfully" Sep 13 00:17:18.079871 containerd[1444]: time="2025-09-13T00:17:18.079092907Z" level=info msg="StopPodSandbox for \"d1d75c534df5d5f68fb52f69ac65e56c2c79024319ca0e563bfbaa673fc45661\" returns successfully" Sep 13 00:17:18.079871 containerd[1444]: time="2025-09-13T00:17:18.079565227Z" level=info msg="RemovePodSandbox for \"d1d75c534df5d5f68fb52f69ac65e56c2c79024319ca0e563bfbaa673fc45661\"" Sep 13 00:17:18.079871 containerd[1444]: time="2025-09-13T00:17:18.079618667Z" level=info msg="Forcibly stopping sandbox \"d1d75c534df5d5f68fb52f69ac65e56c2c79024319ca0e563bfbaa673fc45661\"" Sep 13 00:17:18.154301 containerd[1444]: 2025-09-13 00:17:18.118 [WARNING][5659] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d1d75c534df5d5f68fb52f69ac65e56c2c79024319ca0e563bfbaa673fc45661" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--22r4m-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"c78523f1-1b2e-44f8-9fd4-6f6c075a99ad", ResourceVersion:"1069", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 16, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5f15513d95ad8550fcb083b05d4a8653f94a421f739ec19edb1ae3fe436843a4", Pod:"goldmane-54d579b49d-22r4m", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali16afb906159", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:17:18.154301 containerd[1444]: 2025-09-13 00:17:18.118 [INFO][5659] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d1d75c534df5d5f68fb52f69ac65e56c2c79024319ca0e563bfbaa673fc45661" Sep 13 00:17:18.154301 containerd[1444]: 2025-09-13 00:17:18.118 [INFO][5659] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d1d75c534df5d5f68fb52f69ac65e56c2c79024319ca0e563bfbaa673fc45661" iface="eth0" netns="" Sep 13 00:17:18.154301 containerd[1444]: 2025-09-13 00:17:18.118 [INFO][5659] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d1d75c534df5d5f68fb52f69ac65e56c2c79024319ca0e563bfbaa673fc45661" Sep 13 00:17:18.154301 containerd[1444]: 2025-09-13 00:17:18.118 [INFO][5659] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d1d75c534df5d5f68fb52f69ac65e56c2c79024319ca0e563bfbaa673fc45661" Sep 13 00:17:18.154301 containerd[1444]: 2025-09-13 00:17:18.137 [INFO][5668] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d1d75c534df5d5f68fb52f69ac65e56c2c79024319ca0e563bfbaa673fc45661" HandleID="k8s-pod-network.d1d75c534df5d5f68fb52f69ac65e56c2c79024319ca0e563bfbaa673fc45661" Workload="localhost-k8s-goldmane--54d579b49d--22r4m-eth0" Sep 13 00:17:18.154301 containerd[1444]: 2025-09-13 00:17:18.137 [INFO][5668] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:17:18.154301 containerd[1444]: 2025-09-13 00:17:18.137 [INFO][5668] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:17:18.154301 containerd[1444]: 2025-09-13 00:17:18.146 [WARNING][5668] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d1d75c534df5d5f68fb52f69ac65e56c2c79024319ca0e563bfbaa673fc45661" HandleID="k8s-pod-network.d1d75c534df5d5f68fb52f69ac65e56c2c79024319ca0e563bfbaa673fc45661" Workload="localhost-k8s-goldmane--54d579b49d--22r4m-eth0" Sep 13 00:17:18.154301 containerd[1444]: 2025-09-13 00:17:18.146 [INFO][5668] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d1d75c534df5d5f68fb52f69ac65e56c2c79024319ca0e563bfbaa673fc45661" HandleID="k8s-pod-network.d1d75c534df5d5f68fb52f69ac65e56c2c79024319ca0e563bfbaa673fc45661" Workload="localhost-k8s-goldmane--54d579b49d--22r4m-eth0" Sep 13 00:17:18.154301 containerd[1444]: 2025-09-13 00:17:18.148 [INFO][5668] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:17:18.154301 containerd[1444]: 2025-09-13 00:17:18.150 [INFO][5659] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d1d75c534df5d5f68fb52f69ac65e56c2c79024319ca0e563bfbaa673fc45661" Sep 13 00:17:18.154301 containerd[1444]: time="2025-09-13T00:17:18.154266923Z" level=info msg="TearDown network for sandbox \"d1d75c534df5d5f68fb52f69ac65e56c2c79024319ca0e563bfbaa673fc45661\" successfully" Sep 13 00:17:18.226268 containerd[1444]: time="2025-09-13T00:17:18.226209858Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d1d75c534df5d5f68fb52f69ac65e56c2c79024319ca0e563bfbaa673fc45661\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:17:18.226405 containerd[1444]: time="2025-09-13T00:17:18.226296298Z" level=info msg="RemovePodSandbox \"d1d75c534df5d5f68fb52f69ac65e56c2c79024319ca0e563bfbaa673fc45661\" returns successfully" Sep 13 00:17:18.226797 containerd[1444]: time="2025-09-13T00:17:18.226764178Z" level=info msg="StopPodSandbox for \"48be175f15263d25946c5fafd5d93ea0ba82b221a8a78f549856a0774ad11b96\"" Sep 13 00:17:18.295583 containerd[1444]: 2025-09-13 00:17:18.260 [WARNING][5686] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="48be175f15263d25946c5fafd5d93ea0ba82b221a8a78f549856a0774ad11b96" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--qj2mh-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"da7a220d-d282-4f5a-9d7b-1fe40051e284", ResourceVersion:"1072", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 16, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f723fc0f611c8d0a1c03a5963f3f921529d994be4b99527a28a851911f60330a", Pod:"coredns-674b8bbfcf-qj2mh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidef2df53869", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:17:18.295583 containerd[1444]: 2025-09-13 00:17:18.261 [INFO][5686] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="48be175f15263d25946c5fafd5d93ea0ba82b221a8a78f549856a0774ad11b96" Sep 13 00:17:18.295583 containerd[1444]: 2025-09-13 00:17:18.261 [INFO][5686] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="48be175f15263d25946c5fafd5d93ea0ba82b221a8a78f549856a0774ad11b96" iface="eth0" netns="" Sep 13 00:17:18.295583 containerd[1444]: 2025-09-13 00:17:18.261 [INFO][5686] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="48be175f15263d25946c5fafd5d93ea0ba82b221a8a78f549856a0774ad11b96" Sep 13 00:17:18.295583 containerd[1444]: 2025-09-13 00:17:18.261 [INFO][5686] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="48be175f15263d25946c5fafd5d93ea0ba82b221a8a78f549856a0774ad11b96" Sep 13 00:17:18.295583 containerd[1444]: 2025-09-13 00:17:18.279 [INFO][5694] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="48be175f15263d25946c5fafd5d93ea0ba82b221a8a78f549856a0774ad11b96" HandleID="k8s-pod-network.48be175f15263d25946c5fafd5d93ea0ba82b221a8a78f549856a0774ad11b96" Workload="localhost-k8s-coredns--674b8bbfcf--qj2mh-eth0" Sep 13 00:17:18.295583 containerd[1444]: 2025-09-13 00:17:18.280 [INFO][5694] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:17:18.295583 containerd[1444]: 2025-09-13 00:17:18.280 [INFO][5694] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:17:18.295583 containerd[1444]: 2025-09-13 00:17:18.290 [WARNING][5694] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="48be175f15263d25946c5fafd5d93ea0ba82b221a8a78f549856a0774ad11b96" HandleID="k8s-pod-network.48be175f15263d25946c5fafd5d93ea0ba82b221a8a78f549856a0774ad11b96" Workload="localhost-k8s-coredns--674b8bbfcf--qj2mh-eth0" Sep 13 00:17:18.295583 containerd[1444]: 2025-09-13 00:17:18.290 [INFO][5694] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="48be175f15263d25946c5fafd5d93ea0ba82b221a8a78f549856a0774ad11b96" HandleID="k8s-pod-network.48be175f15263d25946c5fafd5d93ea0ba82b221a8a78f549856a0774ad11b96" Workload="localhost-k8s-coredns--674b8bbfcf--qj2mh-eth0" Sep 13 00:17:18.295583 containerd[1444]: 2025-09-13 00:17:18.292 [INFO][5694] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:17:18.295583 containerd[1444]: 2025-09-13 00:17:18.294 [INFO][5686] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="48be175f15263d25946c5fafd5d93ea0ba82b221a8a78f549856a0774ad11b96" Sep 13 00:17:18.296113 containerd[1444]: time="2025-09-13T00:17:18.295636913Z" level=info msg="TearDown network for sandbox \"48be175f15263d25946c5fafd5d93ea0ba82b221a8a78f549856a0774ad11b96\" successfully" Sep 13 00:17:18.296113 containerd[1444]: time="2025-09-13T00:17:18.295661433Z" level=info msg="StopPodSandbox for \"48be175f15263d25946c5fafd5d93ea0ba82b221a8a78f549856a0774ad11b96\" returns successfully" Sep 13 00:17:18.296113 containerd[1444]: time="2025-09-13T00:17:18.296096673Z" level=info msg="RemovePodSandbox for \"48be175f15263d25946c5fafd5d93ea0ba82b221a8a78f549856a0774ad11b96\"" Sep 13 00:17:18.296182 containerd[1444]: time="2025-09-13T00:17:18.296126913Z" level=info msg="Forcibly stopping sandbox \"48be175f15263d25946c5fafd5d93ea0ba82b221a8a78f549856a0774ad11b96\"" Sep 13 00:17:18.383467 containerd[1444]: 2025-09-13 00:17:18.344 [WARNING][5712] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="48be175f15263d25946c5fafd5d93ea0ba82b221a8a78f549856a0774ad11b96" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--qj2mh-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"da7a220d-d282-4f5a-9d7b-1fe40051e284", ResourceVersion:"1072", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 16, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f723fc0f611c8d0a1c03a5963f3f921529d994be4b99527a28a851911f60330a", Pod:"coredns-674b8bbfcf-qj2mh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidef2df53869", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:17:18.383467 containerd[1444]: 2025-09-13 00:17:18.344 [INFO][5712] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="48be175f15263d25946c5fafd5d93ea0ba82b221a8a78f549856a0774ad11b96" Sep 13 00:17:18.383467 containerd[1444]: 2025-09-13 00:17:18.344 [INFO][5712] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="48be175f15263d25946c5fafd5d93ea0ba82b221a8a78f549856a0774ad11b96" iface="eth0" netns="" Sep 13 00:17:18.383467 containerd[1444]: 2025-09-13 00:17:18.344 [INFO][5712] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="48be175f15263d25946c5fafd5d93ea0ba82b221a8a78f549856a0774ad11b96" Sep 13 00:17:18.383467 containerd[1444]: 2025-09-13 00:17:18.344 [INFO][5712] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="48be175f15263d25946c5fafd5d93ea0ba82b221a8a78f549856a0774ad11b96" Sep 13 00:17:18.383467 containerd[1444]: 2025-09-13 00:17:18.369 [INFO][5721] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="48be175f15263d25946c5fafd5d93ea0ba82b221a8a78f549856a0774ad11b96" HandleID="k8s-pod-network.48be175f15263d25946c5fafd5d93ea0ba82b221a8a78f549856a0774ad11b96" Workload="localhost-k8s-coredns--674b8bbfcf--qj2mh-eth0" Sep 13 00:17:18.383467 containerd[1444]: 2025-09-13 00:17:18.369 [INFO][5721] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:17:18.383467 containerd[1444]: 2025-09-13 00:17:18.369 [INFO][5721] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:17:18.383467 containerd[1444]: 2025-09-13 00:17:18.378 [WARNING][5721] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="48be175f15263d25946c5fafd5d93ea0ba82b221a8a78f549856a0774ad11b96" HandleID="k8s-pod-network.48be175f15263d25946c5fafd5d93ea0ba82b221a8a78f549856a0774ad11b96" Workload="localhost-k8s-coredns--674b8bbfcf--qj2mh-eth0" Sep 13 00:17:18.383467 containerd[1444]: 2025-09-13 00:17:18.378 [INFO][5721] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="48be175f15263d25946c5fafd5d93ea0ba82b221a8a78f549856a0774ad11b96" HandleID="k8s-pod-network.48be175f15263d25946c5fafd5d93ea0ba82b221a8a78f549856a0774ad11b96" Workload="localhost-k8s-coredns--674b8bbfcf--qj2mh-eth0" Sep 13 00:17:18.383467 containerd[1444]: 2025-09-13 00:17:18.380 [INFO][5721] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:17:18.383467 containerd[1444]: 2025-09-13 00:17:18.381 [INFO][5712] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="48be175f15263d25946c5fafd5d93ea0ba82b221a8a78f549856a0774ad11b96" Sep 13 00:17:18.384028 containerd[1444]: time="2025-09-13T00:17:18.383500652Z" level=info msg="TearDown network for sandbox \"48be175f15263d25946c5fafd5d93ea0ba82b221a8a78f549856a0774ad11b96\" successfully" Sep 13 00:17:18.389211 containerd[1444]: time="2025-09-13T00:17:18.389162133Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"48be175f15263d25946c5fafd5d93ea0ba82b221a8a78f549856a0774ad11b96\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:17:18.389301 containerd[1444]: time="2025-09-13T00:17:18.389246413Z" level=info msg="RemovePodSandbox \"48be175f15263d25946c5fafd5d93ea0ba82b221a8a78f549856a0774ad11b96\" returns successfully" Sep 13 00:17:18.390217 containerd[1444]: time="2025-09-13T00:17:18.390114493Z" level=info msg="StopPodSandbox for \"2790ce029e84719cd3b1faf406cefb51cef139ee1c33ef00950cc85c9ef9ccbb\"" Sep 13 00:17:18.463583 containerd[1444]: 2025-09-13 00:17:18.425 [WARNING][5739] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2790ce029e84719cd3b1faf406cefb51cef139ee1c33ef00950cc85c9ef9ccbb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--sczp2-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"95502567-42ee-47eb-b6a6-72d10242f778", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 16, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"552863bb3112d550909c809b2193e979bc46d0d9f4acd2b78177014c14a75937", Pod:"coredns-674b8bbfcf-sczp2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia42bbcf6463", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:17:18.463583 containerd[1444]: 2025-09-13 00:17:18.426 [INFO][5739] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2790ce029e84719cd3b1faf406cefb51cef139ee1c33ef00950cc85c9ef9ccbb" Sep 13 00:17:18.463583 containerd[1444]: 2025-09-13 00:17:18.426 [INFO][5739] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2790ce029e84719cd3b1faf406cefb51cef139ee1c33ef00950cc85c9ef9ccbb" iface="eth0" netns="" Sep 13 00:17:18.463583 containerd[1444]: 2025-09-13 00:17:18.426 [INFO][5739] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2790ce029e84719cd3b1faf406cefb51cef139ee1c33ef00950cc85c9ef9ccbb" Sep 13 00:17:18.463583 containerd[1444]: 2025-09-13 00:17:18.426 [INFO][5739] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2790ce029e84719cd3b1faf406cefb51cef139ee1c33ef00950cc85c9ef9ccbb" Sep 13 00:17:18.463583 containerd[1444]: 2025-09-13 00:17:18.444 [INFO][5748] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2790ce029e84719cd3b1faf406cefb51cef139ee1c33ef00950cc85c9ef9ccbb" HandleID="k8s-pod-network.2790ce029e84719cd3b1faf406cefb51cef139ee1c33ef00950cc85c9ef9ccbb" Workload="localhost-k8s-coredns--674b8bbfcf--sczp2-eth0" Sep 13 00:17:18.463583 containerd[1444]: 2025-09-13 00:17:18.444 [INFO][5748] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:17:18.463583 containerd[1444]: 2025-09-13 00:17:18.444 [INFO][5748] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:17:18.463583 containerd[1444]: 2025-09-13 00:17:18.457 [WARNING][5748] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2790ce029e84719cd3b1faf406cefb51cef139ee1c33ef00950cc85c9ef9ccbb" HandleID="k8s-pod-network.2790ce029e84719cd3b1faf406cefb51cef139ee1c33ef00950cc85c9ef9ccbb" Workload="localhost-k8s-coredns--674b8bbfcf--sczp2-eth0" Sep 13 00:17:18.463583 containerd[1444]: 2025-09-13 00:17:18.457 [INFO][5748] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2790ce029e84719cd3b1faf406cefb51cef139ee1c33ef00950cc85c9ef9ccbb" HandleID="k8s-pod-network.2790ce029e84719cd3b1faf406cefb51cef139ee1c33ef00950cc85c9ef9ccbb" Workload="localhost-k8s-coredns--674b8bbfcf--sczp2-eth0" Sep 13 00:17:18.463583 containerd[1444]: 2025-09-13 00:17:18.459 [INFO][5748] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:17:18.463583 containerd[1444]: 2025-09-13 00:17:18.462 [INFO][5739] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2790ce029e84719cd3b1faf406cefb51cef139ee1c33ef00950cc85c9ef9ccbb" Sep 13 00:17:18.464315 containerd[1444]: time="2025-09-13T00:17:18.463637628Z" level=info msg="TearDown network for sandbox \"2790ce029e84719cd3b1faf406cefb51cef139ee1c33ef00950cc85c9ef9ccbb\" successfully" Sep 13 00:17:18.464315 containerd[1444]: time="2025-09-13T00:17:18.463665268Z" level=info msg="StopPodSandbox for \"2790ce029e84719cd3b1faf406cefb51cef139ee1c33ef00950cc85c9ef9ccbb\" returns successfully" Sep 13 00:17:18.464315 containerd[1444]: time="2025-09-13T00:17:18.464170229Z" level=info msg="RemovePodSandbox for \"2790ce029e84719cd3b1faf406cefb51cef139ee1c33ef00950cc85c9ef9ccbb\"" Sep 13 00:17:18.464315 containerd[1444]: time="2025-09-13T00:17:18.464246909Z" level=info msg="Forcibly stopping sandbox \"2790ce029e84719cd3b1faf406cefb51cef139ee1c33ef00950cc85c9ef9ccbb\"" Sep 13 00:17:18.542861 containerd[1444]: 2025-09-13 00:17:18.501 [WARNING][5766] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2790ce029e84719cd3b1faf406cefb51cef139ee1c33ef00950cc85c9ef9ccbb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--sczp2-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"95502567-42ee-47eb-b6a6-72d10242f778", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 16, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"552863bb3112d550909c809b2193e979bc46d0d9f4acd2b78177014c14a75937", Pod:"coredns-674b8bbfcf-sczp2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia42bbcf6463", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:17:18.542861 containerd[1444]: 2025-09-13 00:17:18.501 [INFO][5766] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2790ce029e84719cd3b1faf406cefb51cef139ee1c33ef00950cc85c9ef9ccbb" Sep 13 00:17:18.542861 containerd[1444]: 2025-09-13 00:17:18.501 [INFO][5766] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2790ce029e84719cd3b1faf406cefb51cef139ee1c33ef00950cc85c9ef9ccbb" iface="eth0" netns="" Sep 13 00:17:18.542861 containerd[1444]: 2025-09-13 00:17:18.501 [INFO][5766] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2790ce029e84719cd3b1faf406cefb51cef139ee1c33ef00950cc85c9ef9ccbb" Sep 13 00:17:18.542861 containerd[1444]: 2025-09-13 00:17:18.501 [INFO][5766] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2790ce029e84719cd3b1faf406cefb51cef139ee1c33ef00950cc85c9ef9ccbb" Sep 13 00:17:18.542861 containerd[1444]: 2025-09-13 00:17:18.527 [INFO][5775] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2790ce029e84719cd3b1faf406cefb51cef139ee1c33ef00950cc85c9ef9ccbb" HandleID="k8s-pod-network.2790ce029e84719cd3b1faf406cefb51cef139ee1c33ef00950cc85c9ef9ccbb" Workload="localhost-k8s-coredns--674b8bbfcf--sczp2-eth0" Sep 13 00:17:18.542861 containerd[1444]: 2025-09-13 00:17:18.527 [INFO][5775] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:17:18.542861 containerd[1444]: 2025-09-13 00:17:18.527 [INFO][5775] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:17:18.542861 containerd[1444]: 2025-09-13 00:17:18.537 [WARNING][5775] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2790ce029e84719cd3b1faf406cefb51cef139ee1c33ef00950cc85c9ef9ccbb" HandleID="k8s-pod-network.2790ce029e84719cd3b1faf406cefb51cef139ee1c33ef00950cc85c9ef9ccbb" Workload="localhost-k8s-coredns--674b8bbfcf--sczp2-eth0" Sep 13 00:17:18.542861 containerd[1444]: 2025-09-13 00:17:18.537 [INFO][5775] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2790ce029e84719cd3b1faf406cefb51cef139ee1c33ef00950cc85c9ef9ccbb" HandleID="k8s-pod-network.2790ce029e84719cd3b1faf406cefb51cef139ee1c33ef00950cc85c9ef9ccbb" Workload="localhost-k8s-coredns--674b8bbfcf--sczp2-eth0" Sep 13 00:17:18.542861 containerd[1444]: 2025-09-13 00:17:18.539 [INFO][5775] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:17:18.542861 containerd[1444]: 2025-09-13 00:17:18.541 [INFO][5766] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2790ce029e84719cd3b1faf406cefb51cef139ee1c33ef00950cc85c9ef9ccbb" Sep 13 00:17:18.542861 containerd[1444]: time="2025-09-13T00:17:18.542846245Z" level=info msg="TearDown network for sandbox \"2790ce029e84719cd3b1faf406cefb51cef139ee1c33ef00950cc85c9ef9ccbb\" successfully" Sep 13 00:17:18.549066 containerd[1444]: time="2025-09-13T00:17:18.549024046Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2790ce029e84719cd3b1faf406cefb51cef139ee1c33ef00950cc85c9ef9ccbb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:17:18.549473 containerd[1444]: time="2025-09-13T00:17:18.549099686Z" level=info msg="RemovePodSandbox \"2790ce029e84719cd3b1faf406cefb51cef139ee1c33ef00950cc85c9ef9ccbb\" returns successfully" Sep 13 00:17:18.549618 containerd[1444]: time="2025-09-13T00:17:18.549572527Z" level=info msg="StopPodSandbox for \"b46982dd6853b5e80e42bf4510f8168884956bedb7fb50987d00163f9bd6678a\"" Sep 13 00:17:18.624394 containerd[1444]: 2025-09-13 00:17:18.581 [WARNING][5793] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b46982dd6853b5e80e42bf4510f8168884956bedb7fb50987d00163f9bd6678a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--fd6458858--dk8sk-eth0", GenerateName:"calico-kube-controllers-fd6458858-", Namespace:"calico-system", SelfLink:"", UID:"68701d44-98db-4175-b680-f21da2b19c48", ResourceVersion:"1144", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 16, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"fd6458858", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1020e36dbd60caf926d432a588d129cfe36dcc8a183375db37db834a7313280c", Pod:"calico-kube-controllers-fd6458858-dk8sk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia9d34eff079", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:17:18.624394 containerd[1444]: 2025-09-13 00:17:18.582 [INFO][5793] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b46982dd6853b5e80e42bf4510f8168884956bedb7fb50987d00163f9bd6678a" Sep 13 00:17:18.624394 containerd[1444]: 2025-09-13 00:17:18.582 [INFO][5793] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b46982dd6853b5e80e42bf4510f8168884956bedb7fb50987d00163f9bd6678a" iface="eth0" netns="" Sep 13 00:17:18.624394 containerd[1444]: 2025-09-13 00:17:18.582 [INFO][5793] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b46982dd6853b5e80e42bf4510f8168884956bedb7fb50987d00163f9bd6678a" Sep 13 00:17:18.624394 containerd[1444]: 2025-09-13 00:17:18.582 [INFO][5793] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b46982dd6853b5e80e42bf4510f8168884956bedb7fb50987d00163f9bd6678a" Sep 13 00:17:18.624394 containerd[1444]: 2025-09-13 00:17:18.611 [INFO][5803] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b46982dd6853b5e80e42bf4510f8168884956bedb7fb50987d00163f9bd6678a" HandleID="k8s-pod-network.b46982dd6853b5e80e42bf4510f8168884956bedb7fb50987d00163f9bd6678a" Workload="localhost-k8s-calico--kube--controllers--fd6458858--dk8sk-eth0" Sep 13 00:17:18.624394 containerd[1444]: 2025-09-13 00:17:18.611 [INFO][5803] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:17:18.624394 containerd[1444]: 2025-09-13 00:17:18.611 [INFO][5803] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:17:18.624394 containerd[1444]: 2025-09-13 00:17:18.619 [WARNING][5803] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b46982dd6853b5e80e42bf4510f8168884956bedb7fb50987d00163f9bd6678a" HandleID="k8s-pod-network.b46982dd6853b5e80e42bf4510f8168884956bedb7fb50987d00163f9bd6678a" Workload="localhost-k8s-calico--kube--controllers--fd6458858--dk8sk-eth0" Sep 13 00:17:18.624394 containerd[1444]: 2025-09-13 00:17:18.619 [INFO][5803] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b46982dd6853b5e80e42bf4510f8168884956bedb7fb50987d00163f9bd6678a" HandleID="k8s-pod-network.b46982dd6853b5e80e42bf4510f8168884956bedb7fb50987d00163f9bd6678a" Workload="localhost-k8s-calico--kube--controllers--fd6458858--dk8sk-eth0" Sep 13 00:17:18.624394 containerd[1444]: 2025-09-13 00:17:18.621 [INFO][5803] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:17:18.624394 containerd[1444]: 2025-09-13 00:17:18.622 [INFO][5793] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b46982dd6853b5e80e42bf4510f8168884956bedb7fb50987d00163f9bd6678a" Sep 13 00:17:18.624394 containerd[1444]: time="2025-09-13T00:17:18.624269582Z" level=info msg="TearDown network for sandbox \"b46982dd6853b5e80e42bf4510f8168884956bedb7fb50987d00163f9bd6678a\" successfully" Sep 13 00:17:18.624394 containerd[1444]: time="2025-09-13T00:17:18.624294542Z" level=info msg="StopPodSandbox for \"b46982dd6853b5e80e42bf4510f8168884956bedb7fb50987d00163f9bd6678a\" returns successfully" Sep 13 00:17:18.625145 containerd[1444]: time="2025-09-13T00:17:18.625020102Z" level=info msg="RemovePodSandbox for \"b46982dd6853b5e80e42bf4510f8168884956bedb7fb50987d00163f9bd6678a\"" Sep 13 00:17:18.625145 containerd[1444]: time="2025-09-13T00:17:18.625051902Z" level=info msg="Forcibly stopping sandbox \"b46982dd6853b5e80e42bf4510f8168884956bedb7fb50987d00163f9bd6678a\"" Sep 13 00:17:18.689663 containerd[1444]: 2025-09-13 00:17:18.657 [WARNING][5822] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b46982dd6853b5e80e42bf4510f8168884956bedb7fb50987d00163f9bd6678a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--fd6458858--dk8sk-eth0", GenerateName:"calico-kube-controllers-fd6458858-", Namespace:"calico-system", SelfLink:"", UID:"68701d44-98db-4175-b680-f21da2b19c48", ResourceVersion:"1144", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 16, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"fd6458858", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1020e36dbd60caf926d432a588d129cfe36dcc8a183375db37db834a7313280c", Pod:"calico-kube-controllers-fd6458858-dk8sk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia9d34eff079", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:17:18.689663 containerd[1444]: 2025-09-13 00:17:18.657 [INFO][5822] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b46982dd6853b5e80e42bf4510f8168884956bedb7fb50987d00163f9bd6678a" Sep 13 00:17:18.689663 containerd[1444]: 2025-09-13 00:17:18.657 [INFO][5822] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b46982dd6853b5e80e42bf4510f8168884956bedb7fb50987d00163f9bd6678a" iface="eth0" netns="" Sep 13 00:17:18.689663 containerd[1444]: 2025-09-13 00:17:18.657 [INFO][5822] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b46982dd6853b5e80e42bf4510f8168884956bedb7fb50987d00163f9bd6678a" Sep 13 00:17:18.689663 containerd[1444]: 2025-09-13 00:17:18.657 [INFO][5822] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b46982dd6853b5e80e42bf4510f8168884956bedb7fb50987d00163f9bd6678a" Sep 13 00:17:18.689663 containerd[1444]: 2025-09-13 00:17:18.675 [INFO][5831] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b46982dd6853b5e80e42bf4510f8168884956bedb7fb50987d00163f9bd6678a" HandleID="k8s-pod-network.b46982dd6853b5e80e42bf4510f8168884956bedb7fb50987d00163f9bd6678a" Workload="localhost-k8s-calico--kube--controllers--fd6458858--dk8sk-eth0" Sep 13 00:17:18.689663 containerd[1444]: 2025-09-13 00:17:18.676 [INFO][5831] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:17:18.689663 containerd[1444]: 2025-09-13 00:17:18.676 [INFO][5831] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:17:18.689663 containerd[1444]: 2025-09-13 00:17:18.684 [WARNING][5831] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b46982dd6853b5e80e42bf4510f8168884956bedb7fb50987d00163f9bd6678a" HandleID="k8s-pod-network.b46982dd6853b5e80e42bf4510f8168884956bedb7fb50987d00163f9bd6678a" Workload="localhost-k8s-calico--kube--controllers--fd6458858--dk8sk-eth0" Sep 13 00:17:18.689663 containerd[1444]: 2025-09-13 00:17:18.684 [INFO][5831] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b46982dd6853b5e80e42bf4510f8168884956bedb7fb50987d00163f9bd6678a" HandleID="k8s-pod-network.b46982dd6853b5e80e42bf4510f8168884956bedb7fb50987d00163f9bd6678a" Workload="localhost-k8s-calico--kube--controllers--fd6458858--dk8sk-eth0" Sep 13 00:17:18.689663 containerd[1444]: 2025-09-13 00:17:18.686 [INFO][5831] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:17:18.689663 containerd[1444]: 2025-09-13 00:17:18.688 [INFO][5822] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b46982dd6853b5e80e42bf4510f8168884956bedb7fb50987d00163f9bd6678a" Sep 13 00:17:18.690274 containerd[1444]: time="2025-09-13T00:17:18.689703716Z" level=info msg="TearDown network for sandbox \"b46982dd6853b5e80e42bf4510f8168884956bedb7fb50987d00163f9bd6678a\" successfully" Sep 13 00:17:18.692658 containerd[1444]: time="2025-09-13T00:17:18.692624197Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b46982dd6853b5e80e42bf4510f8168884956bedb7fb50987d00163f9bd6678a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:17:18.692731 containerd[1444]: time="2025-09-13T00:17:18.692692557Z" level=info msg="RemovePodSandbox \"b46982dd6853b5e80e42bf4510f8168884956bedb7fb50987d00163f9bd6678a\" returns successfully" Sep 13 00:17:18.693229 containerd[1444]: time="2025-09-13T00:17:18.693202197Z" level=info msg="StopPodSandbox for \"5f98a97fe507fc5b0a47df8b5f016508e0f6a34fe089e993814fe3cc835fd60a\"" Sep 13 00:17:18.764523 containerd[1444]: 2025-09-13 00:17:18.733 [WARNING][5849] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5f98a97fe507fc5b0a47df8b5f016508e0f6a34fe089e993814fe3cc835fd60a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--859b9fb76c--hnqvt-eth0", GenerateName:"calico-apiserver-859b9fb76c-", Namespace:"calico-apiserver", SelfLink:"", UID:"91ec7ae0-fffa-447c-b970-cf0f2591c90d", ResourceVersion:"1099", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 16, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"859b9fb76c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1f630dcccce7b0ba35bf91308813256abe57dfdecc67d61c19ed2990d6fc2935", Pod:"calico-apiserver-859b9fb76c-hnqvt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7c24b383a16", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:17:18.764523 containerd[1444]: 2025-09-13 00:17:18.733 [INFO][5849] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5f98a97fe507fc5b0a47df8b5f016508e0f6a34fe089e993814fe3cc835fd60a" Sep 13 00:17:18.764523 containerd[1444]: 2025-09-13 00:17:18.733 [INFO][5849] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5f98a97fe507fc5b0a47df8b5f016508e0f6a34fe089e993814fe3cc835fd60a" iface="eth0" netns="" Sep 13 00:17:18.764523 containerd[1444]: 2025-09-13 00:17:18.733 [INFO][5849] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5f98a97fe507fc5b0a47df8b5f016508e0f6a34fe089e993814fe3cc835fd60a" Sep 13 00:17:18.764523 containerd[1444]: 2025-09-13 00:17:18.733 [INFO][5849] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5f98a97fe507fc5b0a47df8b5f016508e0f6a34fe089e993814fe3cc835fd60a" Sep 13 00:17:18.764523 containerd[1444]: 2025-09-13 00:17:18.751 [INFO][5859] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5f98a97fe507fc5b0a47df8b5f016508e0f6a34fe089e993814fe3cc835fd60a" HandleID="k8s-pod-network.5f98a97fe507fc5b0a47df8b5f016508e0f6a34fe089e993814fe3cc835fd60a" Workload="localhost-k8s-calico--apiserver--859b9fb76c--hnqvt-eth0" Sep 13 00:17:18.764523 containerd[1444]: 2025-09-13 00:17:18.751 [INFO][5859] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:17:18.764523 containerd[1444]: 2025-09-13 00:17:18.751 [INFO][5859] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:17:18.764523 containerd[1444]: 2025-09-13 00:17:18.760 [WARNING][5859] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5f98a97fe507fc5b0a47df8b5f016508e0f6a34fe089e993814fe3cc835fd60a" HandleID="k8s-pod-network.5f98a97fe507fc5b0a47df8b5f016508e0f6a34fe089e993814fe3cc835fd60a" Workload="localhost-k8s-calico--apiserver--859b9fb76c--hnqvt-eth0" Sep 13 00:17:18.764523 containerd[1444]: 2025-09-13 00:17:18.760 [INFO][5859] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5f98a97fe507fc5b0a47df8b5f016508e0f6a34fe089e993814fe3cc835fd60a" HandleID="k8s-pod-network.5f98a97fe507fc5b0a47df8b5f016508e0f6a34fe089e993814fe3cc835fd60a" Workload="localhost-k8s-calico--apiserver--859b9fb76c--hnqvt-eth0" Sep 13 00:17:18.764523 containerd[1444]: 2025-09-13 00:17:18.761 [INFO][5859] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:17:18.764523 containerd[1444]: 2025-09-13 00:17:18.763 [INFO][5849] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5f98a97fe507fc5b0a47df8b5f016508e0f6a34fe089e993814fe3cc835fd60a" Sep 13 00:17:18.765188 containerd[1444]: time="2025-09-13T00:17:18.764564972Z" level=info msg="TearDown network for sandbox \"5f98a97fe507fc5b0a47df8b5f016508e0f6a34fe089e993814fe3cc835fd60a\" successfully" Sep 13 00:17:18.765188 containerd[1444]: time="2025-09-13T00:17:18.764591052Z" level=info msg="StopPodSandbox for \"5f98a97fe507fc5b0a47df8b5f016508e0f6a34fe089e993814fe3cc835fd60a\" returns successfully" Sep 13 00:17:18.766049 containerd[1444]: time="2025-09-13T00:17:18.765722932Z" level=info msg="RemovePodSandbox for \"5f98a97fe507fc5b0a47df8b5f016508e0f6a34fe089e993814fe3cc835fd60a\"" Sep 13 00:17:18.766049 containerd[1444]: time="2025-09-13T00:17:18.765754612Z" level=info msg="Forcibly stopping sandbox \"5f98a97fe507fc5b0a47df8b5f016508e0f6a34fe089e993814fe3cc835fd60a\"" Sep 13 00:17:18.834755 containerd[1444]: 2025-09-13 00:17:18.797 [WARNING][5877] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5f98a97fe507fc5b0a47df8b5f016508e0f6a34fe089e993814fe3cc835fd60a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--859b9fb76c--hnqvt-eth0", GenerateName:"calico-apiserver-859b9fb76c-", Namespace:"calico-apiserver", SelfLink:"", UID:"91ec7ae0-fffa-447c-b970-cf0f2591c90d", ResourceVersion:"1099", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 16, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"859b9fb76c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1f630dcccce7b0ba35bf91308813256abe57dfdecc67d61c19ed2990d6fc2935", Pod:"calico-apiserver-859b9fb76c-hnqvt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7c24b383a16", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:17:18.834755 containerd[1444]: 2025-09-13 00:17:18.797 [INFO][5877] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5f98a97fe507fc5b0a47df8b5f016508e0f6a34fe089e993814fe3cc835fd60a" Sep 13 00:17:18.834755 containerd[1444]: 2025-09-13 00:17:18.797 [INFO][5877] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5f98a97fe507fc5b0a47df8b5f016508e0f6a34fe089e993814fe3cc835fd60a" iface="eth0" netns="" Sep 13 00:17:18.834755 containerd[1444]: 2025-09-13 00:17:18.797 [INFO][5877] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5f98a97fe507fc5b0a47df8b5f016508e0f6a34fe089e993814fe3cc835fd60a" Sep 13 00:17:18.834755 containerd[1444]: 2025-09-13 00:17:18.797 [INFO][5877] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5f98a97fe507fc5b0a47df8b5f016508e0f6a34fe089e993814fe3cc835fd60a" Sep 13 00:17:18.834755 containerd[1444]: 2025-09-13 00:17:18.818 [INFO][5887] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5f98a97fe507fc5b0a47df8b5f016508e0f6a34fe089e993814fe3cc835fd60a" HandleID="k8s-pod-network.5f98a97fe507fc5b0a47df8b5f016508e0f6a34fe089e993814fe3cc835fd60a" Workload="localhost-k8s-calico--apiserver--859b9fb76c--hnqvt-eth0" Sep 13 00:17:18.834755 containerd[1444]: 2025-09-13 00:17:18.818 [INFO][5887] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:17:18.834755 containerd[1444]: 2025-09-13 00:17:18.818 [INFO][5887] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:17:18.834755 containerd[1444]: 2025-09-13 00:17:18.828 [WARNING][5887] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5f98a97fe507fc5b0a47df8b5f016508e0f6a34fe089e993814fe3cc835fd60a" HandleID="k8s-pod-network.5f98a97fe507fc5b0a47df8b5f016508e0f6a34fe089e993814fe3cc835fd60a" Workload="localhost-k8s-calico--apiserver--859b9fb76c--hnqvt-eth0" Sep 13 00:17:18.834755 containerd[1444]: 2025-09-13 00:17:18.828 [INFO][5887] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5f98a97fe507fc5b0a47df8b5f016508e0f6a34fe089e993814fe3cc835fd60a" HandleID="k8s-pod-network.5f98a97fe507fc5b0a47df8b5f016508e0f6a34fe089e993814fe3cc835fd60a" Workload="localhost-k8s-calico--apiserver--859b9fb76c--hnqvt-eth0" Sep 13 00:17:18.834755 containerd[1444]: 2025-09-13 00:17:18.829 [INFO][5887] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:17:18.834755 containerd[1444]: 2025-09-13 00:17:18.831 [INFO][5877] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5f98a97fe507fc5b0a47df8b5f016508e0f6a34fe089e993814fe3cc835fd60a" Sep 13 00:17:18.834755 containerd[1444]: time="2025-09-13T00:17:18.832678066Z" level=info msg="TearDown network for sandbox \"5f98a97fe507fc5b0a47df8b5f016508e0f6a34fe089e993814fe3cc835fd60a\" successfully" Sep 13 00:17:18.836810 containerd[1444]: time="2025-09-13T00:17:18.836776267Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5f98a97fe507fc5b0a47df8b5f016508e0f6a34fe089e993814fe3cc835fd60a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:17:18.836871 containerd[1444]: time="2025-09-13T00:17:18.836840067Z" level=info msg="RemovePodSandbox \"5f98a97fe507fc5b0a47df8b5f016508e0f6a34fe089e993814fe3cc835fd60a\" returns successfully" Sep 13 00:17:18.837320 containerd[1444]: time="2025-09-13T00:17:18.837294747Z" level=info msg="StopPodSandbox for \"61578558f0fd523090e1f7292c78ab7dcd7551b3847bedd499846f518ca7ef01\"" Sep 13 00:17:18.901464 containerd[1444]: 2025-09-13 00:17:18.869 [WARNING][5905] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="61578558f0fd523090e1f7292c78ab7dcd7551b3847bedd499846f518ca7ef01" WorkloadEndpoint="localhost-k8s-whisker--85c5c448c9--pnhk4-eth0" Sep 13 00:17:18.901464 containerd[1444]: 2025-09-13 00:17:18.869 [INFO][5905] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="61578558f0fd523090e1f7292c78ab7dcd7551b3847bedd499846f518ca7ef01" Sep 13 00:17:18.901464 containerd[1444]: 2025-09-13 00:17:18.869 [INFO][5905] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="61578558f0fd523090e1f7292c78ab7dcd7551b3847bedd499846f518ca7ef01" iface="eth0" netns="" Sep 13 00:17:18.901464 containerd[1444]: 2025-09-13 00:17:18.869 [INFO][5905] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="61578558f0fd523090e1f7292c78ab7dcd7551b3847bedd499846f518ca7ef01" Sep 13 00:17:18.901464 containerd[1444]: 2025-09-13 00:17:18.869 [INFO][5905] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="61578558f0fd523090e1f7292c78ab7dcd7551b3847bedd499846f518ca7ef01" Sep 13 00:17:18.901464 containerd[1444]: 2025-09-13 00:17:18.886 [INFO][5914] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="61578558f0fd523090e1f7292c78ab7dcd7551b3847bedd499846f518ca7ef01" HandleID="k8s-pod-network.61578558f0fd523090e1f7292c78ab7dcd7551b3847bedd499846f518ca7ef01" Workload="localhost-k8s-whisker--85c5c448c9--pnhk4-eth0" Sep 13 00:17:18.901464 containerd[1444]: 2025-09-13 00:17:18.886 [INFO][5914] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:17:18.901464 containerd[1444]: 2025-09-13 00:17:18.886 [INFO][5914] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:17:18.901464 containerd[1444]: 2025-09-13 00:17:18.896 [WARNING][5914] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="61578558f0fd523090e1f7292c78ab7dcd7551b3847bedd499846f518ca7ef01" HandleID="k8s-pod-network.61578558f0fd523090e1f7292c78ab7dcd7551b3847bedd499846f518ca7ef01" Workload="localhost-k8s-whisker--85c5c448c9--pnhk4-eth0" Sep 13 00:17:18.901464 containerd[1444]: 2025-09-13 00:17:18.897 [INFO][5914] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="61578558f0fd523090e1f7292c78ab7dcd7551b3847bedd499846f518ca7ef01" HandleID="k8s-pod-network.61578558f0fd523090e1f7292c78ab7dcd7551b3847bedd499846f518ca7ef01" Workload="localhost-k8s-whisker--85c5c448c9--pnhk4-eth0" Sep 13 00:17:18.901464 containerd[1444]: 2025-09-13 00:17:18.898 [INFO][5914] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:17:18.901464 containerd[1444]: 2025-09-13 00:17:18.899 [INFO][5905] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="61578558f0fd523090e1f7292c78ab7dcd7551b3847bedd499846f518ca7ef01" Sep 13 00:17:18.901897 containerd[1444]: time="2025-09-13T00:17:18.901499321Z" level=info msg="TearDown network for sandbox \"61578558f0fd523090e1f7292c78ab7dcd7551b3847bedd499846f518ca7ef01\" successfully" Sep 13 00:17:18.901897 containerd[1444]: time="2025-09-13T00:17:18.901523001Z" level=info msg="StopPodSandbox for \"61578558f0fd523090e1f7292c78ab7dcd7551b3847bedd499846f518ca7ef01\" returns successfully" Sep 13 00:17:18.902070 containerd[1444]: time="2025-09-13T00:17:18.902044721Z" level=info msg="RemovePodSandbox for \"61578558f0fd523090e1f7292c78ab7dcd7551b3847bedd499846f518ca7ef01\"" Sep 13 00:17:18.902115 containerd[1444]: time="2025-09-13T00:17:18.902076441Z" level=info msg="Forcibly stopping sandbox \"61578558f0fd523090e1f7292c78ab7dcd7551b3847bedd499846f518ca7ef01\"" Sep 13 00:17:18.968454 containerd[1444]: 2025-09-13 00:17:18.933 [WARNING][5933] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="61578558f0fd523090e1f7292c78ab7dcd7551b3847bedd499846f518ca7ef01" WorkloadEndpoint="localhost-k8s-whisker--85c5c448c9--pnhk4-eth0" Sep 13 00:17:18.968454 containerd[1444]: 2025-09-13 00:17:18.933 [INFO][5933] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="61578558f0fd523090e1f7292c78ab7dcd7551b3847bedd499846f518ca7ef01" Sep 13 00:17:18.968454 containerd[1444]: 2025-09-13 00:17:18.933 [INFO][5933] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="61578558f0fd523090e1f7292c78ab7dcd7551b3847bedd499846f518ca7ef01" iface="eth0" netns="" Sep 13 00:17:18.968454 containerd[1444]: 2025-09-13 00:17:18.933 [INFO][5933] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="61578558f0fd523090e1f7292c78ab7dcd7551b3847bedd499846f518ca7ef01" Sep 13 00:17:18.968454 containerd[1444]: 2025-09-13 00:17:18.933 [INFO][5933] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="61578558f0fd523090e1f7292c78ab7dcd7551b3847bedd499846f518ca7ef01" Sep 13 00:17:18.968454 containerd[1444]: 2025-09-13 00:17:18.955 [INFO][5941] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="61578558f0fd523090e1f7292c78ab7dcd7551b3847bedd499846f518ca7ef01" HandleID="k8s-pod-network.61578558f0fd523090e1f7292c78ab7dcd7551b3847bedd499846f518ca7ef01" Workload="localhost-k8s-whisker--85c5c448c9--pnhk4-eth0" Sep 13 00:17:18.968454 containerd[1444]: 2025-09-13 00:17:18.955 [INFO][5941] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:17:18.968454 containerd[1444]: 2025-09-13 00:17:18.955 [INFO][5941] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:17:18.968454 containerd[1444]: 2025-09-13 00:17:18.963 [WARNING][5941] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="61578558f0fd523090e1f7292c78ab7dcd7551b3847bedd499846f518ca7ef01" HandleID="k8s-pod-network.61578558f0fd523090e1f7292c78ab7dcd7551b3847bedd499846f518ca7ef01" Workload="localhost-k8s-whisker--85c5c448c9--pnhk4-eth0" Sep 13 00:17:18.968454 containerd[1444]: 2025-09-13 00:17:18.963 [INFO][5941] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="61578558f0fd523090e1f7292c78ab7dcd7551b3847bedd499846f518ca7ef01" HandleID="k8s-pod-network.61578558f0fd523090e1f7292c78ab7dcd7551b3847bedd499846f518ca7ef01" Workload="localhost-k8s-whisker--85c5c448c9--pnhk4-eth0" Sep 13 00:17:18.968454 containerd[1444]: 2025-09-13 00:17:18.965 [INFO][5941] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:17:18.968454 containerd[1444]: 2025-09-13 00:17:18.966 [INFO][5933] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="61578558f0fd523090e1f7292c78ab7dcd7551b3847bedd499846f518ca7ef01" Sep 13 00:17:18.969021 containerd[1444]: time="2025-09-13T00:17:18.968465655Z" level=info msg="TearDown network for sandbox \"61578558f0fd523090e1f7292c78ab7dcd7551b3847bedd499846f518ca7ef01\" successfully" Sep 13 00:17:18.975914 containerd[1444]: time="2025-09-13T00:17:18.975685536Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"61578558f0fd523090e1f7292c78ab7dcd7551b3847bedd499846f518ca7ef01\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:17:18.975914 containerd[1444]: time="2025-09-13T00:17:18.975896536Z" level=info msg="RemovePodSandbox \"61578558f0fd523090e1f7292c78ab7dcd7551b3847bedd499846f518ca7ef01\" returns successfully" Sep 13 00:17:22.254321 systemd[1]: Started sshd@12-10.0.0.130:22-10.0.0.1:53562.service - OpenSSH per-connection server daemon (10.0.0.1:53562). Sep 13 00:17:22.308911 sshd[5975]: Accepted publickey for core from 10.0.0.1 port 53562 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw Sep 13 00:17:22.310423 sshd[5975]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:17:22.314076 systemd-logind[1419]: New session 13 of user core. Sep 13 00:17:22.325760 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 13 00:17:22.535808 sshd[5975]: pam_unix(sshd:session): session closed for user core Sep 13 00:17:22.548050 systemd[1]: sshd@12-10.0.0.130:22-10.0.0.1:53562.service: Deactivated successfully. Sep 13 00:17:22.549788 systemd[1]: session-13.scope: Deactivated successfully. Sep 13 00:17:22.551357 systemd-logind[1419]: Session 13 logged out. Waiting for processes to exit. Sep 13 00:17:22.559836 systemd[1]: Started sshd@13-10.0.0.130:22-10.0.0.1:53574.service - OpenSSH per-connection server daemon (10.0.0.1:53574). Sep 13 00:17:22.560979 systemd-logind[1419]: Removed session 13. Sep 13 00:17:22.588983 sshd[5990]: Accepted publickey for core from 10.0.0.1 port 53574 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw Sep 13 00:17:22.590519 sshd[5990]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:17:22.595560 systemd-logind[1419]: New session 14 of user core. Sep 13 00:17:22.606745 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 13 00:17:22.798248 sshd[5990]: pam_unix(sshd:session): session closed for user core Sep 13 00:17:22.808329 systemd[1]: sshd@13-10.0.0.130:22-10.0.0.1:53574.service: Deactivated successfully. Sep 13 00:17:22.810183 systemd[1]: session-14.scope: Deactivated successfully. 
Sep 13 00:17:22.812115 systemd-logind[1419]: Session 14 logged out. Waiting for processes to exit.
Sep 13 00:17:22.820912 systemd[1]: Started sshd@14-10.0.0.130:22-10.0.0.1:53576.service - OpenSSH per-connection server daemon (10.0.0.1:53576).
Sep 13 00:17:22.822806 systemd-logind[1419]: Removed session 14.
Sep 13 00:17:22.856132 sshd[6002]: Accepted publickey for core from 10.0.0.1 port 53576 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw
Sep 13 00:17:22.857461 sshd[6002]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:17:22.861979 systemd-logind[1419]: New session 15 of user core.
Sep 13 00:17:22.871751 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 13 00:17:23.402393 sshd[6002]: pam_unix(sshd:session): session closed for user core
Sep 13 00:17:23.410033 systemd[1]: sshd@14-10.0.0.130:22-10.0.0.1:53576.service: Deactivated successfully.
Sep 13 00:17:23.414533 systemd[1]: session-15.scope: Deactivated successfully.
Sep 13 00:17:23.416344 systemd-logind[1419]: Session 15 logged out. Waiting for processes to exit.
Sep 13 00:17:23.421861 systemd[1]: Started sshd@15-10.0.0.130:22-10.0.0.1:53592.service - OpenSSH per-connection server daemon (10.0.0.1:53592).
Sep 13 00:17:23.423185 systemd-logind[1419]: Removed session 15.
Sep 13 00:17:23.465016 sshd[6022]: Accepted publickey for core from 10.0.0.1 port 53592 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw
Sep 13 00:17:23.466425 sshd[6022]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:17:23.470768 systemd-logind[1419]: New session 16 of user core.
Sep 13 00:17:23.476748 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 13 00:17:23.913218 sshd[6022]: pam_unix(sshd:session): session closed for user core
Sep 13 00:17:23.922578 systemd[1]: sshd@15-10.0.0.130:22-10.0.0.1:53592.service: Deactivated successfully.
Sep 13 00:17:23.924679 systemd[1]: session-16.scope: Deactivated successfully.
Sep 13 00:17:23.925927 systemd-logind[1419]: Session 16 logged out. Waiting for processes to exit.
Sep 13 00:17:23.941164 systemd[1]: Started sshd@16-10.0.0.130:22-10.0.0.1:53608.service - OpenSSH per-connection server daemon (10.0.0.1:53608).
Sep 13 00:17:23.941941 systemd-logind[1419]: Removed session 16.
Sep 13 00:17:23.973848 sshd[6036]: Accepted publickey for core from 10.0.0.1 port 53608 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw
Sep 13 00:17:23.975097 sshd[6036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:17:23.978551 systemd-logind[1419]: New session 17 of user core.
Sep 13 00:17:23.984739 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 13 00:17:24.118442 sshd[6036]: pam_unix(sshd:session): session closed for user core
Sep 13 00:17:24.121364 systemd[1]: sshd@16-10.0.0.130:22-10.0.0.1:53608.service: Deactivated successfully.
Sep 13 00:17:24.123116 systemd[1]: session-17.scope: Deactivated successfully.
Sep 13 00:17:24.123664 systemd-logind[1419]: Session 17 logged out. Waiting for processes to exit.
Sep 13 00:17:24.124482 systemd-logind[1419]: Removed session 17.
Sep 13 00:17:28.305015 kubelet[2490]: I0913 00:17:28.304971 2490 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 13 00:17:29.129885 systemd[1]: Started sshd@17-10.0.0.130:22-10.0.0.1:53616.service - OpenSSH per-connection server daemon (10.0.0.1:53616).
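Editor's note on the SSH entries: sessions 13 through 17 all follow one lifecycle, and 18 and 19 below repeat it: a per-connection sshd@N.service is started for the incoming TCP connection, pam_unix opens the session, systemd-logind registers session N and systemd places it in session-N.scope, then everything is deactivated in reverse order at disconnect. The "New session N" / "Removed session N" pairs can be matched mechanically to measure session lifetimes; a small Go sketch, assuming journal lines in exactly the shape shown here (the regexes and the hard-coded year 2025 are illustrative assumptions, since syslog-style timestamps omit the year):

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"time"
)

// Matches the timestamp prefix plus logind's open/close messages, in the
// exact form they appear in this journal (an assumption, not a stable API).
var (
	openRe  = regexp.MustCompile(`^(\w+ \d+ [\d:.]+) systemd-logind\[\d+\]: New session (\d+) of user`)
	closeRe = regexp.MustCompile(`^(\w+ \d+ [\d:.]+) systemd-logind\[\d+\]: Removed session (\d+)\.`)
)

// parseTS parses the syslog-style prefix; the year is absent from the log,
// so 2025 is appended here purely for illustration.
func parseTS(s string) (time.Time, error) {
	return time.Parse("Jan 2 15:04:05.000000 2006", s+" 2025")
}

func main() {
	opened := map[string]time.Time{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		line := sc.Text()
		if m := openRe.FindStringSubmatch(line); m != nil {
			if t, err := parseTS(m[1]); err == nil {
				opened[m[2]] = t // session ID -> open time
			}
		} else if m := closeRe.FindStringSubmatch(line); m != nil {
			if t, err := parseTS(m[1]); err == nil {
				if start, ok := opened[m[2]]; ok {
					fmt.Printf("session %s lived %v\n", m[2], t.Sub(start))
					delete(opened, m[2])
				}
			}
		}
	}
}

Fed the lines above on stdin, this would report, for example, that session 13 lived roughly a quarter of a second (opened 00:17:22.314076, removed 00:17:22.560979), consistent with short scripted connections rather than interactive logins.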
Sep 13 00:17:29.162150 sshd[6057]: Accepted publickey for core from 10.0.0.1 port 53616 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw
Sep 13 00:17:29.163379 sshd[6057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:17:29.166672 systemd-logind[1419]: New session 18 of user core.
Sep 13 00:17:29.177827 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 13 00:17:29.333988 sshd[6057]: pam_unix(sshd:session): session closed for user core
Sep 13 00:17:29.339656 systemd[1]: sshd@17-10.0.0.130:22-10.0.0.1:53616.service: Deactivated successfully.
Sep 13 00:17:29.343998 systemd[1]: session-18.scope: Deactivated successfully.
Sep 13 00:17:29.344644 systemd-logind[1419]: Session 18 logged out. Waiting for processes to exit.
Sep 13 00:17:29.347358 systemd-logind[1419]: Removed session 18.
Sep 13 00:17:34.345645 systemd[1]: Started sshd@18-10.0.0.130:22-10.0.0.1:51544.service - OpenSSH per-connection server daemon (10.0.0.1:51544).
Sep 13 00:17:34.390516 sshd[6079]: Accepted publickey for core from 10.0.0.1 port 51544 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw
Sep 13 00:17:34.392017 sshd[6079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:17:34.395816 systemd-logind[1419]: New session 19 of user core.
Sep 13 00:17:34.410914 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 13 00:17:34.784489 sshd[6079]: pam_unix(sshd:session): session closed for user core
Sep 13 00:17:34.792308 systemd[1]: sshd@18-10.0.0.130:22-10.0.0.1:51544.service: Deactivated successfully.
Sep 13 00:17:34.795459 systemd[1]: session-19.scope: Deactivated successfully.
Sep 13 00:17:34.796341 systemd-logind[1419]: Session 19 logged out. Waiting for processes to exit.
Sep 13 00:17:34.797308 systemd-logind[1419]: Removed session 19.
Sep 13 00:17:35.669128 kubelet[2490]: E0913 00:17:35.669073 2490 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
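Editor's note on the final kubelet error: the glibc stub resolver only honors the first three nameserver entries in resolv.conf (MAXNS is 3), so when a pod's effective DNS configuration accumulates more than three servers, kubelet truncates the list and logs the survivors, here 1.1.1.1, 1.0.0.1, and 8.8.8.8. A minimal sketch of that truncation rule, assuming a plain list of server strings rather than kubelet's real dns.go types; the constant and function names are illustrative assumptions:

package main

import "fmt"

// maxResolvConfNameservers mirrors the glibc resolver limit that kubelet
// enforces (MAXNS == 3); the constant name here is an assumption.
const maxResolvConfNameservers = 3

// truncateNameservers keeps only the entries the resolver would actually
// use and reports whether anything was dropped, which is the condition
// behind the "Nameserver limits exceeded" journal line above.
func truncateNameservers(servers []string) (kept []string, truncated bool) {
	if len(servers) <= maxResolvConfNameservers {
		return servers, false
	}
	return servers[:maxResolvConfNameservers], true
}

func main() {
	// A fourth server (e.g. one inherited from the host plus cluster DNS)
	// is enough to trip the limit; 8.8.4.4 here is a made-up example.
	servers := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"}
	if kept, truncated := truncateNameservers(servers); truncated {
		fmt.Printf("Nameserver limits were exceeded, the applied nameserver line is: %v\n", kept)
	}
}

The error is therefore a warning about silently dropped resolvers, not a DNS outage: queries still work against the three servers that survived the cut.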