Feb 13 15:27:02.965125 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 13 15:27:02.965148 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Thu Feb 13 13:57:00 -00 2025
Feb 13 15:27:02.965158 kernel: KASLR enabled
Feb 13 15:27:02.965164 kernel: efi: EFI v2.7 by EDK II
Feb 13 15:27:02.965169 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbbf018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40d98
Feb 13 15:27:02.965175 kernel: random: crng init done
Feb 13 15:27:02.965182 kernel: secureboot: Secure boot disabled
Feb 13 15:27:02.965188 kernel: ACPI: Early table checksum verification disabled
Feb 13 15:27:02.965194 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Feb 13 15:27:02.965201 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Feb 13 15:27:02.965207 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:27:02.965213 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:27:02.965219 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:27:02.965225 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:27:02.965232 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:27:02.965240 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:27:02.965246 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:27:02.965252 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:27:02.965258 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:27:02.965265 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Feb 13 15:27:02.965271 kernel: NUMA: Failed to initialise from firmware
Feb 13 15:27:02.965277 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 15:27:02.965283 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Feb 13 15:27:02.965289 kernel: Zone ranges:
Feb 13 15:27:02.965295 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 15:27:02.965303 kernel: DMA32 empty
Feb 13 15:27:02.965309 kernel: Normal empty
Feb 13 15:27:02.965315 kernel: Movable zone start for each node
Feb 13 15:27:02.965321 kernel: Early memory node ranges
Feb 13 15:27:02.965327 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Feb 13 15:27:02.965333 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Feb 13 15:27:02.965339 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Feb 13 15:27:02.965345 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Feb 13 15:27:02.965352 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Feb 13 15:27:02.965358 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Feb 13 15:27:02.965364 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Feb 13 15:27:02.965370 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 15:27:02.965378 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Feb 13 15:27:02.965384 kernel: psci: probing for conduit method from ACPI.
Feb 13 15:27:02.965394 kernel: psci: PSCIv1.1 detected in firmware.
Feb 13 15:27:02.965404 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 15:27:02.965411 kernel: psci: Trusted OS migration not required
Feb 13 15:27:02.965420 kernel: psci: SMC Calling Convention v1.1
Feb 13 15:27:02.965429 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Feb 13 15:27:02.965436 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 15:27:02.965443 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 15:27:02.965450 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Feb 13 15:27:02.965468 kernel: Detected PIPT I-cache on CPU0
Feb 13 15:27:02.965474 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 15:27:02.965481 kernel: CPU features: detected: Hardware dirty bit management
Feb 13 15:27:02.965488 kernel: CPU features: detected: Spectre-v4
Feb 13 15:27:02.965495 kernel: CPU features: detected: Spectre-BHB
Feb 13 15:27:02.965501 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 13 15:27:02.965510 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 13 15:27:02.965516 kernel: CPU features: detected: ARM erratum 1418040
Feb 13 15:27:02.965523 kernel: CPU features: detected: SSBS not fully self-synchronizing
Feb 13 15:27:02.965530 kernel: alternatives: applying boot alternatives
Feb 13 15:27:02.965537 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=07e9b8867aadd0b2e77ba5338d18cdd10706c658e0d745a78e129bcae9a0e4c6
Feb 13 15:27:02.965544 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 15:27:02.965551 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 15:27:02.965558 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 15:27:02.965565 kernel: Fallback order for Node 0: 0
Feb 13 15:27:02.965572 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Feb 13 15:27:02.965578 kernel: Policy zone: DMA
Feb 13 15:27:02.965587 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 15:27:02.965594 kernel: software IO TLB: area num 4.
Feb 13 15:27:02.965601 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Feb 13 15:27:02.965608 kernel: Memory: 2386324K/2572288K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39680K init, 897K bss, 185964K reserved, 0K cma-reserved)
Feb 13 15:27:02.965614 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 13 15:27:02.965621 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 15:27:02.965628 kernel: rcu: RCU event tracing is enabled.
Feb 13 15:27:02.965635 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 13 15:27:02.965642 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 15:27:02.965648 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 15:27:02.965655 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 15:27:02.965662 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 13 15:27:02.965671 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 15:27:02.965677 kernel: GICv3: 256 SPIs implemented
Feb 13 15:27:02.965684 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 15:27:02.965691 kernel: Root IRQ handler: gic_handle_irq
Feb 13 15:27:02.965698 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Feb 13 15:27:02.965704 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Feb 13 15:27:02.965711 kernel: ITS [mem 0x08080000-0x0809ffff]
Feb 13 15:27:02.965718 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 15:27:02.965725 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 15:27:02.965732 kernel: GICv3: using LPI property table @0x00000000400f0000
Feb 13 15:27:02.965738 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Feb 13 15:27:02.965747 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 15:27:02.965754 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 15:27:02.965761 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 13 15:27:02.965768 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 13 15:27:02.965775 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 13 15:27:02.965781 kernel: arm-pv: using stolen time PV
Feb 13 15:27:02.965788 kernel: Console: colour dummy device 80x25
Feb 13 15:27:02.965795 kernel: ACPI: Core revision 20230628
Feb 13 15:27:02.965803 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 13 15:27:02.965810 kernel: pid_max: default: 32768 minimum: 301
Feb 13 15:27:02.965818 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 15:27:02.965825 kernel: landlock: Up and running.
Feb 13 15:27:02.965832 kernel: SELinux: Initializing.
Feb 13 15:27:02.965839 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:27:02.965846 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:27:02.965853 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 15:27:02.965860 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 15:27:02.965867 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 15:27:02.965874 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 15:27:02.965882 kernel: Platform MSI: ITS@0x8080000 domain created
Feb 13 15:27:02.965889 kernel: PCI/MSI: ITS@0x8080000 domain created
Feb 13 15:27:02.965896 kernel: Remapping and enabling EFI services.
Feb 13 15:27:02.965905 kernel: smp: Bringing up secondary CPUs ...
Feb 13 15:27:02.965913 kernel: Detected PIPT I-cache on CPU1
Feb 13 15:27:02.965919 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Feb 13 15:27:02.965927 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Feb 13 15:27:02.965934 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 15:27:02.965940 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 13 15:27:02.965947 kernel: Detected PIPT I-cache on CPU2
Feb 13 15:27:02.965956 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Feb 13 15:27:02.965963 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Feb 13 15:27:02.965974 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 15:27:02.965983 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Feb 13 15:27:02.965990 kernel: Detected PIPT I-cache on CPU3
Feb 13 15:27:02.965997 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Feb 13 15:27:02.966005 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Feb 13 15:27:02.966012 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 15:27:02.966019 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Feb 13 15:27:02.966028 kernel: smp: Brought up 1 node, 4 CPUs
Feb 13 15:27:02.966035 kernel: SMP: Total of 4 processors activated.
Feb 13 15:27:02.966047 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 15:27:02.966055 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 13 15:27:02.966062 kernel: CPU features: detected: Common not Private translations
Feb 13 15:27:02.966087 kernel: CPU features: detected: CRC32 instructions
Feb 13 15:27:02.966095 kernel: CPU features: detected: Enhanced Virtualization Traps
Feb 13 15:27:02.966105 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 13 15:27:02.966114 kernel: CPU features: detected: LSE atomic instructions
Feb 13 15:27:02.966121 kernel: CPU features: detected: Privileged Access Never
Feb 13 15:27:02.966129 kernel: CPU features: detected: RAS Extension Support
Feb 13 15:27:02.966136 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Feb 13 15:27:02.966143 kernel: CPU: All CPU(s) started at EL1
Feb 13 15:27:02.966151 kernel: alternatives: applying system-wide alternatives
Feb 13 15:27:02.966158 kernel: devtmpfs: initialized
Feb 13 15:27:02.966165 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 15:27:02.966175 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 13 15:27:02.966183 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 15:27:02.966191 kernel: SMBIOS 3.0.0 present.
Feb 13 15:27:02.966198 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Feb 13 15:27:02.966206 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 15:27:02.966213 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 15:27:02.966221 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 15:27:02.966228 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 15:27:02.966236 kernel: audit: initializing netlink subsys (disabled)
Feb 13 15:27:02.966243 kernel: audit: type=2000 audit(0.029:1): state=initialized audit_enabled=0 res=1
Feb 13 15:27:02.966252 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 15:27:02.966259 kernel: cpuidle: using governor menu
Feb 13 15:27:02.966266 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 15:27:02.966274 kernel: ASID allocator initialised with 32768 entries
Feb 13 15:27:02.966281 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 15:27:02.966288 kernel: Serial: AMBA PL011 UART driver
Feb 13 15:27:02.966295 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Feb 13 15:27:02.966303 kernel: Modules: 0 pages in range for non-PLT usage
Feb 13 15:27:02.966310 kernel: Modules: 508960 pages in range for PLT usage
Feb 13 15:27:02.966321 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 15:27:02.966328 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 15:27:02.966336 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 15:27:02.966343 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 15:27:02.966350 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 15:27:02.966358 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 15:27:02.966365 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 15:27:02.966372 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 15:27:02.966379 kernel: ACPI: Added _OSI(Module Device)
Feb 13 15:27:02.966388 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 15:27:02.966395 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 15:27:02.966402 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 15:27:02.966410 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 15:27:02.966417 kernel: ACPI: Interpreter enabled
Feb 13 15:27:02.966424 kernel: ACPI: Using GIC for interrupt routing
Feb 13 15:27:02.966431 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 15:27:02.966439 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Feb 13 15:27:02.966446 kernel: printk: console [ttyAMA0] enabled
Feb 13 15:27:02.966455 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 15:27:02.966599 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 15:27:02.966696 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 15:27:02.966768 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 15:27:02.966839 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Feb 13 15:27:02.966915 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Feb 13 15:27:02.966926 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Feb 13 15:27:02.966936 kernel: PCI host bridge to bus 0000:00
Feb 13 15:27:02.967008 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Feb 13 15:27:02.967168 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 13 15:27:02.967235 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Feb 13 15:27:02.967296 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 15:27:02.967382 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Feb 13 15:27:02.967541 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 15:27:02.967619 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Feb 13 15:27:02.967684 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Feb 13 15:27:02.967749 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 15:27:02.967813 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 15:27:02.967878 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Feb 13 15:27:02.967941 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Feb 13 15:27:02.967998 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Feb 13 15:27:02.968081 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 13 15:27:02.968157 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Feb 13 15:27:02.968167 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 15:27:02.968175 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 15:27:02.968182 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 15:27:02.968189 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 15:27:02.968196 kernel: iommu: Default domain type: Translated
Feb 13 15:27:02.968204 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 15:27:02.968214 kernel: efivars: Registered efivars operations
Feb 13 15:27:02.968222 kernel: vgaarb: loaded
Feb 13 15:27:02.968229 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 15:27:02.968236 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 15:27:02.968244 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 15:27:02.968251 kernel: pnp: PnP ACPI init
Feb 13 15:27:02.968326 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Feb 13 15:27:02.968337 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 15:27:02.968346 kernel: NET: Registered PF_INET protocol family
Feb 13 15:27:02.968353 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 15:27:02.968361 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 15:27:02.968368 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 15:27:02.968375 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 15:27:02.968383 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 15:27:02.968390 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 15:27:02.968397 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:27:02.968405 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:27:02.968414 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 15:27:02.968421 kernel: PCI: CLS 0 bytes, default 64
Feb 13 15:27:02.968428 kernel: kvm [1]: HYP mode not available
Feb 13 15:27:02.968435 kernel: Initialise system trusted keyrings
Feb 13 15:27:02.968442 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 15:27:02.968450 kernel: Key type asymmetric registered
Feb 13 15:27:02.968457 kernel: Asymmetric key parser 'x509' registered
Feb 13 15:27:02.968464 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 15:27:02.968471 kernel: io scheduler mq-deadline registered
Feb 13 15:27:02.968480 kernel: io scheduler kyber registered
Feb 13 15:27:02.968487 kernel: io scheduler bfq registered
Feb 13 15:27:02.968494 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 13 15:27:02.968501 kernel: ACPI: button: Power Button [PWRB]
Feb 13 15:27:02.968509 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 13 15:27:02.968575 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Feb 13 15:27:02.968584 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 15:27:02.968592 kernel: thunder_xcv, ver 1.0
Feb 13 15:27:02.968599 kernel: thunder_bgx, ver 1.0
Feb 13 15:27:02.968608 kernel: nicpf, ver 1.0
Feb 13 15:27:02.968615 kernel: nicvf, ver 1.0
Feb 13 15:27:02.968699 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 15:27:02.968802 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T15:27:02 UTC (1739460422)
Feb 13 15:27:02.968813 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 15:27:02.968821 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Feb 13 15:27:02.968829 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 15:27:02.968836 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 15:27:02.968847 kernel: NET: Registered PF_INET6 protocol family
Feb 13 15:27:02.968854 kernel: Segment Routing with IPv6
Feb 13 15:27:02.968861 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 15:27:02.968868 kernel: NET: Registered PF_PACKET protocol family
Feb 13 15:27:02.968875 kernel: Key type dns_resolver registered
Feb 13 15:27:02.968882 kernel: registered taskstats version 1
Feb 13 15:27:02.968890 kernel: Loading compiled-in X.509 certificates
Feb 13 15:27:02.968897 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 4531cdb19689f90a81e7969ac7d8e25a95254f51'
Feb 13 15:27:02.968904 kernel: Key type .fscrypt registered
Feb 13 15:27:02.968913 kernel: Key type fscrypt-provisioning registered
Feb 13 15:27:02.968920 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 15:27:02.968927 kernel: ima: Allocated hash algorithm: sha1
Feb 13 15:27:02.968934 kernel: ima: No architecture policies found
Feb 13 15:27:02.968942 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 15:27:02.968949 kernel: clk: Disabling unused clocks
Feb 13 15:27:02.968957 kernel: Freeing unused kernel memory: 39680K
Feb 13 15:27:02.968964 kernel: Run /init as init process
Feb 13 15:27:02.968971 kernel: with arguments:
Feb 13 15:27:02.968980 kernel: /init
Feb 13 15:27:02.968986 kernel: with environment:
Feb 13 15:27:02.968993 kernel: HOME=/
Feb 13 15:27:02.969000 kernel: TERM=linux
Feb 13 15:27:02.969007 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 15:27:02.969016 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 15:27:02.969026 systemd[1]: Detected virtualization kvm.
Feb 13 15:27:02.969033 systemd[1]: Detected architecture arm64.
Feb 13 15:27:02.969051 systemd[1]: Running in initrd.
Feb 13 15:27:02.969059 systemd[1]: No hostname configured, using default hostname.
Feb 13 15:27:02.969088 systemd[1]: Hostname set to .
Feb 13 15:27:02.969097 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 15:27:02.969105 systemd[1]: Queued start job for default target initrd.target.
Feb 13 15:27:02.969113 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:27:02.969121 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:27:02.969129 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 15:27:02.969139 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:27:02.969147 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 15:27:02.969160 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 15:27:02.969170 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 15:27:02.969178 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 15:27:02.969186 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:27:02.969195 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:27:02.969203 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:27:02.969210 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:27:02.969220 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:27:02.969228 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:27:02.969236 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:27:02.969244 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:27:02.969252 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 15:27:02.969260 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 15:27:02.969270 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:27:02.969278 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:27:02.969285 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:27:02.969293 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:27:02.969311 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 15:27:02.969319 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:27:02.969327 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 15:27:02.969335 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 15:27:02.969345 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:27:02.969355 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:27:02.969363 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:27:02.969371 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 15:27:02.969378 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:27:02.969386 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 15:27:02.969417 systemd-journald[239]: Collecting audit messages is disabled.
Feb 13 15:27:02.969439 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 15:27:02.969448 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:27:02.969458 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 15:27:02.969466 systemd-journald[239]: Journal started
Feb 13 15:27:02.969485 systemd-journald[239]: Runtime Journal (/run/log/journal/49b3396e99f64b4c9e468e31fc90e4eb) is 5.9M, max 47.3M, 41.4M free.
Feb 13 15:27:02.953455 systemd-modules-load[240]: Inserted module 'overlay'
Feb 13 15:27:02.972965 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:27:02.973643 systemd-modules-load[240]: Inserted module 'br_netfilter'
Feb 13 15:27:02.975453 kernel: Bridge firewalling registered
Feb 13 15:27:02.975472 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:27:02.976842 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:27:02.979108 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:27:02.983263 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:27:02.984971 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:27:02.988230 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:27:02.991696 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:27:02.998329 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 15:27:03.001150 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:27:03.005390 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:27:03.007883 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:27:03.011858 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:27:03.017399 dracut-cmdline[270]: dracut-dracut-053
Feb 13 15:27:03.019767 dracut-cmdline[270]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=07e9b8867aadd0b2e77ba5338d18cdd10706c658e0d745a78e129bcae9a0e4c6
Feb 13 15:27:03.045484 systemd-resolved[280]: Positive Trust Anchors:
Feb 13 15:27:03.045565 systemd-resolved[280]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:27:03.045597 systemd-resolved[280]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:27:03.050565 systemd-resolved[280]: Defaulting to hostname 'linux'.
Feb 13 15:27:03.051624 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:27:03.056097 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:27:03.118104 kernel: SCSI subsystem initialized
Feb 13 15:27:03.123090 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 15:27:03.133101 kernel: iscsi: registered transport (tcp)
Feb 13 15:27:03.146094 kernel: iscsi: registered transport (qla4xxx)
Feb 13 15:27:03.146126 kernel: QLogic iSCSI HBA Driver
Feb 13 15:27:03.196947 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:27:03.205269 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 15:27:03.226362 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 15:27:03.226431 kernel: device-mapper: uevent: version 1.0.3
Feb 13 15:27:03.228155 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 15:27:03.277112 kernel: raid6: neonx8 gen() 15758 MB/s
Feb 13 15:27:03.294101 kernel: raid6: neonx4 gen() 15613 MB/s
Feb 13 15:27:03.311114 kernel: raid6: neonx2 gen() 13056 MB/s
Feb 13 15:27:03.328107 kernel: raid6: neonx1 gen() 10458 MB/s
Feb 13 15:27:03.345123 kernel: raid6: int64x8 gen() 6969 MB/s
Feb 13 15:27:03.362099 kernel: raid6: int64x4 gen() 7324 MB/s
Feb 13 15:27:03.379093 kernel: raid6: int64x2 gen() 6120 MB/s
Feb 13 15:27:03.396290 kernel: raid6: int64x1 gen() 5041 MB/s
Feb 13 15:27:03.396312 kernel: raid6: using algorithm neonx8 gen() 15758 MB/s
Feb 13 15:27:03.414236 kernel: raid6: .... xor() 11820 MB/s, rmw enabled
Feb 13 15:27:03.414254 kernel: raid6: using neon recovery algorithm
Feb 13 15:27:03.420258 kernel: xor: measuring software checksum speed
Feb 13 15:27:03.420300 kernel: 8regs : 19326 MB/sec
Feb 13 15:27:03.421092 kernel: 32regs : 19660 MB/sec
Feb 13 15:27:03.421104 kernel: arm64_neon : 22658 MB/sec
Feb 13 15:27:03.422252 kernel: xor: using function: arm64_neon (22658 MB/sec)
Feb 13 15:27:03.478098 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 15:27:03.488856 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:27:03.510231 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:27:03.525688 systemd-udevd[460]: Using default interface naming scheme 'v255'.
Feb 13 15:27:03.528942 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:27:03.534251 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 15:27:03.555289 dracut-pre-trigger[463]: rd.md=0: removing MD RAID activation Feb 13 15:27:03.599181 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 15:27:03.611279 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 15:27:03.660153 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:27:03.684299 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 15:27:03.698132 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 15:27:03.700361 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 15:27:03.702164 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:27:03.704320 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 15:27:03.711217 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 15:27:03.723557 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 15:27:03.735715 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Feb 13 15:27:03.735921 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Feb 13 15:27:03.736006 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 15:27:03.736017 kernel: GPT:9289727 != 19775487 Feb 13 15:27:03.736027 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 15:27:03.736049 kernel: GPT:9289727 != 19775487 Feb 13 15:27:03.736059 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 15:27:03.736092 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 15:27:03.732689 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Feb 13 15:27:03.732793 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:27:03.743026 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:27:03.744594 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:27:03.744743 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:27:03.747254 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:27:03.758309 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:27:03.766095 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (511) Feb 13 15:27:03.766134 kernel: BTRFS: device fsid 27ad543d-6fdb-4ace-b8f1-8f50b124bd06 devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (524) Feb 13 15:27:03.767719 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:27:03.773650 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Feb 13 15:27:03.784587 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Feb 13 15:27:03.789329 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 15:27:03.793283 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Feb 13 15:27:03.794525 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Feb 13 15:27:03.811265 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 15:27:03.813219 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:27:03.819927 disk-uuid[550]: Primary Header is updated. 
Feb 13 15:27:03.819927 disk-uuid[550]: Secondary Entries is updated. Feb 13 15:27:03.819927 disk-uuid[550]: Secondary Header is updated. Feb 13 15:27:03.823232 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 15:27:03.843100 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:27:04.835186 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 15:27:04.835532 disk-uuid[552]: The operation has completed successfully. Feb 13 15:27:04.859518 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 15:27:04.859610 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 15:27:04.876240 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 15:27:04.878908 sh[574]: Success Feb 13 15:27:04.897426 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Feb 13 15:27:04.933562 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 15:27:04.935374 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 15:27:04.936423 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Feb 13 15:27:04.947973 kernel: BTRFS info (device dm-0): first mount of filesystem 27ad543d-6fdb-4ace-b8f1-8f50b124bd06 Feb 13 15:27:04.948010 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Feb 13 15:27:04.948021 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 15:27:04.950562 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 15:27:04.950580 kernel: BTRFS info (device dm-0): using free space tree Feb 13 15:27:04.953986 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 15:27:04.955408 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. 
Feb 13 15:27:04.963218 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 15:27:04.964808 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 15:27:04.972333 kernel: BTRFS info (device vda6): first mount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643 Feb 13 15:27:04.972370 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 15:27:04.972381 kernel: BTRFS info (device vda6): using free space tree Feb 13 15:27:04.975105 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 15:27:04.982190 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 15:27:04.984306 kernel: BTRFS info (device vda6): last unmount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643 Feb 13 15:27:04.990697 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 15:27:04.996570 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 15:27:05.063388 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 15:27:05.073262 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 15:27:05.100822 systemd-networkd[762]: lo: Link UP Feb 13 15:27:05.100837 systemd-networkd[762]: lo: Gained carrier Feb 13 15:27:05.101873 systemd-networkd[762]: Enumeration completed Feb 13 15:27:05.102428 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:27:05.103582 systemd[1]: Reached target network.target - Network. Feb 13 15:27:05.105100 systemd-networkd[762]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:27:05.105103 systemd-networkd[762]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Feb 13 15:27:05.109079 ignition[668]: Ignition 2.20.0 Feb 13 15:27:05.105858 systemd-networkd[762]: eth0: Link UP Feb 13 15:27:05.109086 ignition[668]: Stage: fetch-offline Feb 13 15:27:05.105861 systemd-networkd[762]: eth0: Gained carrier Feb 13 15:27:05.109119 ignition[668]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:27:05.105867 systemd-networkd[762]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:27:05.109127 ignition[668]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:27:05.109280 ignition[668]: parsed url from cmdline: "" Feb 13 15:27:05.109283 ignition[668]: no config URL provided Feb 13 15:27:05.109288 ignition[668]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 15:27:05.109295 ignition[668]: no config at "/usr/lib/ignition/user.ign" Feb 13 15:27:05.109320 ignition[668]: op(1): [started] loading QEMU firmware config module Feb 13 15:27:05.109325 ignition[668]: op(1): executing: "modprobe" "qemu_fw_cfg" Feb 13 15:27:05.117301 ignition[668]: op(1): [finished] loading QEMU firmware config module Feb 13 15:27:05.135139 systemd-networkd[762]: eth0: DHCPv4 address 10.0.0.93/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 15:27:05.162181 ignition[668]: parsing config with SHA512: 84f7c572c76a38a2dcfdfd8e8066f37ee580c1fe2f5fb3aab682ae16456944af78ff4ebc7e0848fe04bec1833452c3316f19cfe5a0fa3bd1dc8a4a74fdd6d7dc Feb 13 15:27:05.166694 unknown[668]: fetched base config from "system" Feb 13 15:27:05.166703 unknown[668]: fetched user config from "qemu" Feb 13 15:27:05.168643 ignition[668]: fetch-offline: fetch-offline passed Feb 13 15:27:05.168822 ignition[668]: Ignition finished successfully Feb 13 15:27:05.170393 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Feb 13 15:27:05.171757 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 13 15:27:05.176223 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 15:27:05.186523 ignition[771]: Ignition 2.20.0 Feb 13 15:27:05.186532 ignition[771]: Stage: kargs Feb 13 15:27:05.186683 ignition[771]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:27:05.186692 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:27:05.187584 ignition[771]: kargs: kargs passed Feb 13 15:27:05.187624 ignition[771]: Ignition finished successfully Feb 13 15:27:05.191127 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 15:27:05.200209 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 15:27:05.209431 ignition[780]: Ignition 2.20.0 Feb 13 15:27:05.209441 ignition[780]: Stage: disks Feb 13 15:27:05.209609 ignition[780]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:27:05.209618 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:27:05.211835 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 15:27:05.210482 ignition[780]: disks: disks passed Feb 13 15:27:05.213340 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 15:27:05.210525 ignition[780]: Ignition finished successfully Feb 13 15:27:05.214857 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 15:27:05.216425 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 15:27:05.218088 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 15:27:05.219547 systemd[1]: Reached target basic.target - Basic System. Feb 13 15:27:05.229203 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Feb 13 15:27:05.239398 systemd-fsck[791]: ROOT: clean, 14/553520 files, 52654/553472 blocks Feb 13 15:27:05.243999 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 15:27:05.246129 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 15:27:05.294097 kernel: EXT4-fs (vda9): mounted filesystem b8d8a7c2-9667-48db-9266-035fd118dfdf r/w with ordered data mode. Quota mode: none. Feb 13 15:27:05.294092 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 15:27:05.295312 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 15:27:05.309153 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 15:27:05.311271 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 15:27:05.312576 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Feb 13 15:27:05.316983 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (799) Feb 13 15:27:05.312610 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 15:27:05.312631 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 15:27:05.322760 kernel: BTRFS info (device vda6): first mount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643 Feb 13 15:27:05.322778 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 15:27:05.322788 kernel: BTRFS info (device vda6): using free space tree Feb 13 15:27:05.319688 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 15:27:05.324668 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 15:27:05.327086 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 15:27:05.328830 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 15:27:05.364978 initrd-setup-root[823]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 15:27:05.369216 initrd-setup-root[830]: cut: /sysroot/etc/group: No such file or directory Feb 13 15:27:05.372859 initrd-setup-root[837]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 15:27:05.376859 initrd-setup-root[844]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 15:27:05.443586 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 15:27:05.455204 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 15:27:05.457299 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 15:27:05.461115 kernel: BTRFS info (device vda6): last unmount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643 Feb 13 15:27:05.475415 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 15:27:05.478725 ignition[913]: INFO : Ignition 2.20.0 Feb 13 15:27:05.478725 ignition[913]: INFO : Stage: mount Feb 13 15:27:05.480287 ignition[913]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:27:05.480287 ignition[913]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:27:05.480287 ignition[913]: INFO : mount: mount passed Feb 13 15:27:05.480287 ignition[913]: INFO : Ignition finished successfully Feb 13 15:27:05.484102 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 15:27:05.496228 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 15:27:05.946650 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 15:27:05.959232 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Feb 13 15:27:05.965950 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (926) Feb 13 15:27:05.965979 kernel: BTRFS info (device vda6): first mount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643 Feb 13 15:27:05.965990 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 15:27:05.967669 kernel: BTRFS info (device vda6): using free space tree Feb 13 15:27:05.970092 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 15:27:05.970807 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 15:27:05.991178 ignition[943]: INFO : Ignition 2.20.0 Feb 13 15:27:05.991178 ignition[943]: INFO : Stage: files Feb 13 15:27:05.992761 ignition[943]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:27:05.992761 ignition[943]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:27:05.992761 ignition[943]: DEBUG : files: compiled without relabeling support, skipping Feb 13 15:27:05.996248 ignition[943]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 15:27:05.996248 ignition[943]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 15:27:05.996248 ignition[943]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 15:27:05.996248 ignition[943]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 15:27:05.996248 ignition[943]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 15:27:05.995391 unknown[943]: wrote ssh authorized keys file for user: core Feb 13 15:27:06.003813 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 13 15:27:06.003813 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 13 15:27:06.078323 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 13 15:27:06.323257 systemd-networkd[762]: eth0: Gained IPv6LL Feb 13 15:27:06.993275 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 13 15:27:06.995400 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Feb 13 15:27:06.995400 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 15:27:06.995400 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 15:27:06.995400 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 15:27:06.995400 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 15:27:06.995400 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 15:27:06.995400 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 15:27:06.995400 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 15:27:06.995400 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 15:27:06.995400 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 15:27:06.995400 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 15:27:06.995400 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Feb 13 15:27:06.995400 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Feb 13 15:27:06.995400 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1 Feb 13 15:27:07.328718 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Feb 13 15:27:07.538335 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Feb 13 15:27:07.538335 ignition[943]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Feb 13 15:27:07.542625 ignition[943]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 15:27:07.542625 ignition[943]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 15:27:07.542625 ignition[943]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Feb 13 15:27:07.542625 ignition[943]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Feb 13 15:27:07.542625 ignition[943]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 13 15:27:07.542625 ignition[943]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 15:27:07.542625 ignition[943]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Feb 13 15:27:07.542625 ignition[943]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Feb 13 15:27:07.566465 ignition[943]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Feb 13 15:27:07.570281 ignition[943]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Feb 13 15:27:07.570281 ignition[943]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Feb 13 15:27:07.570281 ignition[943]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Feb 13 15:27:07.570281 ignition[943]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 15:27:07.577662 ignition[943]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 15:27:07.577662 ignition[943]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 15:27:07.577662 ignition[943]: INFO : files: files passed Feb 13 15:27:07.577662 ignition[943]: INFO : Ignition finished successfully Feb 13 15:27:07.577657 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 15:27:07.585225 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 15:27:07.592173 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 15:27:07.593512 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 15:27:07.595183 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 15:27:07.598798 initrd-setup-root-after-ignition[972]: grep: /sysroot/oem/oem-release: No such file or directory Feb 13 15:27:07.601154 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:27:07.601154 initrd-setup-root-after-ignition[974]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:27:07.604169 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:27:07.603742 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 15:27:07.606664 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 15:27:07.614256 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 15:27:07.632577 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 15:27:07.632681 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 15:27:07.634941 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 15:27:07.636903 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 15:27:07.638786 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 15:27:07.639629 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 15:27:07.657550 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 15:27:07.661432 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 15:27:07.671720 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:27:07.672881 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:27:07.674780 systemd[1]: Stopped target timers.target - Timer Units. 
Feb 13 15:27:07.676592 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 15:27:07.676703 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 15:27:07.679102 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 15:27:07.681149 systemd[1]: Stopped target basic.target - Basic System. Feb 13 15:27:07.682805 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 15:27:07.684710 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 15:27:07.686629 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 15:27:07.688580 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 15:27:07.690412 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 15:27:07.692371 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 15:27:07.694298 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 15:27:07.696029 systemd[1]: Stopped target swap.target - Swaps. Feb 13 15:27:07.697594 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 15:27:07.697745 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 15:27:07.699919 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:27:07.701156 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:27:07.703074 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 15:27:07.704161 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:27:07.706251 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 15:27:07.706398 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 15:27:07.709086 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
Feb 13 15:27:07.709231 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 15:27:07.711274 systemd[1]: Stopped target paths.target - Path Units. Feb 13 15:27:07.712836 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 15:27:07.716132 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:27:07.717781 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 15:27:07.719865 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 15:27:07.721432 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 15:27:07.721551 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 15:27:07.723154 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 15:27:07.723279 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 15:27:07.724810 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 15:27:07.724950 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 15:27:07.726695 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 15:27:07.726832 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 15:27:07.740301 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 15:27:07.742060 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 15:27:07.742256 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:27:07.747314 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 15:27:07.748216 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 15:27:07.748396 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:27:07.750255 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Feb 13 15:27:07.750400 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 15:27:07.758408 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 15:27:07.763162 ignition[998]: INFO : Ignition 2.20.0 Feb 13 15:27:07.763162 ignition[998]: INFO : Stage: umount Feb 13 15:27:07.763162 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:27:07.763162 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:27:07.763162 ignition[998]: INFO : umount: umount passed Feb 13 15:27:07.763162 ignition[998]: INFO : Ignition finished successfully Feb 13 15:27:07.760110 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 15:27:07.762188 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 15:27:07.765312 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 15:27:07.767092 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 15:27:07.771514 systemd[1]: Stopped target network.target - Network. Feb 13 15:27:07.772607 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 15:27:07.772666 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 15:27:07.775007 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 15:27:07.775114 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 15:27:07.777027 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 15:27:07.777092 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 15:27:07.779098 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 15:27:07.779145 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 15:27:07.781654 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 15:27:07.783521 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... 
Feb 13 15:27:07.796262 systemd-networkd[762]: eth0: DHCPv6 lease lost Feb 13 15:27:07.801500 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 15:27:07.802634 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 15:27:07.804478 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 15:27:07.804582 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 15:27:07.806463 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 15:27:07.806513 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:27:07.822239 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 15:27:07.823201 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 15:27:07.823265 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 15:27:07.825631 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 15:27:07.825677 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:27:07.827686 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 15:27:07.827733 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 15:27:07.829837 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 15:27:07.829883 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:27:07.831934 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:27:07.834997 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 15:27:07.835104 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 15:27:07.843257 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 15:27:07.843311 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. 
Feb 13 15:27:07.845592 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 15:27:07.845709 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:27:07.848432 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 15:27:07.848510 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 15:27:07.850560 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 15:27:07.850615 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 15:27:07.851986 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 15:27:07.852022 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:27:07.853663 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 15:27:07.853704 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 15:27:07.856253 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 15:27:07.856293 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 15:27:07.858844 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 15:27:07.858884 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:27:07.872195 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 15:27:07.873246 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 15:27:07.873302 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:27:07.875300 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:27:07.875341 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:27:07.877387 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. 
Feb 13 15:27:07.877464 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 15:27:07.880584 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 15:27:07.882820 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 15:27:07.896297 systemd[1]: Switching root. Feb 13 15:27:07.918969 systemd-journald[239]: Journal stopped Feb 13 15:27:08.626254 systemd-journald[239]: Received SIGTERM from PID 1 (systemd). Feb 13 15:27:08.626312 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 15:27:08.626328 kernel: SELinux: policy capability open_perms=1 Feb 13 15:27:08.626339 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 15:27:08.626350 kernel: SELinux: policy capability always_check_network=0 Feb 13 15:27:08.626360 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 15:27:08.626373 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 15:27:08.626382 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 15:27:08.626391 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 15:27:08.626401 kernel: audit: type=1403 audit(1739460428.056:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 15:27:08.626411 systemd[1]: Successfully loaded SELinux policy in 32.911ms. Feb 13 15:27:08.626428 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 26.174ms. Feb 13 15:27:08.626441 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 15:27:08.626452 systemd[1]: Detected virtualization kvm. Feb 13 15:27:08.626462 systemd[1]: Detected architecture arm64. Feb 13 15:27:08.626472 systemd[1]: Detected first boot. 
Feb 13 15:27:08.626484 systemd[1]: Initializing machine ID from VM UUID. Feb 13 15:27:08.626494 zram_generator::config[1043]: No configuration found. Feb 13 15:27:08.626506 systemd[1]: Populated /etc with preset unit settings. Feb 13 15:27:08.626517 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 15:27:08.626527 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 15:27:08.626539 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 15:27:08.626550 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 15:27:08.626561 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 15:27:08.626572 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 15:27:08.626583 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 15:27:08.626593 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 15:27:08.626604 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 15:27:08.626614 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 15:27:08.626626 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 15:27:08.626637 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:27:08.626647 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:27:08.626658 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 15:27:08.626668 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 15:27:08.626679 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Feb 13 15:27:08.626689 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 15:27:08.626699 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Feb 13 15:27:08.626711 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:27:08.626724 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 15:27:08.626734 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 15:27:08.626745 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 15:27:08.626755 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 15:27:08.626765 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:27:08.626776 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 15:27:08.626786 systemd[1]: Reached target slices.target - Slice Units. Feb 13 15:27:08.626797 systemd[1]: Reached target swap.target - Swaps. Feb 13 15:27:08.626809 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 15:27:08.626819 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 15:27:08.626830 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:27:08.626840 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 15:27:08.626850 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:27:08.626860 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 15:27:08.626871 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 15:27:08.626882 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 15:27:08.626892 systemd[1]: Mounting media.mount - External Media Directory... 
Feb 13 15:27:08.626904 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 15:27:08.626914 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 15:27:08.626924 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 15:27:08.626936 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 15:27:08.626946 systemd[1]: Reached target machines.target - Containers. Feb 13 15:27:08.626957 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 15:27:08.626967 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:27:08.626978 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 15:27:08.626989 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 15:27:08.627000 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:27:08.627019 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 15:27:08.627030 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:27:08.627041 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 15:27:08.627051 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:27:08.627062 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 15:27:08.627079 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 15:27:08.627090 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 15:27:08.627103 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. 
Feb 13 15:27:08.627113 kernel: fuse: init (API version 7.39) Feb 13 15:27:08.627122 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 15:27:08.627132 kernel: loop: module loaded Feb 13 15:27:08.627141 kernel: ACPI: bus type drm_connector registered Feb 13 15:27:08.627151 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 15:27:08.627161 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 15:27:08.627173 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 15:27:08.627184 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 15:27:08.627196 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 15:27:08.627226 systemd-journald[1110]: Collecting audit messages is disabled. Feb 13 15:27:08.627249 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 15:27:08.627260 systemd[1]: Stopped verity-setup.service. Feb 13 15:27:08.627271 systemd-journald[1110]: Journal started Feb 13 15:27:08.627292 systemd-journald[1110]: Runtime Journal (/run/log/journal/49b3396e99f64b4c9e468e31fc90e4eb) is 5.9M, max 47.3M, 41.4M free. Feb 13 15:27:08.423348 systemd[1]: Queued start job for default target multi-user.target. Feb 13 15:27:08.443387 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Feb 13 15:27:08.443722 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 15:27:08.630823 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 15:27:08.631468 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 15:27:08.632650 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 15:27:08.633883 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 15:27:08.634964 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
Feb 13 15:27:08.636226 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 15:27:08.637460 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 15:27:08.638702 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 15:27:08.640163 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:27:08.641617 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 15:27:08.641771 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 15:27:08.643168 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:27:08.643297 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:27:08.644712 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 15:27:08.644914 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:27:08.646278 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:27:08.646426 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:27:08.647940 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 15:27:08.649164 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 15:27:08.650550 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:27:08.650691 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:27:08.652114 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 15:27:08.653550 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 15:27:08.654978 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 15:27:08.666562 systemd[1]: Reached target network-pre.target - Preparation for Network. 
Feb 13 15:27:08.676203 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 15:27:08.678436 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 15:27:08.679531 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 15:27:08.679568 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 15:27:08.681558 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 15:27:08.683633 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 15:27:08.685747 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 15:27:08.686923 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:27:08.688298 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 15:27:08.690103 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 15:27:08.691171 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:27:08.695220 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 15:27:08.696976 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:27:08.698559 systemd-journald[1110]: Time spent on flushing to /var/log/journal/49b3396e99f64b4c9e468e31fc90e4eb is 17.472ms for 852 entries. Feb 13 15:27:08.698559 systemd-journald[1110]: System Journal (/var/log/journal/49b3396e99f64b4c9e468e31fc90e4eb) is 8.0M, max 195.6M, 187.6M free. Feb 13 15:27:08.742494 systemd-journald[1110]: Received client request to flush runtime journal. 
Feb 13 15:27:08.742551 kernel: loop0: detected capacity change from 0 to 116808 Feb 13 15:27:08.742570 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 15:27:08.698113 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:27:08.705247 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 15:27:08.709233 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 15:27:08.714101 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:27:08.715570 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 15:27:08.717582 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 15:27:08.730336 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 15:27:08.732021 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 15:27:08.737816 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 15:27:08.751473 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 15:27:08.759332 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 15:27:08.762806 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 15:27:08.766444 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:27:08.776173 kernel: loop1: detected capacity change from 0 to 189592 Feb 13 15:27:08.776043 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 15:27:08.778507 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 15:27:08.779242 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
Feb 13 15:27:08.793389 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 15:27:08.800549 udevadm[1171]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 13 15:27:08.811122 kernel: loop2: detected capacity change from 0 to 113536 Feb 13 15:27:08.820137 systemd-tmpfiles[1177]: ACLs are not supported, ignoring. Feb 13 15:27:08.820153 systemd-tmpfiles[1177]: ACLs are not supported, ignoring. Feb 13 15:27:08.824490 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:27:08.841093 kernel: loop3: detected capacity change from 0 to 116808 Feb 13 15:27:08.846114 kernel: loop4: detected capacity change from 0 to 189592 Feb 13 15:27:08.852128 kernel: loop5: detected capacity change from 0 to 113536 Feb 13 15:27:08.855616 (sd-merge)[1182]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Feb 13 15:27:08.855976 (sd-merge)[1182]: Merged extensions into '/usr'. Feb 13 15:27:08.861724 systemd[1]: Reloading requested from client PID 1154 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 15:27:08.861740 systemd[1]: Reloading... Feb 13 15:27:08.913860 zram_generator::config[1208]: No configuration found. Feb 13 15:27:08.974762 ldconfig[1149]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 15:27:09.006166 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:27:09.041153 systemd[1]: Reloading finished in 179 ms. Feb 13 15:27:09.075620 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 15:27:09.079105 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. 
Feb 13 15:27:09.089237 systemd[1]: Starting ensure-sysext.service... Feb 13 15:27:09.091125 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 15:27:09.104508 systemd[1]: Reloading requested from client PID 1242 ('systemctl') (unit ensure-sysext.service)... Feb 13 15:27:09.104526 systemd[1]: Reloading... Feb 13 15:27:09.126379 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 15:27:09.126677 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 15:27:09.127438 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 15:27:09.127707 systemd-tmpfiles[1243]: ACLs are not supported, ignoring. Feb 13 15:27:09.127767 systemd-tmpfiles[1243]: ACLs are not supported, ignoring. Feb 13 15:27:09.130126 systemd-tmpfiles[1243]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 15:27:09.130140 systemd-tmpfiles[1243]: Skipping /boot Feb 13 15:27:09.138041 systemd-tmpfiles[1243]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 15:27:09.138056 systemd-tmpfiles[1243]: Skipping /boot Feb 13 15:27:09.154097 zram_generator::config[1273]: No configuration found. Feb 13 15:27:09.243917 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:27:09.278936 systemd[1]: Reloading finished in 174 ms. Feb 13 15:27:09.290899 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 15:27:09.292491 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:27:09.315267 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Feb 13 15:27:09.317849 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 15:27:09.320232 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 15:27:09.323248 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 15:27:09.329505 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:27:09.332621 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 15:27:09.337379 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:27:09.339316 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:27:09.344485 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:27:09.347298 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:27:09.349241 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:27:09.350000 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 15:27:09.357565 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 15:27:09.361671 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 15:27:09.363730 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:27:09.365107 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:27:09.366611 systemd-udevd[1312]: Using default interface naming scheme 'v255'. Feb 13 15:27:09.369048 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:27:09.369506 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Feb 13 15:27:09.373589 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 15:27:09.375553 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:27:09.375686 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:27:09.381905 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:27:09.389277 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:27:09.390354 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:27:09.390478 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:27:09.391065 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 15:27:09.396666 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 15:27:09.398296 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:27:09.398419 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:27:09.401037 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:27:09.403651 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:27:09.411413 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:27:09.415594 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 15:27:09.420257 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:27:09.422285 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Feb 13 15:27:09.426557 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 15:27:09.428873 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 15:27:09.429780 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 15:27:09.431252 augenrules[1370]: No rules Feb 13 15:27:09.432792 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:27:09.432962 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:27:09.435468 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:27:09.435600 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:27:09.440587 systemd[1]: Finished ensure-sysext.service. Feb 13 15:27:09.442094 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1360) Feb 13 15:27:09.463907 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Feb 13 15:27:09.464973 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 15:27:09.466019 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:27:09.468640 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:27:09.470088 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:27:09.484658 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 15:27:09.499459 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 15:27:09.500746 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Feb 13 15:27:09.500823 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:27:09.502818 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 15:27:09.522504 systemd-resolved[1309]: Positive Trust Anchors: Feb 13 15:27:09.522981 systemd-resolved[1309]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 15:27:09.523089 systemd-resolved[1309]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 15:27:09.531758 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 15:27:09.534476 systemd-resolved[1309]: Defaulting to hostname 'linux'. Feb 13 15:27:09.540123 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 15:27:09.541365 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:27:09.557752 systemd-networkd[1373]: lo: Link UP Feb 13 15:27:09.557762 systemd-networkd[1373]: lo: Gained carrier Feb 13 15:27:09.558573 systemd-networkd[1373]: Enumeration completed Feb 13 15:27:09.558682 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:27:09.559923 systemd[1]: Reached target network.target - Network. Feb 13 15:27:09.564642 systemd-networkd[1373]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Feb 13 15:27:09.564730 systemd-networkd[1373]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:27:09.565862 systemd-networkd[1373]: eth0: Link UP Feb 13 15:27:09.565870 systemd-networkd[1373]: eth0: Gained carrier Feb 13 15:27:09.565883 systemd-networkd[1373]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:27:09.568245 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 15:27:09.576842 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 15:27:09.578632 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 15:27:09.588173 systemd-networkd[1373]: eth0: DHCPv4 address 10.0.0.93/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 15:27:09.589693 systemd-timesyncd[1390]: Network configuration changed, trying to establish connection. Feb 13 15:27:09.590239 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:27:09.590492 systemd-timesyncd[1390]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 13 15:27:09.590543 systemd-timesyncd[1390]: Initial clock synchronization to Thu 2025-02-13 15:27:09.947894 UTC. Feb 13 15:27:09.608122 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 15:27:09.618226 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 15:27:09.627534 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:27:09.635968 lvm[1402]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:27:09.666641 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 15:27:09.668219 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. 
Feb 13 15:27:09.669322 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 15:27:09.670444 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 15:27:09.671684 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 15:27:09.673113 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 15:27:09.674259 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 15:27:09.675493 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 15:27:09.676721 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 15:27:09.676754 systemd[1]: Reached target paths.target - Path Units. Feb 13 15:27:09.677658 systemd[1]: Reached target timers.target - Timer Units. Feb 13 15:27:09.679324 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 15:27:09.681707 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 15:27:09.693963 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 15:27:09.696166 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 15:27:09.697680 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 15:27:09.698875 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 15:27:09.699835 systemd[1]: Reached target basic.target - Basic System. Feb 13 15:27:09.700838 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:27:09.700869 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
Feb 13 15:27:09.701766 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 15:27:09.703623 lvm[1409]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:27:09.703690 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 15:27:09.707304 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 15:27:09.712335 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 15:27:09.714254 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 15:27:09.714672 jq[1412]: false Feb 13 15:27:09.715229 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 15:27:09.718033 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 15:27:09.721255 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 15:27:09.724328 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 15:27:09.730611 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 15:27:09.732538 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Feb 13 15:27:09.732693 extend-filesystems[1413]: Found loop3 Feb 13 15:27:09.735678 extend-filesystems[1413]: Found loop4 Feb 13 15:27:09.735678 extend-filesystems[1413]: Found loop5 Feb 13 15:27:09.735678 extend-filesystems[1413]: Found vda Feb 13 15:27:09.735678 extend-filesystems[1413]: Found vda1 Feb 13 15:27:09.735678 extend-filesystems[1413]: Found vda2 Feb 13 15:27:09.735678 extend-filesystems[1413]: Found vda3 Feb 13 15:27:09.735678 extend-filesystems[1413]: Found usr Feb 13 15:27:09.735678 extend-filesystems[1413]: Found vda4 Feb 13 15:27:09.735678 extend-filesystems[1413]: Found vda6 Feb 13 15:27:09.735678 extend-filesystems[1413]: Found vda7 Feb 13 15:27:09.735678 extend-filesystems[1413]: Found vda9 Feb 13 15:27:09.735678 extend-filesystems[1413]: Checking size of /dev/vda9 Feb 13 15:27:09.733021 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 15:27:09.736575 dbus-daemon[1411]: [system] SELinux support is enabled Feb 13 15:27:09.769412 extend-filesystems[1413]: Resized partition /dev/vda9 Feb 13 15:27:09.736262 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 15:27:09.779652 extend-filesystems[1434]: resize2fs 1.47.1 (20-May-2024) Feb 13 15:27:09.785133 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 13 15:27:09.740025 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 15:27:09.743960 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 15:27:09.752131 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 15:27:09.786435 jq[1427]: true Feb 13 15:27:09.755181 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 15:27:09.757366 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Feb 13 15:27:09.757671 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 15:27:09.757822 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 15:27:09.762503 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 15:27:09.763181 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 15:27:09.777751 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 15:27:09.777774 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 15:27:09.780444 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 15:27:09.780465 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 15:27:09.793905 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1360) Feb 13 15:27:09.801046 update_engine[1426]: I20250213 15:27:09.800685 1426 main.cc:92] Flatcar Update Engine starting Feb 13 15:27:09.800455 (ntainerd)[1438]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 15:27:09.802377 tar[1435]: linux-arm64/helm Feb 13 15:27:09.806336 update_engine[1426]: I20250213 15:27:09.803841 1426 update_check_scheduler.cc:74] Next update check in 3m24s Feb 13 15:27:09.805549 systemd-logind[1424]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 15:27:09.806879 jq[1437]: true Feb 13 15:27:09.806679 systemd[1]: Started update-engine.service - Update Engine. Feb 13 15:27:09.809192 systemd-logind[1424]: New seat seat0. 
Feb 13 15:27:09.810550 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 15:27:09.818541 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 15:27:09.836096 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 13 15:27:09.851562 extend-filesystems[1434]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 15:27:09.851562 extend-filesystems[1434]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 15:27:09.851562 extend-filesystems[1434]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 13 15:27:09.865978 extend-filesystems[1413]: Resized filesystem in /dev/vda9 Feb 13 15:27:09.853302 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 15:27:09.853478 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 15:27:09.877088 bash[1467]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:27:09.876928 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 15:27:09.878997 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Feb 13 15:27:09.891762 locksmithd[1453]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 15:27:10.014121 sshd_keygen[1436]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 15:27:10.014269 containerd[1438]: time="2025-02-13T15:27:10.012779122Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 15:27:10.032881 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 15:27:10.042218 containerd[1438]: time="2025-02-13T15:27:10.042159710Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:27:10.043667 containerd[1438]: time="2025-02-13T15:27:10.043612826Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:27:10.043667 containerd[1438]: time="2025-02-13T15:27:10.043661258Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 15:27:10.043737 containerd[1438]: time="2025-02-13T15:27:10.043681734Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 15:27:10.043866 containerd[1438]: time="2025-02-13T15:27:10.043844412Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 15:27:10.043916 containerd[1438]: time="2025-02-13T15:27:10.043868774Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 15:27:10.044017 containerd[1438]: time="2025-02-13T15:27:10.043983857Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:27:10.044017 containerd[1438]: time="2025-02-13T15:27:10.044012189Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:27:10.044416 containerd[1438]: time="2025-02-13T15:27:10.044378790Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:27:10.044416 containerd[1438]: time="2025-02-13T15:27:10.044413349Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Feb 13 15:27:10.044477 containerd[1438]: time="2025-02-13T15:27:10.044430440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:27:10.044477 containerd[1438]: time="2025-02-13T15:27:10.044441095Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 15:27:10.044424 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 15:27:10.044579 containerd[1438]: time="2025-02-13T15:27:10.044532359Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:27:10.044769 containerd[1438]: time="2025-02-13T15:27:10.044743678Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:27:10.044871 containerd[1438]: time="2025-02-13T15:27:10.044849526Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:27:10.044871 containerd[1438]: time="2025-02-13T15:27:10.044866993Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 15:27:10.044990 containerd[1438]: time="2025-02-13T15:27:10.044966071Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 15:27:10.045045 containerd[1438]: time="2025-02-13T15:27:10.045025618Z" level=info msg="metadata content store policy set" policy=shared Feb 13 15:27:10.050468 containerd[1438]: time="2025-02-13T15:27:10.050425183Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." 
type=io.containerd.gc.v1 Feb 13 15:27:10.050550 containerd[1438]: time="2025-02-13T15:27:10.050491584Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 15:27:10.051231 containerd[1438]: time="2025-02-13T15:27:10.051195284Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 15:27:10.051274 containerd[1438]: time="2025-02-13T15:27:10.051246557Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 15:27:10.051428 containerd[1438]: time="2025-02-13T15:27:10.051272215Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 15:27:10.051592 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 15:27:10.051673 containerd[1438]: time="2025-02-13T15:27:10.051610526Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 15:27:10.051839 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 15:27:10.051970 containerd[1438]: time="2025-02-13T15:27:10.051942694Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 15:27:10.052088 containerd[1438]: time="2025-02-13T15:27:10.052064672Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 15:27:10.052136 containerd[1438]: time="2025-02-13T15:27:10.052094634Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 15:27:10.052173 containerd[1438]: time="2025-02-13T15:27:10.052160532Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 15:27:10.052197 containerd[1438]: time="2025-02-13T15:27:10.052184853Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Feb 13 15:27:10.052219 containerd[1438]: time="2025-02-13T15:27:10.052203950Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 15:27:10.052248 containerd[1438]: time="2025-02-13T15:27:10.052221542Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 15:27:10.052271 containerd[1438]: time="2025-02-13T15:27:10.052247450Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 15:27:10.052295 containerd[1438]: time="2025-02-13T15:27:10.052269138Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 15:27:10.052295 containerd[1438]: time="2025-02-13T15:27:10.052288193Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 15:27:10.052329 containerd[1438]: time="2025-02-13T15:27:10.052305786Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 15:27:10.052329 containerd[1438]: time="2025-02-13T15:27:10.052324214Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 15:27:10.052364 containerd[1438]: time="2025-02-13T15:27:10.052351794Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 15:27:10.052383 containerd[1438]: time="2025-02-13T15:27:10.052371893Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 15:27:10.052405 containerd[1438]: time="2025-02-13T15:27:10.052390405Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Feb 13 15:27:10.052427 containerd[1438]: time="2025-02-13T15:27:10.052406911Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 15:27:10.052427 containerd[1438]: time="2025-02-13T15:27:10.052423835Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 15:27:10.052466 containerd[1438]: time="2025-02-13T15:27:10.052441511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 15:27:10.052466 containerd[1438]: time="2025-02-13T15:27:10.052458352Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 15:27:10.052503 containerd[1438]: time="2025-02-13T15:27:10.052484928Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 15:27:10.052522 containerd[1438]: time="2025-02-13T15:27:10.052503482Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 15:27:10.052541 containerd[1438]: time="2025-02-13T15:27:10.052529348Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 15:27:10.052559 containerd[1438]: time="2025-02-13T15:27:10.052543431Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 15:27:10.052606 containerd[1438]: time="2025-02-13T15:27:10.052586096Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 15:27:10.053920 containerd[1438]: time="2025-02-13T15:27:10.052613216Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 15:27:10.053984 containerd[1438]: time="2025-02-13T15:27:10.053930147Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 Feb 13 15:27:10.053984 containerd[1438]: time="2025-02-13T15:27:10.053972436Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 15:27:10.054027 containerd[1438]: time="2025-02-13T15:27:10.053995377Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 15:27:10.054027 containerd[1438]: time="2025-02-13T15:27:10.054013012Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 15:27:10.054283 containerd[1438]: time="2025-02-13T15:27:10.054239583Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 15:27:10.054283 containerd[1438]: time="2025-02-13T15:27:10.054273013Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 15:27:10.054365 containerd[1438]: time="2025-02-13T15:27:10.054289478Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 15:27:10.054365 containerd[1438]: time="2025-02-13T15:27:10.054308198Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 15:27:10.054365 containerd[1438]: time="2025-02-13T15:27:10.054331098Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 15:27:10.054365 containerd[1438]: time="2025-02-13T15:27:10.054352493Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 15:27:10.054436 containerd[1438]: time="2025-02-13T15:27:10.054364611Z" level=info msg="NRI interface is disabled by configuration." 
Feb 13 15:27:10.054436 containerd[1438]: time="2025-02-13T15:27:10.054381535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 15:27:10.054778 containerd[1438]: time="2025-02-13T15:27:10.054717047Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} 
MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 15:27:10.054902 containerd[1438]: time="2025-02-13T15:27:10.054779352Z" level=info msg="Connect containerd service" Feb 13 15:27:10.054902 containerd[1438]: time="2025-02-13T15:27:10.054825987Z" level=info msg="using legacy CRI server" Feb 13 15:27:10.054902 containerd[1438]: time="2025-02-13T15:27:10.054833634Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 15:27:10.055123 containerd[1438]: time="2025-02-13T15:27:10.055088370Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 15:27:10.055937 containerd[1438]: time="2025-02-13T15:27:10.055895745Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:27:10.056748 containerd[1438]: time="2025-02-13T15:27:10.056704750Z" level=info msg="Start subscribing containerd event" Feb 13 15:27:10.056791 containerd[1438]: time="2025-02-13T15:27:10.056764088Z" level=info msg="Start recovering state" Feb 13 15:27:10.056854 containerd[1438]: 
time="2025-02-13T15:27:10.056832578Z" level=info msg="Start event monitor" Feb 13 15:27:10.056854 containerd[1438]: time="2025-02-13T15:27:10.056851341Z" level=info msg="Start snapshots syncer" Feb 13 15:27:10.056902 containerd[1438]: time="2025-02-13T15:27:10.056862331Z" level=info msg="Start cni network conf syncer for default" Feb 13 15:27:10.056902 containerd[1438]: time="2025-02-13T15:27:10.056870605Z" level=info msg="Start streaming server" Feb 13 15:27:10.057420 containerd[1438]: time="2025-02-13T15:27:10.057390566Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 15:27:10.057478 containerd[1438]: time="2025-02-13T15:27:10.057443636Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 15:27:10.060222 containerd[1438]: time="2025-02-13T15:27:10.060191202Z" level=info msg="containerd successfully booted in 0.050706s" Feb 13 15:27:10.063386 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 15:27:10.064679 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 15:27:10.077158 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 15:27:10.087619 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 15:27:10.090059 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Feb 13 15:27:10.091516 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 15:27:10.177778 tar[1435]: linux-arm64/LICENSE Feb 13 15:27:10.177778 tar[1435]: linux-arm64/README.md Feb 13 15:27:10.190596 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 15:27:10.936890 systemd-networkd[1373]: eth0: Gained IPv6LL Feb 13 15:27:10.940200 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 15:27:10.942133 systemd[1]: Reached target network-online.target - Network is Online. 
Feb 13 15:27:10.954386 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 15:27:10.957064 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:27:10.959363 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 15:27:10.976376 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 15:27:10.976587 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 15:27:10.978349 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 15:27:10.981912 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 15:27:11.471658 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:27:11.473397 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 15:27:11.476984 (kubelet)[1525]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:27:11.478238 systemd[1]: Startup finished in 629ms (kernel) + 5.322s (initrd) + 3.458s (userspace) = 9.410s. Feb 13 15:27:11.937648 kubelet[1525]: E0213 15:27:11.937538 1525 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:27:11.940335 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:27:11.940491 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:27:16.280790 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 15:27:16.281894 systemd[1]: Started sshd@0-10.0.0.93:22-10.0.0.1:58374.service - OpenSSH per-connection server daemon (10.0.0.1:58374). 
Feb 13 15:27:16.342264 sshd[1538]: Accepted publickey for core from 10.0.0.1 port 58374 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:27:16.346314 sshd-session[1538]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:27:16.362454 systemd-logind[1424]: New session 1 of user core. Feb 13 15:27:16.363449 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 15:27:16.373335 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 15:27:16.386356 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 15:27:16.388958 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 15:27:16.401542 (systemd)[1542]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 15:27:16.492839 systemd[1542]: Queued start job for default target default.target. Feb 13 15:27:16.503224 systemd[1542]: Created slice app.slice - User Application Slice. Feb 13 15:27:16.503279 systemd[1542]: Reached target paths.target - Paths. Feb 13 15:27:16.503291 systemd[1542]: Reached target timers.target - Timers. Feb 13 15:27:16.504614 systemd[1542]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 15:27:16.515052 systemd[1542]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 15:27:16.515291 systemd[1542]: Reached target sockets.target - Sockets. Feb 13 15:27:16.515315 systemd[1542]: Reached target basic.target - Basic System. Feb 13 15:27:16.515357 systemd[1542]: Reached target default.target - Main User Target. Feb 13 15:27:16.515387 systemd[1542]: Startup finished in 106ms. Feb 13 15:27:16.515497 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 15:27:16.516874 systemd[1]: Started session-1.scope - Session 1 of User core. 
Feb 13 15:27:16.577026 systemd[1]: Started sshd@1-10.0.0.93:22-10.0.0.1:58380.service - OpenSSH per-connection server daemon (10.0.0.1:58380). Feb 13 15:27:16.631140 sshd[1553]: Accepted publickey for core from 10.0.0.1 port 58380 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:27:16.631816 sshd-session[1553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:27:16.636138 systemd-logind[1424]: New session 2 of user core. Feb 13 15:27:16.659323 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 15:27:16.712148 sshd[1555]: Connection closed by 10.0.0.1 port 58380 Feb 13 15:27:16.712515 sshd-session[1553]: pam_unix(sshd:session): session closed for user core Feb 13 15:27:16.723719 systemd[1]: sshd@1-10.0.0.93:22-10.0.0.1:58380.service: Deactivated successfully. Feb 13 15:27:16.726500 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 15:27:16.727841 systemd-logind[1424]: Session 2 logged out. Waiting for processes to exit. Feb 13 15:27:16.729566 systemd[1]: Started sshd@2-10.0.0.93:22-10.0.0.1:58394.service - OpenSSH per-connection server daemon (10.0.0.1:58394). Feb 13 15:27:16.730739 systemd-logind[1424]: Removed session 2. Feb 13 15:27:16.770934 sshd[1560]: Accepted publickey for core from 10.0.0.1 port 58394 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:27:16.772298 sshd-session[1560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:27:16.776717 systemd-logind[1424]: New session 3 of user core. Feb 13 15:27:16.787313 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 15:27:16.837322 sshd[1562]: Connection closed by 10.0.0.1 port 58394 Feb 13 15:27:16.837444 sshd-session[1560]: pam_unix(sshd:session): session closed for user core Feb 13 15:27:16.851735 systemd[1]: sshd@2-10.0.0.93:22-10.0.0.1:58394.service: Deactivated successfully. 
Feb 13 15:27:16.855560 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 15:27:16.856916 systemd-logind[1424]: Session 3 logged out. Waiting for processes to exit. Feb 13 15:27:16.858977 systemd[1]: Started sshd@3-10.0.0.93:22-10.0.0.1:58410.service - OpenSSH per-connection server daemon (10.0.0.1:58410). Feb 13 15:27:16.859873 systemd-logind[1424]: Removed session 3. Feb 13 15:27:16.900768 sshd[1567]: Accepted publickey for core from 10.0.0.1 port 58410 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:27:16.902343 sshd-session[1567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:27:16.906913 systemd-logind[1424]: New session 4 of user core. Feb 13 15:27:16.914301 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 15:27:16.967670 sshd[1569]: Connection closed by 10.0.0.1 port 58410 Feb 13 15:27:16.968121 sshd-session[1567]: pam_unix(sshd:session): session closed for user core Feb 13 15:27:16.977723 systemd[1]: sshd@3-10.0.0.93:22-10.0.0.1:58410.service: Deactivated successfully. Feb 13 15:27:16.980788 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 15:27:16.982078 systemd-logind[1424]: Session 4 logged out. Waiting for processes to exit. Feb 13 15:27:16.995486 systemd[1]: Started sshd@4-10.0.0.93:22-10.0.0.1:58424.service - OpenSSH per-connection server daemon (10.0.0.1:58424). Feb 13 15:27:16.996431 systemd-logind[1424]: Removed session 4. Feb 13 15:27:17.034057 sshd[1574]: Accepted publickey for core from 10.0.0.1 port 58424 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:27:17.035428 sshd-session[1574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:27:17.039672 systemd-logind[1424]: New session 5 of user core. Feb 13 15:27:17.051325 systemd[1]: Started session-5.scope - Session 5 of User core. 
Feb 13 15:27:17.111340 sudo[1577]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 15:27:17.111666 sudo[1577]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:27:17.125251 sudo[1577]: pam_unix(sudo:session): session closed for user root Feb 13 15:27:17.127150 sshd[1576]: Connection closed by 10.0.0.1 port 58424 Feb 13 15:27:17.127634 sshd-session[1574]: pam_unix(sshd:session): session closed for user core Feb 13 15:27:17.143806 systemd[1]: sshd@4-10.0.0.93:22-10.0.0.1:58424.service: Deactivated successfully. Feb 13 15:27:17.146695 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 15:27:17.148147 systemd-logind[1424]: Session 5 logged out. Waiting for processes to exit. Feb 13 15:27:17.149580 systemd[1]: Started sshd@5-10.0.0.93:22-10.0.0.1:58430.service - OpenSSH per-connection server daemon (10.0.0.1:58430). Feb 13 15:27:17.150373 systemd-logind[1424]: Removed session 5. Feb 13 15:27:17.202288 sshd[1582]: Accepted publickey for core from 10.0.0.1 port 58430 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:27:17.203624 sshd-session[1582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:27:17.207852 systemd-logind[1424]: New session 6 of user core. Feb 13 15:27:17.215320 systemd[1]: Started session-6.scope - Session 6 of User core. 
Feb 13 15:27:17.268139 sudo[1586]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 15:27:17.268435 sudo[1586]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:27:17.271852 sudo[1586]: pam_unix(sudo:session): session closed for user root Feb 13 15:27:17.277157 sudo[1585]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 15:27:17.277468 sudo[1585]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:27:17.295426 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:27:17.320258 augenrules[1608]: No rules Feb 13 15:27:17.321589 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:27:17.321779 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:27:17.322790 sudo[1585]: pam_unix(sudo:session): session closed for user root Feb 13 15:27:17.324204 sshd[1584]: Connection closed by 10.0.0.1 port 58430 Feb 13 15:27:17.324751 sshd-session[1582]: pam_unix(sshd:session): session closed for user core Feb 13 15:27:17.336663 systemd[1]: sshd@5-10.0.0.93:22-10.0.0.1:58430.service: Deactivated successfully. Feb 13 15:27:17.338227 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 15:27:17.339688 systemd-logind[1424]: Session 6 logged out. Waiting for processes to exit. Feb 13 15:27:17.340745 systemd[1]: Started sshd@6-10.0.0.93:22-10.0.0.1:58446.service - OpenSSH per-connection server daemon (10.0.0.1:58446). Feb 13 15:27:17.341410 systemd-logind[1424]: Removed session 6. Feb 13 15:27:17.401238 sshd[1616]: Accepted publickey for core from 10.0.0.1 port 58446 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:27:17.402878 sshd-session[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:27:17.406677 systemd-logind[1424]: New session 7 of user core. 
Feb 13 15:27:17.416280 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 15:27:17.468034 sudo[1619]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 15:27:17.468358 sudo[1619]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:27:17.799491 (dockerd)[1641]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 15:27:17.799972 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 15:27:18.055207 dockerd[1641]: time="2025-02-13T15:27:18.054973809Z" level=info msg="Starting up" Feb 13 15:27:18.206051 dockerd[1641]: time="2025-02-13T15:27:18.205947531Z" level=info msg="Loading containers: start." Feb 13 15:27:18.357113 kernel: Initializing XFRM netlink socket Feb 13 15:27:18.432951 systemd-networkd[1373]: docker0: Link UP Feb 13 15:27:18.467569 dockerd[1641]: time="2025-02-13T15:27:18.467525614Z" level=info msg="Loading containers: done." Feb 13 15:27:18.480563 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck227086086-merged.mount: Deactivated successfully. 
Feb 13 15:27:18.483057 dockerd[1641]: time="2025-02-13T15:27:18.482991860Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 15:27:18.483165 dockerd[1641]: time="2025-02-13T15:27:18.483135513Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Feb 13 15:27:18.483282 dockerd[1641]: time="2025-02-13T15:27:18.483256299Z" level=info msg="Daemon has completed initialization" Feb 13 15:27:18.518196 dockerd[1641]: time="2025-02-13T15:27:18.518134419Z" level=info msg="API listen on /run/docker.sock" Feb 13 15:27:18.519842 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 15:27:19.049850 containerd[1438]: time="2025-02-13T15:27:19.049807643Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\"" Feb 13 15:27:19.950489 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2769277400.mount: Deactivated successfully. 
Feb 13 15:27:20.893741 containerd[1438]: time="2025-02-13T15:27:20.893692706Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:20.895309 containerd[1438]: time="2025-02-13T15:27:20.894221612Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.6: active requests=0, bytes read=25620377" Feb 13 15:27:20.895309 containerd[1438]: time="2025-02-13T15:27:20.895257366Z" level=info msg="ImageCreate event name:\"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:20.898410 containerd[1438]: time="2025-02-13T15:27:20.898361838Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:20.899725 containerd[1438]: time="2025-02-13T15:27:20.899686914Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.6\" with image id \"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\", size \"25617175\" in 1.849830199s" Feb 13 15:27:20.899786 containerd[1438]: time="2025-02-13T15:27:20.899731391Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\" returns image reference \"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\"" Feb 13 15:27:20.901172 containerd[1438]: time="2025-02-13T15:27:20.901141698Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\"" Feb 13 15:27:22.084999 containerd[1438]: time="2025-02-13T15:27:22.084934219Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.6\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:22.085525 containerd[1438]: time="2025-02-13T15:27:22.085464347Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.6: active requests=0, bytes read=22471775" Feb 13 15:27:22.086599 containerd[1438]: time="2025-02-13T15:27:22.086545391Z" level=info msg="ImageCreate event name:\"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:22.090245 containerd[1438]: time="2025-02-13T15:27:22.090136383Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:22.091370 containerd[1438]: time="2025-02-13T15:27:22.091329991Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.6\" with image id \"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\", size \"23875502\" in 1.190150762s" Feb 13 15:27:22.091370 containerd[1438]: time="2025-02-13T15:27:22.091369342Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\" returns image reference \"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\"" Feb 13 15:27:22.092203 containerd[1438]: time="2025-02-13T15:27:22.091830818Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\"" Feb 13 15:27:22.190775 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 15:27:22.204349 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:27:22.307792 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 15:27:22.312678 (kubelet)[1904]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:27:22.353245 kubelet[1904]: E0213 15:27:22.353031 1904 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:27:22.356661 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:27:22.356812 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:27:23.303143 containerd[1438]: time="2025-02-13T15:27:23.302990603Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:23.303553 containerd[1438]: time="2025-02-13T15:27:23.303493654Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.6: active requests=0, bytes read=17024542" Feb 13 15:27:23.304530 containerd[1438]: time="2025-02-13T15:27:23.304495361Z" level=info msg="ImageCreate event name:\"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:23.308043 containerd[1438]: time="2025-02-13T15:27:23.307994503Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:23.309213 containerd[1438]: time="2025-02-13T15:27:23.309164365Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.6\" with image id \"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.6\", repo 
digest \"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\", size \"18428287\" in 1.217301671s" Feb 13 15:27:23.309213 containerd[1438]: time="2025-02-13T15:27:23.309203591Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\" returns image reference \"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\"" Feb 13 15:27:23.310234 containerd[1438]: time="2025-02-13T15:27:23.310197074Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\"" Feb 13 15:27:24.234286 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount234571083.mount: Deactivated successfully. Feb 13 15:27:24.450454 containerd[1438]: time="2025-02-13T15:27:24.450391810Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:24.450918 containerd[1438]: time="2025-02-13T15:27:24.450876487Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.6: active requests=0, bytes read=26769258" Feb 13 15:27:24.451808 containerd[1438]: time="2025-02-13T15:27:24.451777977Z" level=info msg="ImageCreate event name:\"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:24.454007 containerd[1438]: time="2025-02-13T15:27:24.453943880Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:24.455341 containerd[1438]: time="2025-02-13T15:27:24.454542859Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.6\" with image id \"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\", repo tag \"registry.k8s.io/kube-proxy:v1.31.6\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\", size \"26768275\" in 1.144304315s" Feb 13 15:27:24.455341 containerd[1438]: time="2025-02-13T15:27:24.454581725Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\" returns image reference \"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\"" Feb 13 15:27:24.455701 containerd[1438]: time="2025-02-13T15:27:24.455680163Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 15:27:25.029021 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1647252424.mount: Deactivated successfully. Feb 13 15:27:25.722242 containerd[1438]: time="2025-02-13T15:27:25.722192110Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:25.723251 containerd[1438]: time="2025-02-13T15:27:25.722976491Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Feb 13 15:27:25.724381 containerd[1438]: time="2025-02-13T15:27:25.723943165Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:25.727260 containerd[1438]: time="2025-02-13T15:27:25.727185719Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:25.728795 containerd[1438]: time="2025-02-13T15:27:25.728557098Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.2727448s" Feb 13 15:27:25.728795 containerd[1438]: time="2025-02-13T15:27:25.728597541Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Feb 13 15:27:25.729138 containerd[1438]: time="2025-02-13T15:27:25.729106069Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Feb 13 15:27:26.184475 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount42405599.mount: Deactivated successfully. Feb 13 15:27:26.189951 containerd[1438]: time="2025-02-13T15:27:26.189900107Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:26.190697 containerd[1438]: time="2025-02-13T15:27:26.190404474Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Feb 13 15:27:26.191531 containerd[1438]: time="2025-02-13T15:27:26.191488363Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:26.193924 containerd[1438]: time="2025-02-13T15:27:26.193883332Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:26.194894 containerd[1438]: time="2025-02-13T15:27:26.194860139Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 465.71786ms" Feb 13 
15:27:26.194894 containerd[1438]: time="2025-02-13T15:27:26.194892871Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Feb 13 15:27:26.195473 containerd[1438]: time="2025-02-13T15:27:26.195424179Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Feb 13 15:27:26.768060 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2574439120.mount: Deactivated successfully. Feb 13 15:27:28.528831 containerd[1438]: time="2025-02-13T15:27:28.528767052Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:28.529484 containerd[1438]: time="2025-02-13T15:27:28.529411444Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406427" Feb 13 15:27:28.530054 containerd[1438]: time="2025-02-13T15:27:28.530026760Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:28.533524 containerd[1438]: time="2025-02-13T15:27:28.533480053Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:28.534886 containerd[1438]: time="2025-02-13T15:27:28.534856109Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.339377417s" Feb 13 15:27:28.534922 containerd[1438]: time="2025-02-13T15:27:28.534886511Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image 
reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Feb 13 15:27:32.607111 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 15:27:32.622630 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:27:32.765675 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:27:32.769103 (kubelet)[2057]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:27:32.801712 kubelet[2057]: E0213 15:27:32.801652 2057 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:27:32.804217 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:27:32.804359 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:27:34.505293 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:27:34.517564 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:27:34.541404 systemd[1]: Reloading requested from client PID 2072 ('systemctl') (unit session-7.scope)... Feb 13 15:27:34.541419 systemd[1]: Reloading... Feb 13 15:27:34.607467 zram_generator::config[2111]: No configuration found. Feb 13 15:27:34.728703 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:27:34.781192 systemd[1]: Reloading finished in 239 ms. Feb 13 15:27:34.819106 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Feb 13 15:27:34.822425 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 15:27:34.822628 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:27:34.826231 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:27:34.918944 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:27:34.923729 (kubelet)[2158]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:27:34.978952 kubelet[2158]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:27:34.978952 kubelet[2158]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:27:34.978952 kubelet[2158]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 15:27:34.979381 kubelet[2158]: I0213 15:27:34.979308 2158 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:27:36.408671 kubelet[2158]: I0213 15:27:36.408631 2158 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 15:27:36.411053 kubelet[2158]: I0213 15:27:36.409030 2158 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:27:36.411053 kubelet[2158]: I0213 15:27:36.409346 2158 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 15:27:36.443381 kubelet[2158]: E0213 15:27:36.443324 2158 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.93:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.93:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:27:36.444337 kubelet[2158]: I0213 15:27:36.444021 2158 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:27:36.454271 kubelet[2158]: E0213 15:27:36.454236 2158 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 15:27:36.454271 kubelet[2158]: I0213 15:27:36.454272 2158 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 15:27:36.458210 kubelet[2158]: I0213 15:27:36.458171 2158 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 15:27:36.459008 kubelet[2158]: I0213 15:27:36.458983 2158 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 15:27:36.459173 kubelet[2158]: I0213 15:27:36.459140 2158 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:27:36.459340 kubelet[2158]: I0213 15:27:36.459170 2158 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} Feb 13 15:27:36.459480 kubelet[2158]: I0213 15:27:36.459461 2158 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:27:36.459480 kubelet[2158]: I0213 15:27:36.459473 2158 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 15:27:36.459664 kubelet[2158]: I0213 15:27:36.459645 2158 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:27:36.461113 kubelet[2158]: I0213 15:27:36.460936 2158 kubelet.go:408] "Attempting to sync node with API server" Feb 13 15:27:36.461113 kubelet[2158]: I0213 15:27:36.460967 2158 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:27:36.461113 kubelet[2158]: I0213 15:27:36.460996 2158 kubelet.go:314] "Adding apiserver pod source" Feb 13 15:27:36.461113 kubelet[2158]: I0213 15:27:36.461007 2158 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:27:36.464134 kubelet[2158]: W0213 15:27:36.464082 2158 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.93:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.93:6443: connect: connection refused Feb 13 15:27:36.464214 kubelet[2158]: E0213 15:27:36.464187 2158 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.93:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.93:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:27:36.465474 kubelet[2158]: W0213 15:27:36.464593 2158 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.93:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.93:6443: connect: connection refused Feb 13 15:27:36.465474 
kubelet[2158]: E0213 15:27:36.464648 2158 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.93:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.93:6443: connect: connection refused" logger="UnhandledError"
Feb 13 15:27:36.467186 kubelet[2158]: I0213 15:27:36.467157 2158 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Feb 13 15:27:36.469758 kubelet[2158]: I0213 15:27:36.469741 2158 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 13 15:27:36.473139 kubelet[2158]: W0213 15:27:36.472631 2158 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 13 15:27:36.473428 kubelet[2158]: I0213 15:27:36.473412 2158 server.go:1269] "Started kubelet"
Feb 13 15:27:36.474104 kubelet[2158]: I0213 15:27:36.474030 2158 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 13 15:27:36.474421 kubelet[2158]: I0213 15:27:36.474391 2158 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 13 15:27:36.474556 kubelet[2158]: I0213 15:27:36.474526 2158 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 15:27:36.477256 kubelet[2158]: I0213 15:27:36.475231 2158 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 13 15:27:36.477256 kubelet[2158]: I0213 15:27:36.475855 2158 server.go:460] "Adding debug handlers to kubelet server"
Feb 13 15:27:36.477256 kubelet[2158]: I0213 15:27:36.476755 2158 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Feb 13 15:27:36.477256 kubelet[2158]: E0213 15:27:36.475866 2158 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.93:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.93:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823ce11b0ef5d1d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 15:27:36.473386269 +0000 UTC m=+1.546288225,LastTimestamp:2025-02-13 15:27:36.473386269 +0000 UTC m=+1.546288225,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Feb 13 15:27:36.478475 kubelet[2158]: E0213 15:27:36.478431 2158 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.93:6443: connect: connection refused" interval="200ms"
Feb 13 15:27:36.478598 kubelet[2158]: E0213 15:27:36.478572 2158 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 13 15:27:36.478752 kubelet[2158]: I0213 15:27:36.478739 2158 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 13 15:27:36.478832 kubelet[2158]: I0213 15:27:36.478820 2158 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Feb 13 15:27:36.478897 kubelet[2158]: I0213 15:27:36.478884 2158 reconciler.go:26] "Reconciler: start to sync state"
Feb 13 15:27:36.484978 kubelet[2158]: W0213 15:27:36.480325 2158 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.93:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.93:6443: connect: connection refused
Feb 13 15:27:36.484978 kubelet[2158]: E0213 15:27:36.480391 2158 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.93:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.93:6443: connect: connection refused" logger="UnhandledError"
Feb 13 15:27:36.485563 kubelet[2158]: I0213 15:27:36.485540 2158 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 13 15:27:36.485831 kubelet[2158]: E0213 15:27:36.485633 2158 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 13 15:27:36.487823 kubelet[2158]: I0213 15:27:36.487787 2158 factory.go:221] Registration of the containerd container factory successfully
Feb 13 15:27:36.487893 kubelet[2158]: I0213 15:27:36.487831 2158 factory.go:221] Registration of the systemd container factory successfully
Feb 13 15:27:36.493024 kubelet[2158]: I0213 15:27:36.492976 2158 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 13 15:27:36.493955 kubelet[2158]: I0213 15:27:36.493926 2158 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 13 15:27:36.494037 kubelet[2158]: I0213 15:27:36.494019 2158 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 13 15:27:36.494137 kubelet[2158]: I0213 15:27:36.494044 2158 kubelet.go:2321] "Starting kubelet main sync loop"
Feb 13 15:27:36.494316 kubelet[2158]: E0213 15:27:36.494279 2158 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 13 15:27:36.501327 kubelet[2158]: I0213 15:27:36.501307 2158 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 13 15:27:36.501327 kubelet[2158]: I0213 15:27:36.501323 2158 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 13 15:27:36.501416 kubelet[2158]: I0213 15:27:36.501342 2158 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:27:36.501416 kubelet[2158]: W0213 15:27:36.501337 2158 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.93:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.93:6443: connect: connection refused
Feb 13 15:27:36.501416 kubelet[2158]: E0213 15:27:36.501386 2158 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.93:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.93:6443: connect: connection refused" logger="UnhandledError"
Feb 13 15:27:36.570461 kubelet[2158]: I0213 15:27:36.570427 2158 policy_none.go:49] "None policy: Start"
Feb 13 15:27:36.571055 kubelet[2158]: I0213 15:27:36.571031 2158 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 13 15:27:36.571055 kubelet[2158]: I0213 15:27:36.571060 2158 state_mem.go:35] "Initializing new in-memory state store"
Feb 13 15:27:36.578015 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Feb 13 15:27:36.578834 kubelet[2158]: E0213 15:27:36.578719 2158 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 13 15:27:36.593665 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Feb 13 15:27:36.594598 kubelet[2158]: E0213 15:27:36.594390 2158 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Feb 13 15:27:36.607585 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Feb 13 15:27:36.609008 kubelet[2158]: I0213 15:27:36.608844 2158 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 13 15:27:36.609375 kubelet[2158]: I0213 15:27:36.609049 2158 eviction_manager.go:189] "Eviction manager: starting control loop"
Feb 13 15:27:36.609375 kubelet[2158]: I0213 15:27:36.609062 2158 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 13 15:27:36.609375 kubelet[2158]: I0213 15:27:36.609354 2158 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 15:27:36.610449 kubelet[2158]: E0213 15:27:36.610423 2158 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Feb 13 15:27:36.679943 kubelet[2158]: E0213 15:27:36.679819 2158 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.93:6443: connect: connection refused" interval="400ms"
Feb 13 15:27:36.711046 kubelet[2158]: I0213 15:27:36.711004 2158 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Feb 13 15:27:36.711417 kubelet[2158]: E0213 15:27:36.711390 2158 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.93:6443/api/v1/nodes\": dial tcp 10.0.0.93:6443: connect: connection refused" node="localhost"
Feb 13 15:27:36.802214 systemd[1]: Created slice kubepods-burstable-pod010c778a25b514b5ad45c75ad63f18c3.slice - libcontainer container kubepods-burstable-pod010c778a25b514b5ad45c75ad63f18c3.slice.
Feb 13 15:27:36.822949 systemd[1]: Created slice kubepods-burstable-pod98eb2295280bc6da80e83f7636be329c.slice - libcontainer container kubepods-burstable-pod98eb2295280bc6da80e83f7636be329c.slice.
Feb 13 15:27:36.834370 systemd[1]: Created slice kubepods-burstable-pod04cca2c455deeb5da380812dcab224d8.slice - libcontainer container kubepods-burstable-pod04cca2c455deeb5da380812dcab224d8.slice.
Feb 13 15:27:36.880103 kubelet[2158]: I0213 15:27:36.880058 2158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/010c778a25b514b5ad45c75ad63f18c3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"010c778a25b514b5ad45c75ad63f18c3\") " pod="kube-system/kube-apiserver-localhost"
Feb 13 15:27:36.880103 kubelet[2158]: I0213 15:27:36.880106 2158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:27:36.880260 kubelet[2158]: I0213 15:27:36.880133 2158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:27:36.880260 kubelet[2158]: I0213 15:27:36.880161 2158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:27:36.880260 kubelet[2158]: I0213 15:27:36.880176 2158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:27:36.880260 kubelet[2158]: I0213 15:27:36.880192 2158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/04cca2c455deeb5da380812dcab224d8-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"04cca2c455deeb5da380812dcab224d8\") " pod="kube-system/kube-scheduler-localhost"
Feb 13 15:27:36.880260 kubelet[2158]: I0213 15:27:36.880206 2158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/010c778a25b514b5ad45c75ad63f18c3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"010c778a25b514b5ad45c75ad63f18c3\") " pod="kube-system/kube-apiserver-localhost"
Feb 13 15:27:36.880365 kubelet[2158]: I0213 15:27:36.880222 2158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/010c778a25b514b5ad45c75ad63f18c3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"010c778a25b514b5ad45c75ad63f18c3\") " pod="kube-system/kube-apiserver-localhost"
Feb 13 15:27:36.880365 kubelet[2158]: I0213 15:27:36.880238 2158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:27:36.913270 kubelet[2158]: I0213 15:27:36.913245 2158 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Feb 13 15:27:36.913591 kubelet[2158]: E0213 15:27:36.913566 2158 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.93:6443/api/v1/nodes\": dial tcp 10.0.0.93:6443: connect: connection refused" node="localhost"
Feb 13 15:27:37.080610 kubelet[2158]: E0213 15:27:37.080487 2158 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.93:6443: connect: connection refused" interval="800ms"
Feb 13 15:27:37.119923 kubelet[2158]: E0213 15:27:37.119884 2158 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:27:37.124300 containerd[1438]: time="2025-02-13T15:27:37.124243562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:010c778a25b514b5ad45c75ad63f18c3,Namespace:kube-system,Attempt:0,}"
Feb 13 15:27:37.133576 kubelet[2158]: E0213 15:27:37.133364 2158 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:27:37.134561 containerd[1438]: time="2025-02-13T15:27:37.134525795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:98eb2295280bc6da80e83f7636be329c,Namespace:kube-system,Attempt:0,}"
Feb 13 15:27:37.136760 kubelet[2158]: E0213 15:27:37.136725 2158 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:27:37.137053 containerd[1438]: time="2025-02-13T15:27:37.137028230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:04cca2c455deeb5da380812dcab224d8,Namespace:kube-system,Attempt:0,}"
Feb 13 15:27:37.315354 kubelet[2158]: I0213 15:27:37.315110 2158 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Feb 13 15:27:37.315621 kubelet[2158]: E0213 15:27:37.315594 2158 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.93:6443/api/v1/nodes\": dial tcp 10.0.0.93:6443: connect: connection refused" node="localhost"
Feb 13 15:27:37.469979 kubelet[2158]: W0213 15:27:37.469805 2158 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.93:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.93:6443: connect: connection refused
Feb 13 15:27:37.469979 kubelet[2158]: E0213 15:27:37.469877 2158 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.93:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.93:6443: connect: connection refused" logger="UnhandledError"
Feb 13 15:27:37.584213 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2195517360.mount: Deactivated successfully.
Feb 13 15:27:37.593756 containerd[1438]: time="2025-02-13T15:27:37.593694931Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:27:37.594404 containerd[1438]: time="2025-02-13T15:27:37.594352408Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175"
Feb 13 15:27:37.599911 containerd[1438]: time="2025-02-13T15:27:37.599880754Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:27:37.603305 containerd[1438]: time="2025-02-13T15:27:37.603254006Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:27:37.604041 containerd[1438]: time="2025-02-13T15:27:37.603891820Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Feb 13 15:27:37.604801 containerd[1438]: time="2025-02-13T15:27:37.604771086Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:27:37.605047 containerd[1438]: time="2025-02-13T15:27:37.605012659Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Feb 13 15:27:37.605854 containerd[1438]: time="2025-02-13T15:27:37.605826927Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:27:37.606631 containerd[1438]: time="2025-02-13T15:27:37.606604110Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 482.274804ms"
Feb 13 15:27:37.610381 containerd[1438]: time="2025-02-13T15:27:37.610345007Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 473.117054ms"
Feb 13 15:27:37.612659 containerd[1438]: time="2025-02-13T15:27:37.612618405Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 478.01756ms"
Feb 13 15:27:37.663272 kubelet[2158]: W0213 15:27:37.663165 2158 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.93:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.93:6443: connect: connection refused
Feb 13 15:27:37.663272 kubelet[2158]: E0213 15:27:37.663234 2158 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.93:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.93:6443: connect: connection refused" logger="UnhandledError"
Feb 13 15:27:37.686925 kubelet[2158]: W0213 15:27:37.686816 2158 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.93:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.93:6443: connect: connection refused
Feb 13 15:27:37.686925 kubelet[2158]: E0213 15:27:37.686888 2158 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.93:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.93:6443: connect: connection refused" logger="UnhandledError"
Feb 13 15:27:37.740455 containerd[1438]: time="2025-02-13T15:27:37.740229398Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:27:37.740455 containerd[1438]: time="2025-02-13T15:27:37.740351747Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:27:37.740455 containerd[1438]: time="2025-02-13T15:27:37.740369008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:27:37.741124 containerd[1438]: time="2025-02-13T15:27:37.740975503Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:27:37.741659 containerd[1438]: time="2025-02-13T15:27:37.741558651Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:27:37.741784 containerd[1438]: time="2025-02-13T15:27:37.741650282Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:27:37.741784 containerd[1438]: time="2025-02-13T15:27:37.741769787Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:27:37.744340 containerd[1438]: time="2025-02-13T15:27:37.744151676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:27:37.747580 containerd[1438]: time="2025-02-13T15:27:37.747388122Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:27:37.747580 containerd[1438]: time="2025-02-13T15:27:37.747442788Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:27:37.747580 containerd[1438]: time="2025-02-13T15:27:37.747458287Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:27:37.747580 containerd[1438]: time="2025-02-13T15:27:37.747538304Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:27:37.762585 systemd[1]: Started cri-containerd-da2dc47848a22dea81a1388496ec98787e04dc6c85381e57cb48970e7e48f5a8.scope - libcontainer container da2dc47848a22dea81a1388496ec98787e04dc6c85381e57cb48970e7e48f5a8.
Feb 13 15:27:37.767093 systemd[1]: Started cri-containerd-1b884e14a998c0c8193ed49db45aa5e4277614aa1cac20fa5b89333d392a443f.scope - libcontainer container 1b884e14a998c0c8193ed49db45aa5e4277614aa1cac20fa5b89333d392a443f.
Feb 13 15:27:37.768916 systemd[1]: Started cri-containerd-4fdede2e9a37b7fb71369e64255b612326dd73ad264bd22ca256657fcc64fa20.scope - libcontainer container 4fdede2e9a37b7fb71369e64255b612326dd73ad264bd22ca256657fcc64fa20.
Feb 13 15:27:37.797429 kubelet[2158]: W0213 15:27:37.797362 2158 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.93:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.93:6443: connect: connection refused
Feb 13 15:27:37.797679 kubelet[2158]: E0213 15:27:37.797656 2158 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.93:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.93:6443: connect: connection refused" logger="UnhandledError"
Feb 13 15:27:37.801376 containerd[1438]: time="2025-02-13T15:27:37.801326189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:010c778a25b514b5ad45c75ad63f18c3,Namespace:kube-system,Attempt:0,} returns sandbox id \"da2dc47848a22dea81a1388496ec98787e04dc6c85381e57cb48970e7e48f5a8\""
Feb 13 15:27:37.803348 containerd[1438]: time="2025-02-13T15:27:37.803320408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:04cca2c455deeb5da380812dcab224d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"4fdede2e9a37b7fb71369e64255b612326dd73ad264bd22ca256657fcc64fa20\""
Feb 13 15:27:37.811289 containerd[1438]: time="2025-02-13T15:27:37.811253511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:98eb2295280bc6da80e83f7636be329c,Namespace:kube-system,Attempt:0,} returns sandbox id \"1b884e14a998c0c8193ed49db45aa5e4277614aa1cac20fa5b89333d392a443f\""
Feb 13 15:27:37.811561 kubelet[2158]: E0213 15:27:37.811540 2158 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:27:37.811836 kubelet[2158]: E0213 15:27:37.811817 2158 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:27:37.812139 kubelet[2158]: E0213 15:27:37.811877 2158 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:27:37.814467 containerd[1438]: time="2025-02-13T15:27:37.814439376Z" level=info msg="CreateContainer within sandbox \"da2dc47848a22dea81a1388496ec98787e04dc6c85381e57cb48970e7e48f5a8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Feb 13 15:27:37.814676 containerd[1438]: time="2025-02-13T15:27:37.814465647Z" level=info msg="CreateContainer within sandbox \"1b884e14a998c0c8193ed49db45aa5e4277614aa1cac20fa5b89333d392a443f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Feb 13 15:27:37.814919 containerd[1438]: time="2025-02-13T15:27:37.814473537Z" level=info msg="CreateContainer within sandbox \"4fdede2e9a37b7fb71369e64255b612326dd73ad264bd22ca256657fcc64fa20\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Feb 13 15:27:37.831206 containerd[1438]: time="2025-02-13T15:27:37.831151608Z" level=info msg="CreateContainer within sandbox \"4fdede2e9a37b7fb71369e64255b612326dd73ad264bd22ca256657fcc64fa20\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"885ea4a795ddf0a1c07cdf2d201fc37184bbf32389c6fcd21224b8f091ddcb1b\""
Feb 13 15:27:37.831762 containerd[1438]: time="2025-02-13T15:27:37.831728347Z" level=info msg="StartContainer for \"885ea4a795ddf0a1c07cdf2d201fc37184bbf32389c6fcd21224b8f091ddcb1b\""
Feb 13 15:27:37.832900 containerd[1438]: time="2025-02-13T15:27:37.832867970Z" level=info msg="CreateContainer within sandbox \"da2dc47848a22dea81a1388496ec98787e04dc6c85381e57cb48970e7e48f5a8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"dc4eca06ab5764bfdb4b54d9111fb668521eb467dfe1f2409d4d6aa1707b6ad4\""
Feb 13 15:27:37.834500 containerd[1438]: time="2025-02-13T15:27:37.833404540Z" level=info msg="StartContainer for \"dc4eca06ab5764bfdb4b54d9111fb668521eb467dfe1f2409d4d6aa1707b6ad4\""
Feb 13 15:27:37.837408 containerd[1438]: time="2025-02-13T15:27:37.837374556Z" level=info msg="CreateContainer within sandbox \"1b884e14a998c0c8193ed49db45aa5e4277614aa1cac20fa5b89333d392a443f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"146a0e74b35a6e031e8708acfcf94d7303650029a8e8055f4cec268c445cc65a\""
Feb 13 15:27:37.838463 containerd[1438]: time="2025-02-13T15:27:37.838435203Z" level=info msg="StartContainer for \"146a0e74b35a6e031e8708acfcf94d7303650029a8e8055f4cec268c445cc65a\""
Feb 13 15:27:37.859267 systemd[1]: Started cri-containerd-885ea4a795ddf0a1c07cdf2d201fc37184bbf32389c6fcd21224b8f091ddcb1b.scope - libcontainer container 885ea4a795ddf0a1c07cdf2d201fc37184bbf32389c6fcd21224b8f091ddcb1b.
Feb 13 15:27:37.860419 systemd[1]: Started cri-containerd-dc4eca06ab5764bfdb4b54d9111fb668521eb467dfe1f2409d4d6aa1707b6ad4.scope - libcontainer container dc4eca06ab5764bfdb4b54d9111fb668521eb467dfe1f2409d4d6aa1707b6ad4.
Feb 13 15:27:37.866228 systemd[1]: Started cri-containerd-146a0e74b35a6e031e8708acfcf94d7303650029a8e8055f4cec268c445cc65a.scope - libcontainer container 146a0e74b35a6e031e8708acfcf94d7303650029a8e8055f4cec268c445cc65a.
Feb 13 15:27:37.881464 kubelet[2158]: E0213 15:27:37.881423 2158 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.93:6443: connect: connection refused" interval="1.6s"
Feb 13 15:27:37.936710 containerd[1438]: time="2025-02-13T15:27:37.936486940Z" level=info msg="StartContainer for \"dc4eca06ab5764bfdb4b54d9111fb668521eb467dfe1f2409d4d6aa1707b6ad4\" returns successfully"
Feb 13 15:27:37.936710 containerd[1438]: time="2025-02-13T15:27:37.936664876Z" level=info msg="StartContainer for \"885ea4a795ddf0a1c07cdf2d201fc37184bbf32389c6fcd21224b8f091ddcb1b\" returns successfully"
Feb 13 15:27:37.937709 containerd[1438]: time="2025-02-13T15:27:37.937481907Z" level=info msg="StartContainer for \"146a0e74b35a6e031e8708acfcf94d7303650029a8e8055f4cec268c445cc65a\" returns successfully"
Feb 13 15:27:38.117662 kubelet[2158]: I0213 15:27:38.116792 2158 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Feb 13 15:27:38.511976 kubelet[2158]: E0213 15:27:38.511861 2158 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:27:38.516916 kubelet[2158]: E0213 15:27:38.516482 2158 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:27:38.520172 kubelet[2158]: E0213 15:27:38.520107 2158 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:27:39.521509 kubelet[2158]: E0213 15:27:39.521478 2158 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:27:39.892667 kubelet[2158]: E0213 15:27:39.892545 2158 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Feb 13 15:27:39.953392 kubelet[2158]: I0213 15:27:39.953221 2158 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Feb 13 15:27:39.953392 kubelet[2158]: E0213 15:27:39.953260 2158 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Feb 13 15:27:40.467486 kubelet[2158]: I0213 15:27:40.467377 2158 apiserver.go:52] "Watching apiserver"
Feb 13 15:27:40.478946 kubelet[2158]: I0213 15:27:40.478892 2158 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Feb 13 15:27:40.524950 kubelet[2158]: E0213 15:27:40.524916 2158 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Feb 13 15:27:40.525291 kubelet[2158]: E0213 15:27:40.525095 2158 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:27:41.910620 systemd[1]: Reloading requested from client PID 2440 ('systemctl') (unit session-7.scope)...
Feb 13 15:27:41.910634 systemd[1]: Reloading...
Feb 13 15:27:41.976191 zram_generator::config[2477]: No configuration found.
Feb 13 15:27:42.058157 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:27:42.120129 systemd[1]: Reloading finished in 209 ms.
Feb 13 15:27:42.149616 kubelet[2158]: I0213 15:27:42.149453 2158 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 15:27:42.149583 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:27:42.171119 systemd[1]: kubelet.service: Deactivated successfully.
Feb 13 15:27:42.172168 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:27:42.172230 systemd[1]: kubelet.service: Consumed 1.925s CPU time, 119.3M memory peak, 0B memory swap peak.
Feb 13 15:27:42.182344 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:27:42.268445 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:27:42.273297 (kubelet)[2521]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 15:27:42.308787 kubelet[2521]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:27:42.308787 kubelet[2521]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 13 15:27:42.308787 kubelet[2521]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:27:42.309205 kubelet[2521]: I0213 15:27:42.308839 2521 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 15:27:42.314823 kubelet[2521]: I0213 15:27:42.314778 2521 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Feb 13 15:27:42.314823 kubelet[2521]: I0213 15:27:42.314814 2521 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 15:27:42.315029 kubelet[2521]: I0213 15:27:42.315004 2521 server.go:929] "Client rotation is on, will bootstrap in background"
Feb 13 15:27:42.317021 kubelet[2521]: I0213 15:27:42.316996 2521 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 13 15:27:42.319487 kubelet[2521]: I0213 15:27:42.319466 2521 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 15:27:42.325786 kubelet[2521]: E0213 15:27:42.325748 2521 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Feb 13 15:27:42.325786 kubelet[2521]: I0213 15:27:42.325780 2521 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Feb 13 15:27:42.328000 kubelet[2521]: I0213 15:27:42.327932 2521 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 13 15:27:42.328082 kubelet[2521]: I0213 15:27:42.328060 2521 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 13 15:27:42.328201 kubelet[2521]: I0213 15:27:42.328176 2521 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 15:27:42.328357 kubelet[2521]: I0213 15:27:42.328203 2521 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} Feb 13 15:27:42.328423 kubelet[2521]: I0213 15:27:42.328367 2521 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:27:42.328423 kubelet[2521]: I0213 15:27:42.328377 2521 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 15:27:42.328423 kubelet[2521]: I0213 15:27:42.328407 2521 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:27:42.328516 kubelet[2521]: I0213 15:27:42.328504 2521 kubelet.go:408] "Attempting to sync node with API server" Feb 13 15:27:42.328539 kubelet[2521]: I0213 15:27:42.328519 2521 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:27:42.328539 kubelet[2521]: I0213 15:27:42.328534 2521 kubelet.go:314] "Adding apiserver pod source" Feb 13 15:27:42.328582 kubelet[2521]: I0213 15:27:42.328543 2521 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:27:42.334097 kubelet[2521]: I0213 15:27:42.331349 2521 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:27:42.334097 kubelet[2521]: I0213 15:27:42.331785 2521 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:27:42.334097 kubelet[2521]: I0213 15:27:42.332170 2521 server.go:1269] "Started kubelet" Feb 13 15:27:42.334097 kubelet[2521]: I0213 15:27:42.332733 2521 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:27:42.335321 kubelet[2521]: I0213 15:27:42.334243 2521 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:27:42.335321 kubelet[2521]: I0213 15:27:42.334278 2521 server.go:460] "Adding debug handlers to kubelet server" Feb 13 15:27:42.335321 kubelet[2521]: I0213 15:27:42.334289 2521 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:27:42.335321 kubelet[2521]: I0213 15:27:42.334488 2521 server.go:236] 
"Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:27:42.335321 kubelet[2521]: I0213 15:27:42.334903 2521 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 15:27:42.341998 kubelet[2521]: E0213 15:27:42.341974 2521 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:27:42.342163 kubelet[2521]: I0213 15:27:42.342148 2521 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 15:27:42.342339 kubelet[2521]: I0213 15:27:42.342323 2521 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 15:27:42.342516 kubelet[2521]: I0213 15:27:42.342502 2521 reconciler.go:26] "Reconciler: start to sync state" Feb 13 15:27:42.353991 kubelet[2521]: I0213 15:27:42.353959 2521 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:27:42.354109 kubelet[2521]: I0213 15:27:42.354046 2521 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:27:42.359024 kubelet[2521]: E0213 15:27:42.358985 2521 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:27:42.360620 kubelet[2521]: I0213 15:27:42.360599 2521 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:27:42.362610 kubelet[2521]: I0213 15:27:42.362511 2521 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:27:42.365522 kubelet[2521]: I0213 15:27:42.365498 2521 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 15:27:42.365720 kubelet[2521]: I0213 15:27:42.365708 2521 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:27:42.365797 kubelet[2521]: I0213 15:27:42.365787 2521 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 15:27:42.365908 kubelet[2521]: E0213 15:27:42.365890 2521 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:27:42.392962 kubelet[2521]: I0213 15:27:42.392937 2521 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:27:42.393145 kubelet[2521]: I0213 15:27:42.393130 2521 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:27:42.393210 kubelet[2521]: I0213 15:27:42.393201 2521 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:27:42.393413 kubelet[2521]: I0213 15:27:42.393396 2521 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 15:27:42.393483 kubelet[2521]: I0213 15:27:42.393460 2521 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 15:27:42.393527 kubelet[2521]: I0213 15:27:42.393519 2521 policy_none.go:49] "None policy: Start" Feb 13 15:27:42.394187 kubelet[2521]: I0213 15:27:42.394174 2521 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:27:42.394331 kubelet[2521]: I0213 15:27:42.394321 2521 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:27:42.394556 kubelet[2521]: I0213 15:27:42.394539 2521 state_mem.go:75] "Updated machine memory state" Feb 13 15:27:42.397880 kubelet[2521]: I0213 15:27:42.397856 2521 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:27:42.398343 kubelet[2521]: I0213 15:27:42.398328 2521 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 15:27:42.398448 kubelet[2521]: I0213 15:27:42.398417 2521 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 15:27:42.398718 kubelet[2521]: I0213 15:27:42.398698 2521 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:27:42.502028 kubelet[2521]: I0213 15:27:42.501908 2521 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Feb 13 15:27:42.511002 kubelet[2521]: I0213 15:27:42.510774 2521 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Feb 13 15:27:42.511253 kubelet[2521]: I0213 15:27:42.511218 2521 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Feb 13 15:27:42.643981 kubelet[2521]: I0213 15:27:42.643882 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/010c778a25b514b5ad45c75ad63f18c3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"010c778a25b514b5ad45c75ad63f18c3\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:27:42.643981 kubelet[2521]: I0213 15:27:42.643967 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:27:42.643981 kubelet[2521]: I0213 15:27:42.643994 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/04cca2c455deeb5da380812dcab224d8-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"04cca2c455deeb5da380812dcab224d8\") " pod="kube-system/kube-scheduler-localhost" Feb 13 15:27:42.644205 kubelet[2521]: I0213 15:27:42.644013 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:27:42.644205 kubelet[2521]: I0213 15:27:42.644030 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:27:42.644205 kubelet[2521]: I0213 15:27:42.644046 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:27:42.644205 kubelet[2521]: I0213 15:27:42.644061 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/010c778a25b514b5ad45c75ad63f18c3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"010c778a25b514b5ad45c75ad63f18c3\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:27:42.644205 kubelet[2521]: I0213 15:27:42.644117 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/010c778a25b514b5ad45c75ad63f18c3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"010c778a25b514b5ad45c75ad63f18c3\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:27:42.644316 kubelet[2521]: I0213 15:27:42.644167 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ca-certs\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:27:42.773232 kubelet[2521]: E0213 15:27:42.773115 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:42.774136 kubelet[2521]: E0213 15:27:42.774113 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:42.774281 kubelet[2521]: E0213 15:27:42.774257 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:43.329046 kubelet[2521]: I0213 15:27:43.329000 2521 apiserver.go:52] "Watching apiserver" Feb 13 15:27:43.343294 kubelet[2521]: I0213 15:27:43.343256 2521 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 15:27:43.374223 kubelet[2521]: E0213 15:27:43.373907 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:43.374223 kubelet[2521]: E0213 15:27:43.374148 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:43.375147 kubelet[2521]: E0213 15:27:43.374549 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:43.408094 kubelet[2521]: I0213 15:27:43.404386 2521 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.404367266 podStartE2EDuration="1.404367266s" podCreationTimestamp="2025-02-13 15:27:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:27:43.402319806 +0000 UTC m=+1.125817845" watchObservedRunningTime="2025-02-13 15:27:43.404367266 +0000 UTC m=+1.127865305" Feb 13 15:27:43.421689 kubelet[2521]: I0213 15:27:43.421627 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.421608892 podStartE2EDuration="1.421608892s" podCreationTimestamp="2025-02-13 15:27:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:27:43.412199193 +0000 UTC m=+1.135697232" watchObservedRunningTime="2025-02-13 15:27:43.421608892 +0000 UTC m=+1.145106931" Feb 13 15:27:43.431272 kubelet[2521]: I0213 15:27:43.431214 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.431198827 podStartE2EDuration="1.431198827s" podCreationTimestamp="2025-02-13 15:27:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:27:43.421967083 +0000 UTC m=+1.145465122" watchObservedRunningTime="2025-02-13 15:27:43.431198827 +0000 UTC m=+1.154696866" Feb 13 15:27:44.377650 kubelet[2521]: E0213 15:27:44.377284 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:47.153020 kubelet[2521]: I0213 15:27:47.152984 2521 kuberuntime_manager.go:1633] "Updating runtime config through cri with 
podcidr" CIDR="192.168.0.0/24" Feb 13 15:27:47.160308 containerd[1438]: time="2025-02-13T15:27:47.160245099Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 15:27:47.162525 kubelet[2521]: I0213 15:27:47.160958 2521 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 15:27:47.248266 sudo[1619]: pam_unix(sudo:session): session closed for user root Feb 13 15:27:47.249461 sshd[1618]: Connection closed by 10.0.0.1 port 58446 Feb 13 15:27:47.251309 sshd-session[1616]: pam_unix(sshd:session): session closed for user core Feb 13 15:27:47.255330 systemd[1]: sshd@6-10.0.0.93:22-10.0.0.1:58446.service: Deactivated successfully. Feb 13 15:27:47.257299 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 15:27:47.258121 systemd[1]: session-7.scope: Consumed 8.073s CPU time, 152.6M memory peak, 0B memory swap peak. Feb 13 15:27:47.258761 systemd-logind[1424]: Session 7 logged out. Waiting for processes to exit. Feb 13 15:27:47.259834 systemd-logind[1424]: Removed session 7. Feb 13 15:27:48.016820 kubelet[2521]: E0213 15:27:48.016507 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:48.095407 systemd[1]: Created slice kubepods-besteffort-podf1e46175_f51b_4873_b5f2_58432f85b763.slice - libcontainer container kubepods-besteffort-podf1e46175_f51b_4873_b5f2_58432f85b763.slice. 
Feb 13 15:27:48.180954 kubelet[2521]: I0213 15:27:48.180909 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f1e46175-f51b-4873-b5f2-58432f85b763-kube-proxy\") pod \"kube-proxy-7sngn\" (UID: \"f1e46175-f51b-4873-b5f2-58432f85b763\") " pod="kube-system/kube-proxy-7sngn" Feb 13 15:27:48.181468 kubelet[2521]: I0213 15:27:48.181447 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f1e46175-f51b-4873-b5f2-58432f85b763-xtables-lock\") pod \"kube-proxy-7sngn\" (UID: \"f1e46175-f51b-4873-b5f2-58432f85b763\") " pod="kube-system/kube-proxy-7sngn" Feb 13 15:27:48.181606 kubelet[2521]: I0213 15:27:48.181536 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skjhn\" (UniqueName: \"kubernetes.io/projected/f1e46175-f51b-4873-b5f2-58432f85b763-kube-api-access-skjhn\") pod \"kube-proxy-7sngn\" (UID: \"f1e46175-f51b-4873-b5f2-58432f85b763\") " pod="kube-system/kube-proxy-7sngn" Feb 13 15:27:48.181606 kubelet[2521]: I0213 15:27:48.181560 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f1e46175-f51b-4873-b5f2-58432f85b763-lib-modules\") pod \"kube-proxy-7sngn\" (UID: \"f1e46175-f51b-4873-b5f2-58432f85b763\") " pod="kube-system/kube-proxy-7sngn" Feb 13 15:27:48.254199 kubelet[2521]: W0213 15:27:48.254105 2521 reflector.go:561] object-"tigera-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'localhost' and this object Feb 13 15:27:48.254199 kubelet[2521]: E0213 15:27:48.254166 2521 reflector.go:158] 
"Unhandled Error" err="object-\"tigera-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Feb 13 15:27:48.255849 kubelet[2521]: W0213 15:27:48.254848 2521 reflector.go:561] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'localhost' and this object Feb 13 15:27:48.255849 kubelet[2521]: E0213 15:27:48.254882 2521 reflector.go:158] "Unhandled Error" err="object-\"tigera-operator\"/\"kubernetes-services-endpoint\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kubernetes-services-endpoint\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Feb 13 15:27:48.261402 systemd[1]: Created slice kubepods-besteffort-pode279829c_3500_4b42_9350_c02ffa22e341.slice - libcontainer container kubepods-besteffort-pode279829c_3500_4b42_9350_c02ffa22e341.slice. 
Feb 13 15:27:48.282012 kubelet[2521]: I0213 15:27:48.281716 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtfnb\" (UniqueName: \"kubernetes.io/projected/e279829c-3500-4b42-9350-c02ffa22e341-kube-api-access-vtfnb\") pod \"tigera-operator-76c4976dd7-4s2h8\" (UID: \"e279829c-3500-4b42-9350-c02ffa22e341\") " pod="tigera-operator/tigera-operator-76c4976dd7-4s2h8" Feb 13 15:27:48.282012 kubelet[2521]: I0213 15:27:48.281786 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e279829c-3500-4b42-9350-c02ffa22e341-var-lib-calico\") pod \"tigera-operator-76c4976dd7-4s2h8\" (UID: \"e279829c-3500-4b42-9350-c02ffa22e341\") " pod="tigera-operator/tigera-operator-76c4976dd7-4s2h8" Feb 13 15:27:48.383943 kubelet[2521]: E0213 15:27:48.383913 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:48.409269 kubelet[2521]: E0213 15:27:48.409212 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:48.409998 containerd[1438]: time="2025-02-13T15:27:48.409960913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7sngn,Uid:f1e46175-f51b-4873-b5f2-58432f85b763,Namespace:kube-system,Attempt:0,}" Feb 13 15:27:48.440922 containerd[1438]: time="2025-02-13T15:27:48.440418468Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:27:48.440922 containerd[1438]: time="2025-02-13T15:27:48.440874688Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:27:48.440922 containerd[1438]: time="2025-02-13T15:27:48.440886456Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:27:48.441173 containerd[1438]: time="2025-02-13T15:27:48.440980798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:27:48.463271 systemd[1]: Started cri-containerd-1150eeb940a848fd6c0d6efe7d638b034cb88da997a8db6926d5f7c874686a56.scope - libcontainer container 1150eeb940a848fd6c0d6efe7d638b034cb88da997a8db6926d5f7c874686a56. Feb 13 15:27:48.481601 containerd[1438]: time="2025-02-13T15:27:48.481546162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7sngn,Uid:f1e46175-f51b-4873-b5f2-58432f85b763,Namespace:kube-system,Attempt:0,} returns sandbox id \"1150eeb940a848fd6c0d6efe7d638b034cb88da997a8db6926d5f7c874686a56\"" Feb 13 15:27:48.482273 kubelet[2521]: E0213 15:27:48.482250 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:48.486046 containerd[1438]: time="2025-02-13T15:27:48.485288984Z" level=info msg="CreateContainer within sandbox \"1150eeb940a848fd6c0d6efe7d638b034cb88da997a8db6926d5f7c874686a56\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 15:27:48.510723 containerd[1438]: time="2025-02-13T15:27:48.510637938Z" level=info msg="CreateContainer within sandbox \"1150eeb940a848fd6c0d6efe7d638b034cb88da997a8db6926d5f7c874686a56\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7e996d183a9519c60513c9ef18ce41c320b67c5156bb5bc70198edd5494e06e1\"" Feb 13 15:27:48.511671 containerd[1438]: time="2025-02-13T15:27:48.511643159Z" level=info msg="StartContainer for 
\"7e996d183a9519c60513c9ef18ce41c320b67c5156bb5bc70198edd5494e06e1\"" Feb 13 15:27:48.549261 systemd[1]: Started cri-containerd-7e996d183a9519c60513c9ef18ce41c320b67c5156bb5bc70198edd5494e06e1.scope - libcontainer container 7e996d183a9519c60513c9ef18ce41c320b67c5156bb5bc70198edd5494e06e1. Feb 13 15:27:48.576530 containerd[1438]: time="2025-02-13T15:27:48.576487294Z" level=info msg="StartContainer for \"7e996d183a9519c60513c9ef18ce41c320b67c5156bb5bc70198edd5494e06e1\" returns successfully" Feb 13 15:27:49.326759 kubelet[2521]: E0213 15:27:49.326231 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:49.387135 kubelet[2521]: E0213 15:27:49.386760 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:49.387254 kubelet[2521]: E0213 15:27:49.387155 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:49.391617 kubelet[2521]: E0213 15:27:49.391558 2521 projected.go:288] Couldn't get configMap tigera-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 13 15:27:49.391617 kubelet[2521]: E0213 15:27:49.391593 2521 projected.go:194] Error preparing data for projected volume kube-api-access-vtfnb for pod tigera-operator/tigera-operator-76c4976dd7-4s2h8: failed to sync configmap cache: timed out waiting for the condition Feb 13 15:27:49.391851 kubelet[2521]: E0213 15:27:49.391663 2521 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e279829c-3500-4b42-9350-c02ffa22e341-kube-api-access-vtfnb podName:e279829c-3500-4b42-9350-c02ffa22e341 nodeName:}" failed. 
No retries permitted until 2025-02-13 15:27:49.891637304 +0000 UTC m=+7.615135343 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-vtfnb" (UniqueName: "kubernetes.io/projected/e279829c-3500-4b42-9350-c02ffa22e341-kube-api-access-vtfnb") pod "tigera-operator-76c4976dd7-4s2h8" (UID: "e279829c-3500-4b42-9350-c02ffa22e341") : failed to sync configmap cache: timed out waiting for the condition Feb 13 15:27:49.416114 kubelet[2521]: I0213 15:27:49.416035 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7sngn" podStartSLOduration=1.4160186559999999 podStartE2EDuration="1.416018656s" podCreationTimestamp="2025-02-13 15:27:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:27:49.403998206 +0000 UTC m=+7.127496205" watchObservedRunningTime="2025-02-13 15:27:49.416018656 +0000 UTC m=+7.139516695" Feb 13 15:27:50.064929 containerd[1438]: time="2025-02-13T15:27:50.064890238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-4s2h8,Uid:e279829c-3500-4b42-9350-c02ffa22e341,Namespace:tigera-operator,Attempt:0,}" Feb 13 15:27:50.088811 containerd[1438]: time="2025-02-13T15:27:50.088651350Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:27:50.088811 containerd[1438]: time="2025-02-13T15:27:50.088750649Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:27:50.088811 containerd[1438]: time="2025-02-13T15:27:50.088763496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:27:50.089278 containerd[1438]: time="2025-02-13T15:27:50.089229091Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:27:50.110246 systemd[1]: Started cri-containerd-d1ebc6ab03777579d2c51e3a6b17150011cc3ea4369f666bd6d2f63ddb332a35.scope - libcontainer container d1ebc6ab03777579d2c51e3a6b17150011cc3ea4369f666bd6d2f63ddb332a35.
Feb 13 15:27:50.150954 containerd[1438]: time="2025-02-13T15:27:50.150901353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-4s2h8,Uid:e279829c-3500-4b42-9350-c02ffa22e341,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d1ebc6ab03777579d2c51e3a6b17150011cc3ea4369f666bd6d2f63ddb332a35\""
Feb 13 15:27:50.152590 containerd[1438]: time="2025-02-13T15:27:50.152508142Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\""
Feb 13 15:27:50.392772 kubelet[2521]: E0213 15:27:50.392635 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:27:50.645710 kubelet[2521]: E0213 15:27:50.645591 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:27:51.394876 kubelet[2521]: E0213 15:27:51.394839 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:27:51.634646 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1983913937.mount: Deactivated successfully.
Feb 13 15:27:51.928081 containerd[1438]: time="2025-02-13T15:27:51.928025076Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:27:51.929006 containerd[1438]: time="2025-02-13T15:27:51.928807754Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19124160"
Feb 13 15:27:51.929860 containerd[1438]: time="2025-02-13T15:27:51.929591593Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:27:51.931694 containerd[1438]: time="2025-02-13T15:27:51.931666275Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:27:51.932528 containerd[1438]: time="2025-02-13T15:27:51.932497660Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 1.779904149s"
Feb 13 15:27:51.932585 containerd[1438]: time="2025-02-13T15:27:51.932527597Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\""
Feb 13 15:27:51.938947 containerd[1438]: time="2025-02-13T15:27:51.938819961Z" level=info msg="CreateContainer within sandbox \"d1ebc6ab03777579d2c51e3a6b17150011cc3ea4369f666bd6d2f63ddb332a35\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Feb 13 15:27:51.947437 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3222866221.mount: Deactivated successfully.
Feb 13 15:27:51.948724 containerd[1438]: time="2025-02-13T15:27:51.948655390Z" level=info msg="CreateContainer within sandbox \"d1ebc6ab03777579d2c51e3a6b17150011cc3ea4369f666bd6d2f63ddb332a35\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"1bb1e51e718476861f0f202d023da5c2b633229fe1e2fa4e1405de12d1d1b253\""
Feb 13 15:27:51.949492 containerd[1438]: time="2025-02-13T15:27:51.949465364Z" level=info msg="StartContainer for \"1bb1e51e718476861f0f202d023da5c2b633229fe1e2fa4e1405de12d1d1b253\""
Feb 13 15:27:51.978258 systemd[1]: Started cri-containerd-1bb1e51e718476861f0f202d023da5c2b633229fe1e2fa4e1405de12d1d1b253.scope - libcontainer container 1bb1e51e718476861f0f202d023da5c2b633229fe1e2fa4e1405de12d1d1b253.
Feb 13 15:27:52.001300 containerd[1438]: time="2025-02-13T15:27:52.001227349Z" level=info msg="StartContainer for \"1bb1e51e718476861f0f202d023da5c2b633229fe1e2fa4e1405de12d1d1b253\" returns successfully"
Feb 13 15:27:52.413296 kubelet[2521]: I0213 15:27:52.413172 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4976dd7-4s2h8" podStartSLOduration=2.627705561 podStartE2EDuration="4.413156757s" podCreationTimestamp="2025-02-13 15:27:48 +0000 UTC" firstStartedPulling="2025-02-13 15:27:50.151940526 +0000 UTC m=+7.875438565" lastFinishedPulling="2025-02-13 15:27:51.937391721 +0000 UTC m=+9.660889761" observedRunningTime="2025-02-13 15:27:52.412941202 +0000 UTC m=+10.136439241" watchObservedRunningTime="2025-02-13 15:27:52.413156757 +0000 UTC m=+10.136654796"
Feb 13 15:27:55.071170 update_engine[1426]: I20250213 15:27:55.071096 1426 update_attempter.cc:509] Updating boot flags...
Feb 13 15:27:55.127153 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2913)
Feb 13 15:27:55.182193 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2912)
Feb 13 15:27:55.213275 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2912)
Feb 13 15:27:55.471518 systemd[1]: Created slice kubepods-besteffort-pod86b3bb03_5b93_48ad_a7b1_856b5698b87f.slice - libcontainer container kubepods-besteffort-pod86b3bb03_5b93_48ad_a7b1_856b5698b87f.slice.
Feb 13 15:27:55.526855 kubelet[2521]: I0213 15:27:55.526808 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dnlr\" (UniqueName: \"kubernetes.io/projected/86b3bb03-5b93-48ad-a7b1-856b5698b87f-kube-api-access-4dnlr\") pod \"calico-typha-d8894ffc9-8t9vw\" (UID: \"86b3bb03-5b93-48ad-a7b1-856b5698b87f\") " pod="calico-system/calico-typha-d8894ffc9-8t9vw"
Feb 13 15:27:55.526855 kubelet[2521]: I0213 15:27:55.526857 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/abc680b7-f60e-4be8-8ac1-2b28d800e32f-xtables-lock\") pod \"calico-node-48qbp\" (UID: \"abc680b7-f60e-4be8-8ac1-2b28d800e32f\") " pod="calico-system/calico-node-48qbp"
Feb 13 15:27:55.527317 kubelet[2521]: I0213 15:27:55.526874 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/86b3bb03-5b93-48ad-a7b1-856b5698b87f-typha-certs\") pod \"calico-typha-d8894ffc9-8t9vw\" (UID: \"86b3bb03-5b93-48ad-a7b1-856b5698b87f\") " pod="calico-system/calico-typha-d8894ffc9-8t9vw"
Feb 13 15:27:55.527317 kubelet[2521]: I0213 15:27:55.526889 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/abc680b7-f60e-4be8-8ac1-2b28d800e32f-var-run-calico\") pod \"calico-node-48qbp\" (UID: \"abc680b7-f60e-4be8-8ac1-2b28d800e32f\") " pod="calico-system/calico-node-48qbp"
Feb 13 15:27:55.527317 kubelet[2521]: I0213 15:27:55.526904 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/abc680b7-f60e-4be8-8ac1-2b28d800e32f-node-certs\") pod \"calico-node-48qbp\" (UID: \"abc680b7-f60e-4be8-8ac1-2b28d800e32f\") " pod="calico-system/calico-node-48qbp"
Feb 13 15:27:55.527317 kubelet[2521]: I0213 15:27:55.526923 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86b3bb03-5b93-48ad-a7b1-856b5698b87f-tigera-ca-bundle\") pod \"calico-typha-d8894ffc9-8t9vw\" (UID: \"86b3bb03-5b93-48ad-a7b1-856b5698b87f\") " pod="calico-system/calico-typha-d8894ffc9-8t9vw"
Feb 13 15:27:55.527317 kubelet[2521]: I0213 15:27:55.526940 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/abc680b7-f60e-4be8-8ac1-2b28d800e32f-lib-modules\") pod \"calico-node-48qbp\" (UID: \"abc680b7-f60e-4be8-8ac1-2b28d800e32f\") " pod="calico-system/calico-node-48qbp"
Feb 13 15:27:55.527434 kubelet[2521]: I0213 15:27:55.526958 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/abc680b7-f60e-4be8-8ac1-2b28d800e32f-cni-bin-dir\") pod \"calico-node-48qbp\" (UID: \"abc680b7-f60e-4be8-8ac1-2b28d800e32f\") " pod="calico-system/calico-node-48qbp"
Feb 13 15:27:55.527434 kubelet[2521]: I0213 15:27:55.526974 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwcjn\" (UniqueName: \"kubernetes.io/projected/abc680b7-f60e-4be8-8ac1-2b28d800e32f-kube-api-access-wwcjn\") pod \"calico-node-48qbp\" (UID: \"abc680b7-f60e-4be8-8ac1-2b28d800e32f\") " pod="calico-system/calico-node-48qbp"
Feb 13 15:27:55.527434 kubelet[2521]: I0213 15:27:55.527008 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/abc680b7-f60e-4be8-8ac1-2b28d800e32f-tigera-ca-bundle\") pod \"calico-node-48qbp\" (UID: \"abc680b7-f60e-4be8-8ac1-2b28d800e32f\") " pod="calico-system/calico-node-48qbp"
Feb 13 15:27:55.528810 kubelet[2521]: I0213 15:27:55.527533 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/abc680b7-f60e-4be8-8ac1-2b28d800e32f-cni-log-dir\") pod \"calico-node-48qbp\" (UID: \"abc680b7-f60e-4be8-8ac1-2b28d800e32f\") " pod="calico-system/calico-node-48qbp"
Feb 13 15:27:55.528810 kubelet[2521]: I0213 15:27:55.527585 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/abc680b7-f60e-4be8-8ac1-2b28d800e32f-cni-net-dir\") pod \"calico-node-48qbp\" (UID: \"abc680b7-f60e-4be8-8ac1-2b28d800e32f\") " pod="calico-system/calico-node-48qbp"
Feb 13 15:27:55.528810 kubelet[2521]: I0213 15:27:55.527602 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/abc680b7-f60e-4be8-8ac1-2b28d800e32f-policysync\") pod \"calico-node-48qbp\" (UID: \"abc680b7-f60e-4be8-8ac1-2b28d800e32f\") " pod="calico-system/calico-node-48qbp"
Feb 13 15:27:55.528810 kubelet[2521]: I0213 15:27:55.527616 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/abc680b7-f60e-4be8-8ac1-2b28d800e32f-var-lib-calico\") pod \"calico-node-48qbp\" (UID: \"abc680b7-f60e-4be8-8ac1-2b28d800e32f\") " pod="calico-system/calico-node-48qbp"
Feb 13 15:27:55.528810 kubelet[2521]: I0213 15:27:55.527632 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/abc680b7-f60e-4be8-8ac1-2b28d800e32f-flexvol-driver-host\") pod \"calico-node-48qbp\" (UID: \"abc680b7-f60e-4be8-8ac1-2b28d800e32f\") " pod="calico-system/calico-node-48qbp"
Feb 13 15:27:55.531760 systemd[1]: Created slice kubepods-besteffort-podabc680b7_f60e_4be8_8ac1_2b28d800e32f.slice - libcontainer container kubepods-besteffort-podabc680b7_f60e_4be8_8ac1_2b28d800e32f.slice.
Feb 13 15:27:55.639418 kubelet[2521]: E0213 15:27:55.639367 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:27:55.639418 kubelet[2521]: W0213 15:27:55.639409 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:27:55.639682 kubelet[2521]: E0213 15:27:55.639444 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Feb 13 15:27:55.660878 kubelet[2521]: E0213 15:27:55.660824 2521 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hmqx4" podUID="d3daff69-f8cf-4771-8db4-eb9251b67560"
Feb 13 15:27:55.724705 kubelet[2521]: E0213 15:27:55.724692 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:27:55.724830 kubelet[2521]: W0213 15:27:55.724723 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:27:55.724830 kubelet[2521]: E0213 15:27:55.724736 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Feb 13 15:27:55.734282 kubelet[2521]: I0213 15:27:55.734268 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d3daff69-f8cf-4771-8db4-eb9251b67560-socket-dir\") pod \"csi-node-driver-hmqx4\" (UID: \"d3daff69-f8cf-4771-8db4-eb9251b67560\") " pod="calico-system/csi-node-driver-hmqx4"
Feb 13 15:27:55.734571 kubelet[2521]: I0213 15:27:55.734542 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d3daff69-f8cf-4771-8db4-eb9251b67560-registration-dir\") pod \"csi-node-driver-hmqx4\" (UID: \"d3daff69-f8cf-4771-8db4-eb9251b67560\") " pod="calico-system/csi-node-driver-hmqx4"
Feb 13 15:27:55.734851 kubelet[2521]: I0213 15:27:55.734828 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d3daff69-f8cf-4771-8db4-eb9251b67560-kubelet-dir\") pod \"csi-node-driver-hmqx4\" (UID: \"d3daff69-f8cf-4771-8db4-eb9251b67560\") " pod="calico-system/csi-node-driver-hmqx4"
Feb 13 15:27:55.735459 kubelet[2521]: I0213 15:27:55.735088 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4g8h5\" (UniqueName: \"kubernetes.io/projected/d3daff69-f8cf-4771-8db4-eb9251b67560-kube-api-access-4g8h5\") pod \"csi-node-driver-hmqx4\" (UID: \"d3daff69-f8cf-4771-8db4-eb9251b67560\") " pod="calico-system/csi-node-driver-hmqx4"
Feb 13 15:27:55.735459 kubelet[2521]: I0213 15:27:55.735310 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d3daff69-f8cf-4771-8db4-eb9251b67560-varrun\") pod \"csi-node-driver-hmqx4\" (UID: \"d3daff69-f8cf-4771-8db4-eb9251b67560\") " pod="calico-system/csi-node-driver-hmqx4"
Feb 13 15:27:55.735688 kubelet[2521]: E0213 15:27:55.735670 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:27:55.735688 kubelet[2521]: W0213 15:27:55.735678 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:27:55.735733 kubelet[2521]: E0213 15:27:55.735687 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Feb 13 15:27:55.735884 kubelet[2521]: E0213 15:27:55.735867 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:55.735884 kubelet[2521]: W0213 15:27:55.735880 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:55.735947 kubelet[2521]: E0213 15:27:55.735894 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:27:55.736062 kubelet[2521]: E0213 15:27:55.736050 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:55.736062 kubelet[2521]: W0213 15:27:55.736061 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:55.736177 kubelet[2521]: E0213 15:27:55.736142 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:27:55.736249 kubelet[2521]: E0213 15:27:55.736237 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:55.736249 kubelet[2521]: W0213 15:27:55.736248 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:55.736352 kubelet[2521]: E0213 15:27:55.736320 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:27:55.736403 kubelet[2521]: E0213 15:27:55.736391 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:55.736403 kubelet[2521]: W0213 15:27:55.736402 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:55.736515 kubelet[2521]: E0213 15:27:55.736493 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:27:55.736561 kubelet[2521]: E0213 15:27:55.736548 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:55.736561 kubelet[2521]: W0213 15:27:55.736556 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:55.736649 kubelet[2521]: E0213 15:27:55.736584 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:27:55.737321 kubelet[2521]: E0213 15:27:55.737291 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:55.737321 kubelet[2521]: W0213 15:27:55.737306 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:55.737321 kubelet[2521]: E0213 15:27:55.737320 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:27:55.737608 kubelet[2521]: E0213 15:27:55.737593 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:55.737608 kubelet[2521]: W0213 15:27:55.737607 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:55.737702 kubelet[2521]: E0213 15:27:55.737617 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:27:55.737794 kubelet[2521]: E0213 15:27:55.737784 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:55.737825 kubelet[2521]: W0213 15:27:55.737795 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:55.737825 kubelet[2521]: E0213 15:27:55.737803 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:27:55.776108 kubelet[2521]: E0213 15:27:55.776058 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:55.776947 containerd[1438]: time="2025-02-13T15:27:55.776907935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-d8894ffc9-8t9vw,Uid:86b3bb03-5b93-48ad-a7b1-856b5698b87f,Namespace:calico-system,Attempt:0,}" Feb 13 15:27:55.802008 containerd[1438]: time="2025-02-13T15:27:55.801357643Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:27:55.802008 containerd[1438]: time="2025-02-13T15:27:55.801435359Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:27:55.802008 containerd[1438]: time="2025-02-13T15:27:55.801452287Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:27:55.802008 containerd[1438]: time="2025-02-13T15:27:55.801605677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:27:55.831435 systemd[1]: Started cri-containerd-a7670ece01df9c171702b799726b6b2d474bf4766fad083eda650464d7f3b171.scope - libcontainer container a7670ece01df9c171702b799726b6b2d474bf4766fad083eda650464d7f3b171. 
Feb 13 15:27:55.836628 kubelet[2521]: E0213 15:27:55.836486 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:55.836767 kubelet[2521]: E0213 15:27:55.836539 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:55.836767 kubelet[2521]: W0213 15:27:55.836663 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:55.836824 kubelet[2521]: E0213 15:27:55.836795 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:27:55.837412 kubelet[2521]: E0213 15:27:55.837151 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:55.837412 kubelet[2521]: W0213 15:27:55.837166 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:55.837412 kubelet[2521]: E0213 15:27:55.837194 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:27:55.837525 containerd[1438]: time="2025-02-13T15:27:55.837427370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-48qbp,Uid:abc680b7-f60e-4be8-8ac1-2b28d800e32f,Namespace:calico-system,Attempt:0,}" Feb 13 15:27:55.838586 kubelet[2521]: E0213 15:27:55.838238 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:55.838794 kubelet[2521]: W0213 15:27:55.838769 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:55.839125 kubelet[2521]: E0213 15:27:55.839094 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:27:55.839399 kubelet[2521]: E0213 15:27:55.839371 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:55.839399 kubelet[2521]: W0213 15:27:55.839387 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:55.839499 kubelet[2521]: E0213 15:27:55.839464 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:27:55.841655 kubelet[2521]: E0213 15:27:55.840387 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:55.841655 kubelet[2521]: W0213 15:27:55.840410 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:55.841655 kubelet[2521]: E0213 15:27:55.840626 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:27:55.841655 kubelet[2521]: E0213 15:27:55.840891 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:55.841655 kubelet[2521]: W0213 15:27:55.840900 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:55.841655 kubelet[2521]: E0213 15:27:55.841001 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:27:55.841655 kubelet[2521]: E0213 15:27:55.841144 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:55.841655 kubelet[2521]: W0213 15:27:55.841154 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:55.841655 kubelet[2521]: E0213 15:27:55.841188 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:27:55.841655 kubelet[2521]: E0213 15:27:55.841422 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:55.841936 kubelet[2521]: W0213 15:27:55.841432 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:55.841936 kubelet[2521]: E0213 15:27:55.841639 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:27:55.843284 kubelet[2521]: E0213 15:27:55.842031 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:55.843284 kubelet[2521]: W0213 15:27:55.842049 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:55.843284 kubelet[2521]: E0213 15:27:55.842088 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:27:55.843284 kubelet[2521]: E0213 15:27:55.842335 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:55.843284 kubelet[2521]: W0213 15:27:55.842345 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:55.843284 kubelet[2521]: E0213 15:27:55.842380 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:27:55.843284 kubelet[2521]: E0213 15:27:55.842617 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:55.843284 kubelet[2521]: W0213 15:27:55.842726 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:55.843284 kubelet[2521]: E0213 15:27:55.843108 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:27:55.843544 kubelet[2521]: E0213 15:27:55.843373 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:55.843544 kubelet[2521]: W0213 15:27:55.843383 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:55.843544 kubelet[2521]: E0213 15:27:55.843416 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:27:55.844263 kubelet[2521]: E0213 15:27:55.843612 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:55.844263 kubelet[2521]: W0213 15:27:55.843626 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:55.844263 kubelet[2521]: E0213 15:27:55.843688 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:27:55.844263 kubelet[2521]: E0213 15:27:55.843848 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:55.844263 kubelet[2521]: W0213 15:27:55.843859 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:55.844263 kubelet[2521]: E0213 15:27:55.843921 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:27:55.844263 kubelet[2521]: E0213 15:27:55.844050 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:55.844263 kubelet[2521]: W0213 15:27:55.844058 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:55.844263 kubelet[2521]: E0213 15:27:55.844190 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:27:55.844891 kubelet[2521]: E0213 15:27:55.844857 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:55.844891 kubelet[2521]: W0213 15:27:55.844875 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:55.844891 kubelet[2521]: E0213 15:27:55.844891 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:27:55.845287 kubelet[2521]: E0213 15:27:55.845188 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:55.845351 kubelet[2521]: W0213 15:27:55.845300 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:55.845472 kubelet[2521]: E0213 15:27:55.845393 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:27:55.845868 kubelet[2521]: E0213 15:27:55.845615 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:55.845868 kubelet[2521]: W0213 15:27:55.845629 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:55.845868 kubelet[2521]: E0213 15:27:55.845707 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:27:55.845868 kubelet[2521]: E0213 15:27:55.845793 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:55.845868 kubelet[2521]: W0213 15:27:55.845800 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:55.845868 kubelet[2521]: E0213 15:27:55.845829 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:27:55.846165 kubelet[2521]: E0213 15:27:55.845944 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:55.846165 kubelet[2521]: W0213 15:27:55.845952 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:55.846165 kubelet[2521]: E0213 15:27:55.846028 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:27:55.846229 kubelet[2521]: E0213 15:27:55.846216 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:55.846850 kubelet[2521]: W0213 15:27:55.846225 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:55.846850 kubelet[2521]: E0213 15:27:55.846335 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:27:55.846850 kubelet[2521]: E0213 15:27:55.846512 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:55.846850 kubelet[2521]: W0213 15:27:55.846521 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:55.846850 kubelet[2521]: E0213 15:27:55.846532 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:27:55.846850 kubelet[2521]: E0213 15:27:55.846766 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:55.846850 kubelet[2521]: W0213 15:27:55.846776 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:55.846850 kubelet[2521]: E0213 15:27:55.846786 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:27:55.847110 kubelet[2521]: E0213 15:27:55.846995 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:55.847110 kubelet[2521]: W0213 15:27:55.847004 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:55.847110 kubelet[2521]: E0213 15:27:55.847016 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:27:55.847285 kubelet[2521]: E0213 15:27:55.847267 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:55.847285 kubelet[2521]: W0213 15:27:55.847281 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:55.847345 kubelet[2521]: E0213 15:27:55.847291 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:27:55.861267 kubelet[2521]: E0213 15:27:55.861237 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:55.861267 kubelet[2521]: W0213 15:27:55.861258 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:55.861413 kubelet[2521]: E0213 15:27:55.861278 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:27:55.879678 containerd[1438]: time="2025-02-13T15:27:55.879615766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-d8894ffc9-8t9vw,Uid:86b3bb03-5b93-48ad-a7b1-856b5698b87f,Namespace:calico-system,Attempt:0,} returns sandbox id \"a7670ece01df9c171702b799726b6b2d474bf4766fad083eda650464d7f3b171\"" Feb 13 15:27:55.880950 kubelet[2521]: E0213 15:27:55.880869 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:55.882710 containerd[1438]: time="2025-02-13T15:27:55.882454260Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Feb 13 15:27:55.899587 containerd[1438]: time="2025-02-13T15:27:55.899464656Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:27:55.899712 containerd[1438]: time="2025-02-13T15:27:55.899640616Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:27:55.899712 containerd[1438]: time="2025-02-13T15:27:55.899679874Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:27:55.899894 containerd[1438]: time="2025-02-13T15:27:55.899838266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:27:55.917275 systemd[1]: Started cri-containerd-ce807cc14a06288c004cb56c172c910d527ac79f3401ce6c601af4e44c400266.scope - libcontainer container ce807cc14a06288c004cb56c172c910d527ac79f3401ce6c601af4e44c400266. 
Feb 13 15:27:55.940499 containerd[1438]: time="2025-02-13T15:27:55.940207233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-48qbp,Uid:abc680b7-f60e-4be8-8ac1-2b28d800e32f,Namespace:calico-system,Attempt:0,} returns sandbox id \"ce807cc14a06288c004cb56c172c910d527ac79f3401ce6c601af4e44c400266\"" Feb 13 15:27:55.941274 kubelet[2521]: E0213 15:27:55.941247 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:56.946744 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1945762384.mount: Deactivated successfully. Feb 13 15:27:57.307719 containerd[1438]: time="2025-02-13T15:27:57.307601885Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:57.309179 containerd[1438]: time="2025-02-13T15:27:57.308141748Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29231308" Feb 13 15:27:57.309179 containerd[1438]: time="2025-02-13T15:27:57.308833914Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:57.313103 containerd[1438]: time="2025-02-13T15:27:57.312397546Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:57.313103 containerd[1438]: time="2025-02-13T15:27:57.313095315Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest 
\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 1.430600837s" Feb 13 15:27:57.313232 containerd[1438]: time="2025-02-13T15:27:57.313125687Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\"" Feb 13 15:27:57.314448 containerd[1438]: time="2025-02-13T15:27:57.314206214Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Feb 13 15:27:57.323431 containerd[1438]: time="2025-02-13T15:27:57.323385486Z" level=info msg="CreateContainer within sandbox \"a7670ece01df9c171702b799726b6b2d474bf4766fad083eda650464d7f3b171\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Feb 13 15:27:57.334269 containerd[1438]: time="2025-02-13T15:27:57.334224124Z" level=info msg="CreateContainer within sandbox \"a7670ece01df9c171702b799726b6b2d474bf4766fad083eda650464d7f3b171\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"7944cd62488ec8cd52952f2401d4787efb37d141e3e36745fc04b69e08270263\"" Feb 13 15:27:57.337274 containerd[1438]: time="2025-02-13T15:27:57.337239450Z" level=info msg="StartContainer for \"7944cd62488ec8cd52952f2401d4787efb37d141e3e36745fc04b69e08270263\"" Feb 13 15:27:57.367105 kubelet[2521]: E0213 15:27:57.366637 2521 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hmqx4" podUID="d3daff69-f8cf-4771-8db4-eb9251b67560" Feb 13 15:27:57.371238 systemd[1]: Started cri-containerd-7944cd62488ec8cd52952f2401d4787efb37d141e3e36745fc04b69e08270263.scope - libcontainer container 7944cd62488ec8cd52952f2401d4787efb37d141e3e36745fc04b69e08270263. 
Feb 13 15:27:57.413420 containerd[1438]: time="2025-02-13T15:27:57.413322644Z" level=info msg="StartContainer for \"7944cd62488ec8cd52952f2401d4787efb37d141e3e36745fc04b69e08270263\" returns successfully"
Feb 13 15:27:57.419450 kubelet[2521]: E0213 15:27:57.417649 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:27:57.427948 kubelet[2521]: I0213 15:27:57.427878 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-d8894ffc9-8t9vw" podStartSLOduration=0.996022209 podStartE2EDuration="2.427859449s" podCreationTimestamp="2025-02-13 15:27:55 +0000 UTC" firstStartedPulling="2025-02-13 15:27:55.882194662 +0000 UTC m=+13.605692701" lastFinishedPulling="2025-02-13 15:27:57.314031902 +0000 UTC m=+15.037529941" observedRunningTime="2025-02-13 15:27:57.427821034 +0000 UTC m=+15.151319033" watchObservedRunningTime="2025-02-13 15:27:57.427859449 +0000 UTC m=+15.151357488"
Feb 13 15:27:57.447112 kubelet[2521]: E0213 15:27:57.446191 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:27:57.447112 kubelet[2521]: W0213 15:27:57.446685 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:27:57.447112 kubelet[2521]: E0213 15:27:57.446706 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:27:58.358823 containerd[1438]: time="2025-02-13T15:27:58.358766026Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:27:58.359266 containerd[1438]: time="2025-02-13T15:27:58.359219205Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5117811"
Feb 13 15:27:58.360088 containerd[1438]: time="2025-02-13T15:27:58.360048771Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:27:58.362166 containerd[1438]: time="2025-02-13T15:27:58.362130391Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:27:58.363011 containerd[1438]: time="2025-02-13T15:27:58.362974883Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.048734255s"
Feb 13 15:27:58.363011 containerd[1438]: time="2025-02-13T15:27:58.363011138Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\""
Feb 13 15:27:58.364937 containerd[1438]: time="2025-02-13T15:27:58.364820810Z" level=info msg="CreateContainer within sandbox \"ce807cc14a06288c004cb56c172c910d527ac79f3401ce6c601af4e44c400266\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Feb 13 15:27:58.380669 containerd[1438]: time="2025-02-13T15:27:58.380617310Z" level=info msg="CreateContainer within sandbox \"ce807cc14a06288c004cb56c172c910d527ac79f3401ce6c601af4e44c400266\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"68f19d1c05bc387e98b1968e22e74024904f873ed993c2b6c7fa671790154402\""
Feb 13 15:27:58.381238 containerd[1438]: time="2025-02-13T15:27:58.381211984Z" level=info msg="StartContainer for \"68f19d1c05bc387e98b1968e22e74024904f873ed993c2b6c7fa671790154402\""
Feb 13 15:27:58.420285 systemd[1]: Started cri-containerd-68f19d1c05bc387e98b1968e22e74024904f873ed993c2b6c7fa671790154402.scope - libcontainer container 68f19d1c05bc387e98b1968e22e74024904f873ed993c2b6c7fa671790154402.
Feb 13 15:27:58.421054 kubelet[2521]: I0213 15:27:58.420932 2521 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 13 15:27:58.421664 kubelet[2521]: E0213 15:27:58.421633 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:27:58.452490 containerd[1438]: time="2025-02-13T15:27:58.451411383Z" level=info msg="StartContainer for \"68f19d1c05bc387e98b1968e22e74024904f873ed993c2b6c7fa671790154402\" returns successfully"
Feb 13 15:27:58.456890 kubelet[2521]: E0213 15:27:58.456861 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:27:58.456890 kubelet[2521]: W0213 15:27:58.456883 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:27:58.457055 kubelet[2521]: E0213 15:27:58.456903 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Error: unexpected end of JSON input" Feb 13 15:27:58.457495 kubelet[2521]: E0213 15:27:58.457460 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:58.457495 kubelet[2521]: W0213 15:27:58.457470 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:58.457495 kubelet[2521]: E0213 15:27:58.457479 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:27:58.457656 kubelet[2521]: E0213 15:27:58.457635 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:58.457656 kubelet[2521]: W0213 15:27:58.457644 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:58.457709 kubelet[2521]: E0213 15:27:58.457659 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:27:58.457807 kubelet[2521]: E0213 15:27:58.457792 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:58.457807 kubelet[2521]: W0213 15:27:58.457803 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:58.457858 kubelet[2521]: E0213 15:27:58.457810 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:27:58.457960 kubelet[2521]: E0213 15:27:58.457930 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:58.457960 kubelet[2521]: W0213 15:27:58.457939 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:58.457960 kubelet[2521]: E0213 15:27:58.457945 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:27:58.480194 systemd[1]: cri-containerd-68f19d1c05bc387e98b1968e22e74024904f873ed993c2b6c7fa671790154402.scope: Deactivated successfully. 
Feb 13 15:27:58.590992 containerd[1438]: time="2025-02-13T15:27:58.586392809Z" level=info msg="shim disconnected" id=68f19d1c05bc387e98b1968e22e74024904f873ed993c2b6c7fa671790154402 namespace=k8s.io Feb 13 15:27:58.590992 containerd[1438]: time="2025-02-13T15:27:58.590996422Z" level=warning msg="cleaning up after shim disconnected" id=68f19d1c05bc387e98b1968e22e74024904f873ed993c2b6c7fa671790154402 namespace=k8s.io Feb 13 15:27:58.591215 containerd[1438]: time="2025-02-13T15:27:58.591011828Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:27:58.633377 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-68f19d1c05bc387e98b1968e22e74024904f873ed993c2b6c7fa671790154402-rootfs.mount: Deactivated successfully. Feb 13 15:27:59.366696 kubelet[2521]: E0213 15:27:59.366645 2521 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hmqx4" podUID="d3daff69-f8cf-4771-8db4-eb9251b67560" Feb 13 15:27:59.430522 kubelet[2521]: E0213 15:27:59.428915 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:59.430952 containerd[1438]: time="2025-02-13T15:27:59.430508220Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Feb 13 15:28:01.366949 kubelet[2521]: E0213 15:28:01.366869 2521 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hmqx4" podUID="d3daff69-f8cf-4771-8db4-eb9251b67560" Feb 13 15:28:03.368097 kubelet[2521]: E0213 15:28:03.367439 2521 pod_workers.go:1301] "Error syncing pod, 
skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hmqx4" podUID="d3daff69-f8cf-4771-8db4-eb9251b67560" Feb 13 15:28:03.834508 containerd[1438]: time="2025-02-13T15:28:03.834378449Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:03.835427 containerd[1438]: time="2025-02-13T15:28:03.835273569Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Feb 13 15:28:03.836590 containerd[1438]: time="2025-02-13T15:28:03.836281685Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:03.838694 containerd[1438]: time="2025-02-13T15:28:03.838635343Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:03.839643 containerd[1438]: time="2025-02-13T15:28:03.839597364Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 4.409041007s" Feb 13 15:28:03.839643 containerd[1438]: time="2025-02-13T15:28:03.839639098Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Feb 13 15:28:03.842776 containerd[1438]: time="2025-02-13T15:28:03.842725385Z" level=info 
msg="CreateContainer within sandbox \"ce807cc14a06288c004cb56c172c910d527ac79f3401ce6c601af4e44c400266\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 15:28:03.870433 containerd[1438]: time="2025-02-13T15:28:03.870374289Z" level=info msg="CreateContainer within sandbox \"ce807cc14a06288c004cb56c172c910d527ac79f3401ce6c601af4e44c400266\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"14940d5320cc3acc5336974214ee77aabf3533811168a1c2ef592f803a4c4fa6\"" Feb 13 15:28:03.872235 containerd[1438]: time="2025-02-13T15:28:03.870976878Z" level=info msg="StartContainer for \"14940d5320cc3acc5336974214ee77aabf3533811168a1c2ef592f803a4c4fa6\"" Feb 13 15:28:03.905257 systemd[1]: Started cri-containerd-14940d5320cc3acc5336974214ee77aabf3533811168a1c2ef592f803a4c4fa6.scope - libcontainer container 14940d5320cc3acc5336974214ee77aabf3533811168a1c2ef592f803a4c4fa6. Feb 13 15:28:03.940151 containerd[1438]: time="2025-02-13T15:28:03.940096739Z" level=info msg="StartContainer for \"14940d5320cc3acc5336974214ee77aabf3533811168a1c2ef592f803a4c4fa6\" returns successfully" Feb 13 15:28:04.441466 kubelet[2521]: E0213 15:28:04.441432 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:04.552051 systemd[1]: cri-containerd-14940d5320cc3acc5336974214ee77aabf3533811168a1c2ef592f803a4c4fa6.scope: Deactivated successfully. Feb 13 15:28:04.557188 kubelet[2521]: I0213 15:28:04.557152 2521 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Feb 13 15:28:04.574682 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-14940d5320cc3acc5336974214ee77aabf3533811168a1c2ef592f803a4c4fa6-rootfs.mount: Deactivated successfully. 
Feb 13 15:28:04.610434 systemd[1]: Created slice kubepods-besteffort-podb17d9265_9df2_40dd_a8b7_46383a0e17ce.slice - libcontainer container kubepods-besteffort-podb17d9265_9df2_40dd_a8b7_46383a0e17ce.slice. Feb 13 15:28:04.619698 systemd[1]: Created slice kubepods-burstable-podbe132fa9_3a1c_4777_b6f8_2618a1865453.slice - libcontainer container kubepods-burstable-podbe132fa9_3a1c_4777_b6f8_2618a1865453.slice. Feb 13 15:28:04.626531 systemd[1]: Created slice kubepods-burstable-pod48d6879b_40c7_4fb4_9137_f94a1e0bf631.slice - libcontainer container kubepods-burstable-pod48d6879b_40c7_4fb4_9137_f94a1e0bf631.slice. Feb 13 15:28:04.633954 systemd[1]: Created slice kubepods-besteffort-pod2330a284_0835_4d9f_929e_909c050006b6.slice - libcontainer container kubepods-besteffort-pod2330a284_0835_4d9f_929e_909c050006b6.slice. Feb 13 15:28:04.642652 systemd[1]: Created slice kubepods-besteffort-poddb472e10_e2a5_49de_9955_0d1cf7adcfd6.slice - libcontainer container kubepods-besteffort-poddb472e10_e2a5_49de_9955_0d1cf7adcfd6.slice. 
Feb 13 15:28:04.705374 containerd[1438]: time="2025-02-13T15:28:04.705172691Z" level=info msg="shim disconnected" id=14940d5320cc3acc5336974214ee77aabf3533811168a1c2ef592f803a4c4fa6 namespace=k8s.io Feb 13 15:28:04.705374 containerd[1438]: time="2025-02-13T15:28:04.705330539Z" level=warning msg="cleaning up after shim disconnected" id=14940d5320cc3acc5336974214ee77aabf3533811168a1c2ef592f803a4c4fa6 namespace=k8s.io Feb 13 15:28:04.705374 containerd[1438]: time="2025-02-13T15:28:04.705351865Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:28:04.712578 kubelet[2521]: I0213 15:28:04.712298 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/48d6879b-40c7-4fb4-9137-f94a1e0bf631-config-volume\") pod \"coredns-6f6b679f8f-bhdmb\" (UID: \"48d6879b-40c7-4fb4-9137-f94a1e0bf631\") " pod="kube-system/coredns-6f6b679f8f-bhdmb" Feb 13 15:28:04.712578 kubelet[2521]: I0213 15:28:04.712346 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b17d9265-9df2-40dd-a8b7-46383a0e17ce-tigera-ca-bundle\") pod \"calico-kube-controllers-6c68d99f8f-wlrdk\" (UID: \"b17d9265-9df2-40dd-a8b7-46383a0e17ce\") " pod="calico-system/calico-kube-controllers-6c68d99f8f-wlrdk" Feb 13 15:28:04.712578 kubelet[2521]: I0213 15:28:04.712366 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxlbb\" (UniqueName: \"kubernetes.io/projected/db472e10-e2a5-49de-9955-0d1cf7adcfd6-kube-api-access-wxlbb\") pod \"calico-apiserver-55bbcccb65-78qv4\" (UID: \"db472e10-e2a5-49de-9955-0d1cf7adcfd6\") " pod="calico-apiserver/calico-apiserver-55bbcccb65-78qv4" Feb 13 15:28:04.712578 kubelet[2521]: I0213 15:28:04.712386 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-sd8p6\" (UniqueName: \"kubernetes.io/projected/b17d9265-9df2-40dd-a8b7-46383a0e17ce-kube-api-access-sd8p6\") pod \"calico-kube-controllers-6c68d99f8f-wlrdk\" (UID: \"b17d9265-9df2-40dd-a8b7-46383a0e17ce\") " pod="calico-system/calico-kube-controllers-6c68d99f8f-wlrdk" Feb 13 15:28:04.712578 kubelet[2521]: I0213 15:28:04.712445 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/db472e10-e2a5-49de-9955-0d1cf7adcfd6-calico-apiserver-certs\") pod \"calico-apiserver-55bbcccb65-78qv4\" (UID: \"db472e10-e2a5-49de-9955-0d1cf7adcfd6\") " pod="calico-apiserver/calico-apiserver-55bbcccb65-78qv4" Feb 13 15:28:04.712998 kubelet[2521]: I0213 15:28:04.712463 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/be132fa9-3a1c-4777-b6f8-2618a1865453-config-volume\") pod \"coredns-6f6b679f8f-xrphc\" (UID: \"be132fa9-3a1c-4777-b6f8-2618a1865453\") " pod="kube-system/coredns-6f6b679f8f-xrphc" Feb 13 15:28:04.712998 kubelet[2521]: I0213 15:28:04.712484 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7vbt\" (UniqueName: \"kubernetes.io/projected/2330a284-0835-4d9f-929e-909c050006b6-kube-api-access-r7vbt\") pod \"calico-apiserver-55bbcccb65-mmhbv\" (UID: \"2330a284-0835-4d9f-929e-909c050006b6\") " pod="calico-apiserver/calico-apiserver-55bbcccb65-mmhbv" Feb 13 15:28:04.712998 kubelet[2521]: I0213 15:28:04.712503 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2330a284-0835-4d9f-929e-909c050006b6-calico-apiserver-certs\") pod \"calico-apiserver-55bbcccb65-mmhbv\" (UID: \"2330a284-0835-4d9f-929e-909c050006b6\") " pod="calico-apiserver/calico-apiserver-55bbcccb65-mmhbv" Feb 13 
15:28:04.712998 kubelet[2521]: I0213 15:28:04.712522 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lvx5\" (UniqueName: \"kubernetes.io/projected/48d6879b-40c7-4fb4-9137-f94a1e0bf631-kube-api-access-9lvx5\") pod \"coredns-6f6b679f8f-bhdmb\" (UID: \"48d6879b-40c7-4fb4-9137-f94a1e0bf631\") " pod="kube-system/coredns-6f6b679f8f-bhdmb" Feb 13 15:28:04.712998 kubelet[2521]: I0213 15:28:04.712539 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f26xn\" (UniqueName: \"kubernetes.io/projected/be132fa9-3a1c-4777-b6f8-2618a1865453-kube-api-access-f26xn\") pod \"coredns-6f6b679f8f-xrphc\" (UID: \"be132fa9-3a1c-4777-b6f8-2618a1865453\") " pod="kube-system/coredns-6f6b679f8f-xrphc" Feb 13 15:28:04.916931 containerd[1438]: time="2025-02-13T15:28:04.916872322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c68d99f8f-wlrdk,Uid:b17d9265-9df2-40dd-a8b7-46383a0e17ce,Namespace:calico-system,Attempt:0,}" Feb 13 15:28:04.924302 kubelet[2521]: E0213 15:28:04.924256 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:04.925099 containerd[1438]: time="2025-02-13T15:28:04.925017807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xrphc,Uid:be132fa9-3a1c-4777-b6f8-2618a1865453,Namespace:kube-system,Attempt:0,}" Feb 13 15:28:04.929540 kubelet[2521]: E0213 15:28:04.929115 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:04.933102 containerd[1438]: time="2025-02-13T15:28:04.930662661Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-6f6b679f8f-bhdmb,Uid:48d6879b-40c7-4fb4-9137-f94a1e0bf631,Namespace:kube-system,Attempt:0,}" Feb 13 15:28:04.940976 containerd[1438]: time="2025-02-13T15:28:04.940931584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55bbcccb65-mmhbv,Uid:2330a284-0835-4d9f-929e-909c050006b6,Namespace:calico-apiserver,Attempt:0,}" Feb 13 15:28:04.949934 containerd[1438]: time="2025-02-13T15:28:04.949614871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55bbcccb65-78qv4,Uid:db472e10-e2a5-49de-9955-0d1cf7adcfd6,Namespace:calico-apiserver,Attempt:0,}" Feb 13 15:28:05.337642 containerd[1438]: time="2025-02-13T15:28:05.337587964Z" level=error msg="Failed to destroy network for sandbox \"9d676e713a9f9bdb0ef333af0423e3a978a265863987d14276ec5d59f17734f0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:05.338317 containerd[1438]: time="2025-02-13T15:28:05.338278962Z" level=error msg="encountered an error cleaning up failed sandbox \"9d676e713a9f9bdb0ef333af0423e3a978a265863987d14276ec5d59f17734f0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:05.338373 containerd[1438]: time="2025-02-13T15:28:05.338350383Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bhdmb,Uid:48d6879b-40c7-4fb4-9137-f94a1e0bf631,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9d676e713a9f9bdb0ef333af0423e3a978a265863987d14276ec5d59f17734f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Feb 13 15:28:05.338538 containerd[1438]: time="2025-02-13T15:28:05.338514550Z" level=error msg="Failed to destroy network for sandbox \"0bd54dba73f69ad05fdd64de844dfb9298048bde545384bf327983a09dc6b98c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:05.338583 containerd[1438]: time="2025-02-13T15:28:05.338556362Z" level=error msg="Failed to destroy network for sandbox \"0bbc8283fb1680fd35c9202a2ab37d2c9e7100b2f1110b5295aeee2d24e7261e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:05.338815 containerd[1438]: time="2025-02-13T15:28:05.338789309Z" level=error msg="encountered an error cleaning up failed sandbox \"0bd54dba73f69ad05fdd64de844dfb9298048bde545384bf327983a09dc6b98c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:05.338920 containerd[1438]: time="2025-02-13T15:28:05.338837603Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55bbcccb65-78qv4,Uid:db472e10-e2a5-49de-9955-0d1cf7adcfd6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0bd54dba73f69ad05fdd64de844dfb9298048bde545384bf327983a09dc6b98c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:05.338920 containerd[1438]: time="2025-02-13T15:28:05.338888818Z" level=error msg="encountered an error cleaning up failed sandbox 
\"0bbc8283fb1680fd35c9202a2ab37d2c9e7100b2f1110b5295aeee2d24e7261e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:05.339010 containerd[1438]: time="2025-02-13T15:28:05.338958918Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xrphc,Uid:be132fa9-3a1c-4777-b6f8-2618a1865453,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0bbc8283fb1680fd35c9202a2ab37d2c9e7100b2f1110b5295aeee2d24e7261e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:05.342047 kubelet[2521]: E0213 15:28:05.340995 2521 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bbc8283fb1680fd35c9202a2ab37d2c9e7100b2f1110b5295aeee2d24e7261e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:05.342047 kubelet[2521]: E0213 15:28:05.341082 2521 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bbc8283fb1680fd35c9202a2ab37d2c9e7100b2f1110b5295aeee2d24e7261e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-xrphc" Feb 13 15:28:05.342047 kubelet[2521]: E0213 15:28:05.341102 2521 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"0bbc8283fb1680fd35c9202a2ab37d2c9e7100b2f1110b5295aeee2d24e7261e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-xrphc" Feb 13 15:28:05.342217 kubelet[2521]: E0213 15:28:05.341152 2521 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-xrphc_kube-system(be132fa9-3a1c-4777-b6f8-2618a1865453)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-xrphc_kube-system(be132fa9-3a1c-4777-b6f8-2618a1865453)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0bbc8283fb1680fd35c9202a2ab37d2c9e7100b2f1110b5295aeee2d24e7261e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-xrphc" podUID="be132fa9-3a1c-4777-b6f8-2618a1865453" Feb 13 15:28:05.342442 kubelet[2521]: E0213 15:28:05.342299 2521 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bd54dba73f69ad05fdd64de844dfb9298048bde545384bf327983a09dc6b98c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:05.342442 kubelet[2521]: E0213 15:28:05.342346 2521 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bd54dba73f69ad05fdd64de844dfb9298048bde545384bf327983a09dc6b98c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-55bbcccb65-78qv4" Feb 13 15:28:05.342442 kubelet[2521]: E0213 15:28:05.342379 2521 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bd54dba73f69ad05fdd64de844dfb9298048bde545384bf327983a09dc6b98c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55bbcccb65-78qv4" Feb 13 15:28:05.344197 kubelet[2521]: E0213 15:28:05.344019 2521 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d676e713a9f9bdb0ef333af0423e3a978a265863987d14276ec5d59f17734f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:05.344197 kubelet[2521]: E0213 15:28:05.344063 2521 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d676e713a9f9bdb0ef333af0423e3a978a265863987d14276ec5d59f17734f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-bhdmb" Feb 13 15:28:05.344197 kubelet[2521]: E0213 15:28:05.344120 2521 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d676e713a9f9bdb0ef333af0423e3a978a265863987d14276ec5d59f17734f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-bhdmb" Feb 13 
15:28:05.344336 kubelet[2521]: E0213 15:28:05.344163 2521 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-bhdmb_kube-system(48d6879b-40c7-4fb4-9137-f94a1e0bf631)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-bhdmb_kube-system(48d6879b-40c7-4fb4-9137-f94a1e0bf631)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9d676e713a9f9bdb0ef333af0423e3a978a265863987d14276ec5d59f17734f0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-bhdmb" podUID="48d6879b-40c7-4fb4-9137-f94a1e0bf631" Feb 13 15:28:05.345185 containerd[1438]: time="2025-02-13T15:28:05.344375837Z" level=error msg="Failed to destroy network for sandbox \"ff8fd863173aed4a1fc618a73607b9ac08318473fa88fefca838b1ac4c6441db\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:05.345185 containerd[1438]: time="2025-02-13T15:28:05.344646795Z" level=error msg="encountered an error cleaning up failed sandbox \"ff8fd863173aed4a1fc618a73607b9ac08318473fa88fefca838b1ac4c6441db\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:05.345185 containerd[1438]: time="2025-02-13T15:28:05.344691088Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c68d99f8f-wlrdk,Uid:b17d9265-9df2-40dd-a8b7-46383a0e17ce,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"ff8fd863173aed4a1fc618a73607b9ac08318473fa88fefca838b1ac4c6441db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:05.348595 containerd[1438]: time="2025-02-13T15:28:05.348553240Z" level=error msg="Failed to destroy network for sandbox \"2253f70f58a228177dae68e6db7242302fb163a8e7df684f97a846e3d6faf81d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:05.350160 kubelet[2521]: E0213 15:28:05.342410 2521 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-55bbcccb65-78qv4_calico-apiserver(db472e10-e2a5-49de-9955-0d1cf7adcfd6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-55bbcccb65-78qv4_calico-apiserver(db472e10-e2a5-49de-9955-0d1cf7adcfd6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0bd54dba73f69ad05fdd64de844dfb9298048bde545384bf327983a09dc6b98c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55bbcccb65-78qv4" podUID="db472e10-e2a5-49de-9955-0d1cf7adcfd6" Feb 13 15:28:05.350445 kubelet[2521]: E0213 15:28:05.350331 2521 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff8fd863173aed4a1fc618a73607b9ac08318473fa88fefca838b1ac4c6441db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:05.350445 kubelet[2521]: E0213 15:28:05.350391 2521 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff8fd863173aed4a1fc618a73607b9ac08318473fa88fefca838b1ac4c6441db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6c68d99f8f-wlrdk" Feb 13 15:28:05.350445 kubelet[2521]: E0213 15:28:05.350410 2521 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff8fd863173aed4a1fc618a73607b9ac08318473fa88fefca838b1ac4c6441db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6c68d99f8f-wlrdk" Feb 13 15:28:05.350539 kubelet[2521]: E0213 15:28:05.350440 2521 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6c68d99f8f-wlrdk_calico-system(b17d9265-9df2-40dd-a8b7-46383a0e17ce)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6c68d99f8f-wlrdk_calico-system(b17d9265-9df2-40dd-a8b7-46383a0e17ce)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ff8fd863173aed4a1fc618a73607b9ac08318473fa88fefca838b1ac4c6441db\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6c68d99f8f-wlrdk" podUID="b17d9265-9df2-40dd-a8b7-46383a0e17ce" Feb 13 15:28:05.350879 containerd[1438]: time="2025-02-13T15:28:05.350770118Z" level=error msg="encountered an error cleaning up failed sandbox 
\"2253f70f58a228177dae68e6db7242302fb163a8e7df684f97a846e3d6faf81d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:05.350879 containerd[1438]: time="2025-02-13T15:28:05.350834816Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55bbcccb65-mmhbv,Uid:2330a284-0835-4d9f-929e-909c050006b6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2253f70f58a228177dae68e6db7242302fb163a8e7df684f97a846e3d6faf81d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:05.351145 kubelet[2521]: E0213 15:28:05.351031 2521 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2253f70f58a228177dae68e6db7242302fb163a8e7df684f97a846e3d6faf81d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:05.351145 kubelet[2521]: E0213 15:28:05.351096 2521 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2253f70f58a228177dae68e6db7242302fb163a8e7df684f97a846e3d6faf81d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55bbcccb65-mmhbv" Feb 13 15:28:05.351145 kubelet[2521]: E0213 15:28:05.351113 2521 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"2253f70f58a228177dae68e6db7242302fb163a8e7df684f97a846e3d6faf81d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55bbcccb65-mmhbv" Feb 13 15:28:05.351231 kubelet[2521]: E0213 15:28:05.351150 2521 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-55bbcccb65-mmhbv_calico-apiserver(2330a284-0835-4d9f-929e-909c050006b6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-55bbcccb65-mmhbv_calico-apiserver(2330a284-0835-4d9f-929e-909c050006b6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2253f70f58a228177dae68e6db7242302fb163a8e7df684f97a846e3d6faf81d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55bbcccb65-mmhbv" podUID="2330a284-0835-4d9f-929e-909c050006b6" Feb 13 15:28:05.375718 systemd[1]: Created slice kubepods-besteffort-podd3daff69_f8cf_4771_8db4_eb9251b67560.slice - libcontainer container kubepods-besteffort-podd3daff69_f8cf_4771_8db4_eb9251b67560.slice. 
Feb 13 15:28:05.377824 containerd[1438]: time="2025-02-13T15:28:05.377788895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hmqx4,Uid:d3daff69-f8cf-4771-8db4-eb9251b67560,Namespace:calico-system,Attempt:0,}" Feb 13 15:28:05.424749 containerd[1438]: time="2025-02-13T15:28:05.424631457Z" level=error msg="Failed to destroy network for sandbox \"1e575dbffd3d4c245a8c3a4e13c078d1720170ccc9f1a8dfbda6639dd922a63d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:05.425270 containerd[1438]: time="2025-02-13T15:28:05.425052378Z" level=error msg="encountered an error cleaning up failed sandbox \"1e575dbffd3d4c245a8c3a4e13c078d1720170ccc9f1a8dfbda6639dd922a63d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:05.425270 containerd[1438]: time="2025-02-13T15:28:05.425138843Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hmqx4,Uid:d3daff69-f8cf-4771-8db4-eb9251b67560,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1e575dbffd3d4c245a8c3a4e13c078d1720170ccc9f1a8dfbda6639dd922a63d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:05.425429 kubelet[2521]: E0213 15:28:05.425383 2521 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e575dbffd3d4c245a8c3a4e13c078d1720170ccc9f1a8dfbda6639dd922a63d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:05.425485 kubelet[2521]: E0213 15:28:05.425451 2521 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e575dbffd3d4c245a8c3a4e13c078d1720170ccc9f1a8dfbda6639dd922a63d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hmqx4" Feb 13 15:28:05.425485 kubelet[2521]: E0213 15:28:05.425471 2521 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e575dbffd3d4c245a8c3a4e13c078d1720170ccc9f1a8dfbda6639dd922a63d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hmqx4" Feb 13 15:28:05.425540 kubelet[2521]: E0213 15:28:05.425514 2521 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hmqx4_calico-system(d3daff69-f8cf-4771-8db4-eb9251b67560)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hmqx4_calico-system(d3daff69-f8cf-4771-8db4-eb9251b67560)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1e575dbffd3d4c245a8c3a4e13c078d1720170ccc9f1a8dfbda6639dd922a63d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hmqx4" podUID="d3daff69-f8cf-4771-8db4-eb9251b67560" Feb 13 15:28:05.444056 kubelet[2521]: I0213 15:28:05.443557 2521 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="0bbc8283fb1680fd35c9202a2ab37d2c9e7100b2f1110b5295aeee2d24e7261e" Feb 13 15:28:05.445486 containerd[1438]: time="2025-02-13T15:28:05.444211533Z" level=info msg="StopPodSandbox for \"0bbc8283fb1680fd35c9202a2ab37d2c9e7100b2f1110b5295aeee2d24e7261e\"" Feb 13 15:28:05.445486 containerd[1438]: time="2025-02-13T15:28:05.444483771Z" level=info msg="Ensure that sandbox 0bbc8283fb1680fd35c9202a2ab37d2c9e7100b2f1110b5295aeee2d24e7261e in task-service has been cleanup successfully" Feb 13 15:28:05.445486 containerd[1438]: time="2025-02-13T15:28:05.444682548Z" level=info msg="TearDown network for sandbox \"0bbc8283fb1680fd35c9202a2ab37d2c9e7100b2f1110b5295aeee2d24e7261e\" successfully" Feb 13 15:28:05.445486 containerd[1438]: time="2025-02-13T15:28:05.444697072Z" level=info msg="StopPodSandbox for \"0bbc8283fb1680fd35c9202a2ab37d2c9e7100b2f1110b5295aeee2d24e7261e\" returns successfully" Feb 13 15:28:05.445644 kubelet[2521]: E0213 15:28:05.445562 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:05.446030 kubelet[2521]: I0213 15:28:05.446007 2521 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff8fd863173aed4a1fc618a73607b9ac08318473fa88fefca838b1ac4c6441db" Feb 13 15:28:05.446165 containerd[1438]: time="2025-02-13T15:28:05.446141328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xrphc,Uid:be132fa9-3a1c-4777-b6f8-2618a1865453,Namespace:kube-system,Attempt:1,}" Feb 13 15:28:05.446685 containerd[1438]: time="2025-02-13T15:28:05.446631069Z" level=info msg="StopPodSandbox for \"ff8fd863173aed4a1fc618a73607b9ac08318473fa88fefca838b1ac4c6441db\"" Feb 13 15:28:05.446811 containerd[1438]: time="2025-02-13T15:28:05.446792796Z" level=info msg="Ensure that sandbox ff8fd863173aed4a1fc618a73607b9ac08318473fa88fefca838b1ac4c6441db in task-service has been cleanup 
successfully" Feb 13 15:28:05.447330 containerd[1438]: time="2025-02-13T15:28:05.446964125Z" level=info msg="TearDown network for sandbox \"ff8fd863173aed4a1fc618a73607b9ac08318473fa88fefca838b1ac4c6441db\" successfully" Feb 13 15:28:05.447330 containerd[1438]: time="2025-02-13T15:28:05.446988252Z" level=info msg="StopPodSandbox for \"ff8fd863173aed4a1fc618a73607b9ac08318473fa88fefca838b1ac4c6441db\" returns successfully" Feb 13 15:28:05.447423 containerd[1438]: time="2025-02-13T15:28:05.447386807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c68d99f8f-wlrdk,Uid:b17d9265-9df2-40dd-a8b7-46383a0e17ce,Namespace:calico-system,Attempt:1,}" Feb 13 15:28:05.448091 kubelet[2521]: I0213 15:28:05.447954 2521 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e575dbffd3d4c245a8c3a4e13c078d1720170ccc9f1a8dfbda6639dd922a63d" Feb 13 15:28:05.449490 containerd[1438]: time="2025-02-13T15:28:05.449464565Z" level=info msg="StopPodSandbox for \"1e575dbffd3d4c245a8c3a4e13c078d1720170ccc9f1a8dfbda6639dd922a63d\"" Feb 13 15:28:05.449642 containerd[1438]: time="2025-02-13T15:28:05.449612527Z" level=info msg="Ensure that sandbox 1e575dbffd3d4c245a8c3a4e13c078d1720170ccc9f1a8dfbda6639dd922a63d in task-service has been cleanup successfully" Feb 13 15:28:05.450291 kubelet[2521]: I0213 15:28:05.450275 2521 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2253f70f58a228177dae68e6db7242302fb163a8e7df684f97a846e3d6faf81d" Feb 13 15:28:05.450470 containerd[1438]: time="2025-02-13T15:28:05.450439805Z" level=info msg="TearDown network for sandbox \"1e575dbffd3d4c245a8c3a4e13c078d1720170ccc9f1a8dfbda6639dd922a63d\" successfully" Feb 13 15:28:05.450729 containerd[1438]: time="2025-02-13T15:28:05.450696319Z" level=info msg="StopPodSandbox for \"1e575dbffd3d4c245a8c3a4e13c078d1720170ccc9f1a8dfbda6639dd922a63d\" returns successfully" Feb 13 15:28:05.450777 containerd[1438]: 
time="2025-02-13T15:28:05.450758697Z" level=info msg="StopPodSandbox for \"2253f70f58a228177dae68e6db7242302fb163a8e7df684f97a846e3d6faf81d\"" Feb 13 15:28:05.450908 containerd[1438]: time="2025-02-13T15:28:05.450892456Z" level=info msg="Ensure that sandbox 2253f70f58a228177dae68e6db7242302fb163a8e7df684f97a846e3d6faf81d in task-service has been cleanup successfully" Feb 13 15:28:05.451345 containerd[1438]: time="2025-02-13T15:28:05.451304934Z" level=info msg="TearDown network for sandbox \"2253f70f58a228177dae68e6db7242302fb163a8e7df684f97a846e3d6faf81d\" successfully" Feb 13 15:28:05.451378 containerd[1438]: time="2025-02-13T15:28:05.451344906Z" level=info msg="StopPodSandbox for \"2253f70f58a228177dae68e6db7242302fb163a8e7df684f97a846e3d6faf81d\" returns successfully" Feb 13 15:28:05.452421 containerd[1438]: time="2025-02-13T15:28:05.452400570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hmqx4,Uid:d3daff69-f8cf-4771-8db4-eb9251b67560,Namespace:calico-system,Attempt:1,}" Feb 13 15:28:05.452478 kubelet[2521]: I0213 15:28:05.452456 2521 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d676e713a9f9bdb0ef333af0423e3a978a265863987d14276ec5d59f17734f0" Feb 13 15:28:05.452638 containerd[1438]: time="2025-02-13T15:28:05.452613071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55bbcccb65-mmhbv,Uid:2330a284-0835-4d9f-929e-909c050006b6,Namespace:calico-apiserver,Attempt:1,}" Feb 13 15:28:05.453468 containerd[1438]: time="2025-02-13T15:28:05.453445511Z" level=info msg="StopPodSandbox for \"9d676e713a9f9bdb0ef333af0423e3a978a265863987d14276ec5d59f17734f0\"" Feb 13 15:28:05.454223 containerd[1438]: time="2025-02-13T15:28:05.454179082Z" level=info msg="Ensure that sandbox 9d676e713a9f9bdb0ef333af0423e3a978a265863987d14276ec5d59f17734f0 in task-service has been cleanup successfully" Feb 13 15:28:05.455039 containerd[1438]: time="2025-02-13T15:28:05.454998277Z" level=info 
msg="TearDown network for sandbox \"9d676e713a9f9bdb0ef333af0423e3a978a265863987d14276ec5d59f17734f0\" successfully" Feb 13 15:28:05.456145 containerd[1438]: time="2025-02-13T15:28:05.455024405Z" level=info msg="StopPodSandbox for \"9d676e713a9f9bdb0ef333af0423e3a978a265863987d14276ec5d59f17734f0\" returns successfully" Feb 13 15:28:05.456145 containerd[1438]: time="2025-02-13T15:28:05.455854404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bhdmb,Uid:48d6879b-40c7-4fb4-9137-f94a1e0bf631,Namespace:kube-system,Attempt:1,}" Feb 13 15:28:05.456214 kubelet[2521]: E0213 15:28:05.455469 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:05.457005 kubelet[2521]: E0213 15:28:05.456906 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:05.458908 kubelet[2521]: I0213 15:28:05.458472 2521 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0bd54dba73f69ad05fdd64de844dfb9298048bde545384bf327983a09dc6b98c" Feb 13 15:28:05.458999 containerd[1438]: time="2025-02-13T15:28:05.458471717Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Feb 13 15:28:05.458999 containerd[1438]: time="2025-02-13T15:28:05.458931650Z" level=info msg="StopPodSandbox for \"0bd54dba73f69ad05fdd64de844dfb9298048bde545384bf327983a09dc6b98c\"" Feb 13 15:28:05.459387 containerd[1438]: time="2025-02-13T15:28:05.459158635Z" level=info msg="Ensure that sandbox 0bd54dba73f69ad05fdd64de844dfb9298048bde545384bf327983a09dc6b98c in task-service has been cleanup successfully" Feb 13 15:28:05.459812 containerd[1438]: time="2025-02-13T15:28:05.459788056Z" level=info msg="TearDown network for sandbox \"0bd54dba73f69ad05fdd64de844dfb9298048bde545384bf327983a09dc6b98c\" 
successfully" Feb 13 15:28:05.459812 containerd[1438]: time="2025-02-13T15:28:05.459811303Z" level=info msg="StopPodSandbox for \"0bd54dba73f69ad05fdd64de844dfb9298048bde545384bf327983a09dc6b98c\" returns successfully" Feb 13 15:28:05.460895 containerd[1438]: time="2025-02-13T15:28:05.460700999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55bbcccb65-78qv4,Uid:db472e10-e2a5-49de-9955-0d1cf7adcfd6,Namespace:calico-apiserver,Attempt:1,}" Feb 13 15:28:05.582144 containerd[1438]: time="2025-02-13T15:28:05.582061930Z" level=error msg="Failed to destroy network for sandbox \"936cf36bd18b5d423427d367f40192355a28d275d154bbc72428cc0a0db2ce21\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:05.582549 containerd[1438]: time="2025-02-13T15:28:05.582460004Z" level=error msg="encountered an error cleaning up failed sandbox \"936cf36bd18b5d423427d367f40192355a28d275d154bbc72428cc0a0db2ce21\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:05.582549 containerd[1438]: time="2025-02-13T15:28:05.582535426Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55bbcccb65-mmhbv,Uid:2330a284-0835-4d9f-929e-909c050006b6,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"936cf36bd18b5d423427d367f40192355a28d275d154bbc72428cc0a0db2ce21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:05.582860 kubelet[2521]: E0213 15:28:05.582758 2521 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"936cf36bd18b5d423427d367f40192355a28d275d154bbc72428cc0a0db2ce21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:05.582860 kubelet[2521]: E0213 15:28:05.582827 2521 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"936cf36bd18b5d423427d367f40192355a28d275d154bbc72428cc0a0db2ce21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55bbcccb65-mmhbv" Feb 13 15:28:05.582860 kubelet[2521]: E0213 15:28:05.582849 2521 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"936cf36bd18b5d423427d367f40192355a28d275d154bbc72428cc0a0db2ce21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55bbcccb65-mmhbv" Feb 13 15:28:05.582980 kubelet[2521]: E0213 15:28:05.582887 2521 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-55bbcccb65-mmhbv_calico-apiserver(2330a284-0835-4d9f-929e-909c050006b6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-55bbcccb65-mmhbv_calico-apiserver(2330a284-0835-4d9f-929e-909c050006b6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"936cf36bd18b5d423427d367f40192355a28d275d154bbc72428cc0a0db2ce21\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55bbcccb65-mmhbv" podUID="2330a284-0835-4d9f-929e-909c050006b6" Feb 13 15:28:05.587563 containerd[1438]: time="2025-02-13T15:28:05.587275270Z" level=error msg="Failed to destroy network for sandbox \"051fcd08b73edccb4ccb159d0ff5c057bc8f7b121c26a4566843e5c34094c720\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:05.588736 containerd[1438]: time="2025-02-13T15:28:05.587678826Z" level=error msg="encountered an error cleaning up failed sandbox \"051fcd08b73edccb4ccb159d0ff5c057bc8f7b121c26a4566843e5c34094c720\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:05.588736 containerd[1438]: time="2025-02-13T15:28:05.587753248Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xrphc,Uid:be132fa9-3a1c-4777-b6f8-2618a1865453,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"051fcd08b73edccb4ccb159d0ff5c057bc8f7b121c26a4566843e5c34094c720\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:05.588736 containerd[1438]: time="2025-02-13T15:28:05.588724127Z" level=error msg="Failed to destroy network for sandbox \"3212d16950422236b26587f9b3647a1dbfd28814e3a0abc8871bf58f07a51551\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:05.588861 kubelet[2521]: E0213 15:28:05.588038 2521 
log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"051fcd08b73edccb4ccb159d0ff5c057bc8f7b121c26a4566843e5c34094c720\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:05.588861 kubelet[2521]: E0213 15:28:05.588112 2521 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"051fcd08b73edccb4ccb159d0ff5c057bc8f7b121c26a4566843e5c34094c720\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-xrphc" Feb 13 15:28:05.588861 kubelet[2521]: E0213 15:28:05.588132 2521 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"051fcd08b73edccb4ccb159d0ff5c057bc8f7b121c26a4566843e5c34094c720\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-xrphc" Feb 13 15:28:05.588962 kubelet[2521]: E0213 15:28:05.588170 2521 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-xrphc_kube-system(be132fa9-3a1c-4777-b6f8-2618a1865453)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-xrphc_kube-system(be132fa9-3a1c-4777-b6f8-2618a1865453)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"051fcd08b73edccb4ccb159d0ff5c057bc8f7b121c26a4566843e5c34094c720\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-xrphc" podUID="be132fa9-3a1c-4777-b6f8-2618a1865453" Feb 13 15:28:05.590199 containerd[1438]: time="2025-02-13T15:28:05.589029495Z" level=error msg="encountered an error cleaning up failed sandbox \"3212d16950422236b26587f9b3647a1dbfd28814e3a0abc8871bf58f07a51551\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:05.590199 containerd[1438]: time="2025-02-13T15:28:05.589095914Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bhdmb,Uid:48d6879b-40c7-4fb4-9137-f94a1e0bf631,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"3212d16950422236b26587f9b3647a1dbfd28814e3a0abc8871bf58f07a51551\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:05.590304 kubelet[2521]: E0213 15:28:05.589268 2521 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3212d16950422236b26587f9b3647a1dbfd28814e3a0abc8871bf58f07a51551\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:05.590304 kubelet[2521]: E0213 15:28:05.589323 2521 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3212d16950422236b26587f9b3647a1dbfd28814e3a0abc8871bf58f07a51551\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-bhdmb" Feb 13 15:28:05.590304 kubelet[2521]: E0213 15:28:05.589343 2521 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3212d16950422236b26587f9b3647a1dbfd28814e3a0abc8871bf58f07a51551\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-bhdmb" Feb 13 15:28:05.590377 kubelet[2521]: E0213 15:28:05.589459 2521 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-bhdmb_kube-system(48d6879b-40c7-4fb4-9137-f94a1e0bf631)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-bhdmb_kube-system(48d6879b-40c7-4fb4-9137-f94a1e0bf631)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3212d16950422236b26587f9b3647a1dbfd28814e3a0abc8871bf58f07a51551\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-bhdmb" podUID="48d6879b-40c7-4fb4-9137-f94a1e0bf631" Feb 13 15:28:05.606034 containerd[1438]: time="2025-02-13T15:28:05.605966890Z" level=error msg="Failed to destroy network for sandbox \"93372508d2f96d2502f51ec2872c102fd38e529d8ae7a472f2ade1091b6b7b5e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:05.606346 containerd[1438]: time="2025-02-13T15:28:05.606321712Z" level=error msg="encountered an error cleaning up failed sandbox \"93372508d2f96d2502f51ec2872c102fd38e529d8ae7a472f2ade1091b6b7b5e\", marking sandbox state as 
SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:05.606400 containerd[1438]: time="2025-02-13T15:28:05.606382850Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c68d99f8f-wlrdk,Uid:b17d9265-9df2-40dd-a8b7-46383a0e17ce,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"93372508d2f96d2502f51ec2872c102fd38e529d8ae7a472f2ade1091b6b7b5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:05.606747 kubelet[2521]: E0213 15:28:05.606713 2521 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93372508d2f96d2502f51ec2872c102fd38e529d8ae7a472f2ade1091b6b7b5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:05.606798 kubelet[2521]: E0213 15:28:05.606773 2521 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93372508d2f96d2502f51ec2872c102fd38e529d8ae7a472f2ade1091b6b7b5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6c68d99f8f-wlrdk" Feb 13 15:28:05.606828 kubelet[2521]: E0213 15:28:05.606799 2521 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93372508d2f96d2502f51ec2872c102fd38e529d8ae7a472f2ade1091b6b7b5e\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6c68d99f8f-wlrdk" Feb 13 15:28:05.607484 kubelet[2521]: E0213 15:28:05.606859 2521 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6c68d99f8f-wlrdk_calico-system(b17d9265-9df2-40dd-a8b7-46383a0e17ce)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6c68d99f8f-wlrdk_calico-system(b17d9265-9df2-40dd-a8b7-46383a0e17ce)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"93372508d2f96d2502f51ec2872c102fd38e529d8ae7a472f2ade1091b6b7b5e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6c68d99f8f-wlrdk" podUID="b17d9265-9df2-40dd-a8b7-46383a0e17ce" Feb 13 15:28:05.609744 containerd[1438]: time="2025-02-13T15:28:05.609708087Z" level=error msg="Failed to destroy network for sandbox \"39d45d65b330807fcecc7aa43e2979e258bdabd910b8af148e084f968f291a28\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:05.610433 containerd[1438]: time="2025-02-13T15:28:05.610271249Z" level=error msg="encountered an error cleaning up failed sandbox \"39d45d65b330807fcecc7aa43e2979e258bdabd910b8af148e084f968f291a28\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:05.610433 containerd[1438]: time="2025-02-13T15:28:05.610339789Z" 
level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55bbcccb65-78qv4,Uid:db472e10-e2a5-49de-9955-0d1cf7adcfd6,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"39d45d65b330807fcecc7aa43e2979e258bdabd910b8af148e084f968f291a28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:05.610552 kubelet[2521]: E0213 15:28:05.610519 2521 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39d45d65b330807fcecc7aa43e2979e258bdabd910b8af148e084f968f291a28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:05.610681 kubelet[2521]: E0213 15:28:05.610617 2521 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39d45d65b330807fcecc7aa43e2979e258bdabd910b8af148e084f968f291a28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55bbcccb65-78qv4" Feb 13 15:28:05.610712 kubelet[2521]: E0213 15:28:05.610682 2521 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39d45d65b330807fcecc7aa43e2979e258bdabd910b8af148e084f968f291a28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55bbcccb65-78qv4" Feb 13 15:28:05.612675 kubelet[2521]: E0213 15:28:05.612083 2521 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-55bbcccb65-78qv4_calico-apiserver(db472e10-e2a5-49de-9955-0d1cf7adcfd6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-55bbcccb65-78qv4_calico-apiserver(db472e10-e2a5-49de-9955-0d1cf7adcfd6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"39d45d65b330807fcecc7aa43e2979e258bdabd910b8af148e084f968f291a28\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55bbcccb65-78qv4" podUID="db472e10-e2a5-49de-9955-0d1cf7adcfd6" Feb 13 15:28:05.612780 containerd[1438]: time="2025-02-13T15:28:05.612187161Z" level=error msg="Failed to destroy network for sandbox \"a6226e92350381ab768784fb94d579846ab05715195002b23c3ad52b9bb59176\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:05.614231 containerd[1438]: time="2025-02-13T15:28:05.614197459Z" level=error msg="encountered an error cleaning up failed sandbox \"a6226e92350381ab768784fb94d579846ab05715195002b23c3ad52b9bb59176\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:05.614369 containerd[1438]: time="2025-02-13T15:28:05.614348263Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hmqx4,Uid:d3daff69-f8cf-4771-8db4-eb9251b67560,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"a6226e92350381ab768784fb94d579846ab05715195002b23c3ad52b9bb59176\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:05.614717 kubelet[2521]: E0213 15:28:05.614649 2521 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6226e92350381ab768784fb94d579846ab05715195002b23c3ad52b9bb59176\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:05.614802 kubelet[2521]: E0213 15:28:05.614714 2521 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6226e92350381ab768784fb94d579846ab05715195002b23c3ad52b9bb59176\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hmqx4" Feb 13 15:28:05.614802 kubelet[2521]: E0213 15:28:05.614738 2521 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6226e92350381ab768784fb94d579846ab05715195002b23c3ad52b9bb59176\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hmqx4" Feb 13 15:28:05.614802 kubelet[2521]: E0213 15:28:05.614769 2521 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hmqx4_calico-system(d3daff69-f8cf-4771-8db4-eb9251b67560)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hmqx4_calico-system(d3daff69-f8cf-4771-8db4-eb9251b67560)\\\": rpc error: code = Unknown desc = failed to 
setup network for sandbox \\\"a6226e92350381ab768784fb94d579846ab05715195002b23c3ad52b9bb59176\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hmqx4" podUID="d3daff69-f8cf-4771-8db4-eb9251b67560" Feb 13 15:28:05.864906 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2253f70f58a228177dae68e6db7242302fb163a8e7df684f97a846e3d6faf81d-shm.mount: Deactivated successfully. Feb 13 15:28:05.865002 systemd[1]: run-netns-cni\x2dcf4aec46\x2d32b4\x2dbee6\x2d1fb2\x2dd1de77e82c20.mount: Deactivated successfully. Feb 13 15:28:05.865048 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0bbc8283fb1680fd35c9202a2ab37d2c9e7100b2f1110b5295aeee2d24e7261e-shm.mount: Deactivated successfully. Feb 13 15:28:05.865115 systemd[1]: run-netns-cni\x2d9a680f5c\x2d4912\x2d0abb\x2d65f9\x2d32f0d546a2ee.mount: Deactivated successfully. Feb 13 15:28:05.865167 systemd[1]: run-netns-cni\x2d118b5139\x2d787d\x2d3155\x2d5b07\x2d323be85713eb.mount: Deactivated successfully. Feb 13 15:28:05.865222 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9d676e713a9f9bdb0ef333af0423e3a978a265863987d14276ec5d59f17734f0-shm.mount: Deactivated successfully. Feb 13 15:28:05.865273 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ff8fd863173aed4a1fc618a73607b9ac08318473fa88fefca838b1ac4c6441db-shm.mount: Deactivated successfully. 
Feb 13 15:28:06.463120 kubelet[2521]: I0213 15:28:06.463065 2521 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39d45d65b330807fcecc7aa43e2979e258bdabd910b8af148e084f968f291a28" Feb 13 15:28:06.463897 containerd[1438]: time="2025-02-13T15:28:06.463566648Z" level=info msg="StopPodSandbox for \"39d45d65b330807fcecc7aa43e2979e258bdabd910b8af148e084f968f291a28\"" Feb 13 15:28:06.463897 containerd[1438]: time="2025-02-13T15:28:06.463729133Z" level=info msg="Ensure that sandbox 39d45d65b330807fcecc7aa43e2979e258bdabd910b8af148e084f968f291a28 in task-service has been cleanup successfully" Feb 13 15:28:06.464618 containerd[1438]: time="2025-02-13T15:28:06.464499185Z" level=info msg="TearDown network for sandbox \"39d45d65b330807fcecc7aa43e2979e258bdabd910b8af148e084f968f291a28\" successfully" Feb 13 15:28:06.464618 containerd[1438]: time="2025-02-13T15:28:06.464522712Z" level=info msg="StopPodSandbox for \"39d45d65b330807fcecc7aa43e2979e258bdabd910b8af148e084f968f291a28\" returns successfully" Feb 13 15:28:06.465366 containerd[1438]: time="2025-02-13T15:28:06.465339297Z" level=info msg="StopPodSandbox for \"0bd54dba73f69ad05fdd64de844dfb9298048bde545384bf327983a09dc6b98c\"" Feb 13 15:28:06.466152 systemd[1]: run-netns-cni\x2d21a7ae08\x2d528f\x2de43c\x2d392f\x2d2689bc690363.mount: Deactivated successfully. 
Feb 13 15:28:06.466459 containerd[1438]: time="2025-02-13T15:28:06.466428518Z" level=info msg="TearDown network for sandbox \"0bd54dba73f69ad05fdd64de844dfb9298048bde545384bf327983a09dc6b98c\" successfully" Feb 13 15:28:06.466459 containerd[1438]: time="2025-02-13T15:28:06.466457166Z" level=info msg="StopPodSandbox for \"0bd54dba73f69ad05fdd64de844dfb9298048bde545384bf327983a09dc6b98c\" returns successfully" Feb 13 15:28:06.468552 containerd[1438]: time="2025-02-13T15:28:06.468507292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55bbcccb65-78qv4,Uid:db472e10-e2a5-49de-9955-0d1cf7adcfd6,Namespace:calico-apiserver,Attempt:2,}" Feb 13 15:28:06.469275 kubelet[2521]: I0213 15:28:06.469235 2521 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6226e92350381ab768784fb94d579846ab05715195002b23c3ad52b9bb59176" Feb 13 15:28:06.469903 containerd[1438]: time="2025-02-13T15:28:06.469814173Z" level=info msg="StopPodSandbox for \"a6226e92350381ab768784fb94d579846ab05715195002b23c3ad52b9bb59176\"" Feb 13 15:28:06.470105 containerd[1438]: time="2025-02-13T15:28:06.470008747Z" level=info msg="Ensure that sandbox a6226e92350381ab768784fb94d579846ab05715195002b23c3ad52b9bb59176 in task-service has been cleanup successfully" Feb 13 15:28:06.470552 containerd[1438]: time="2025-02-13T15:28:06.470528811Z" level=info msg="TearDown network for sandbox \"a6226e92350381ab768784fb94d579846ab05715195002b23c3ad52b9bb59176\" successfully" Feb 13 15:28:06.470636 containerd[1438]: time="2025-02-13T15:28:06.470554378Z" level=info msg="StopPodSandbox for \"a6226e92350381ab768784fb94d579846ab05715195002b23c3ad52b9bb59176\" returns successfully" Feb 13 15:28:06.470948 containerd[1438]: time="2025-02-13T15:28:06.470801126Z" level=info msg="StopPodSandbox for \"1e575dbffd3d4c245a8c3a4e13c078d1720170ccc9f1a8dfbda6639dd922a63d\"" Feb 13 15:28:06.470948 containerd[1438]: time="2025-02-13T15:28:06.470879268Z" level=info msg="TearDown network 
for sandbox \"1e575dbffd3d4c245a8c3a4e13c078d1720170ccc9f1a8dfbda6639dd922a63d\" successfully" Feb 13 15:28:06.470948 containerd[1438]: time="2025-02-13T15:28:06.470888790Z" level=info msg="StopPodSandbox for \"1e575dbffd3d4c245a8c3a4e13c078d1720170ccc9f1a8dfbda6639dd922a63d\" returns successfully" Feb 13 15:28:06.472190 systemd[1]: run-netns-cni\x2de1c764db\x2de495\x2df2da\x2d52b5\x2d51264ae75148.mount: Deactivated successfully. Feb 13 15:28:06.474293 containerd[1438]: time="2025-02-13T15:28:06.474213509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hmqx4,Uid:d3daff69-f8cf-4771-8db4-eb9251b67560,Namespace:calico-system,Attempt:2,}" Feb 13 15:28:06.474392 kubelet[2521]: I0213 15:28:06.474279 2521 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="936cf36bd18b5d423427d367f40192355a28d275d154bbc72428cc0a0db2ce21" Feb 13 15:28:06.474883 containerd[1438]: time="2025-02-13T15:28:06.474781025Z" level=info msg="StopPodSandbox for \"936cf36bd18b5d423427d367f40192355a28d275d154bbc72428cc0a0db2ce21\"" Feb 13 15:28:06.474990 containerd[1438]: time="2025-02-13T15:28:06.474928946Z" level=info msg="Ensure that sandbox 936cf36bd18b5d423427d367f40192355a28d275d154bbc72428cc0a0db2ce21 in task-service has been cleanup successfully" Feb 13 15:28:06.476156 containerd[1438]: time="2025-02-13T15:28:06.476130598Z" level=info msg="TearDown network for sandbox \"936cf36bd18b5d423427d367f40192355a28d275d154bbc72428cc0a0db2ce21\" successfully" Feb 13 15:28:06.476278 containerd[1438]: time="2025-02-13T15:28:06.476208500Z" level=info msg="StopPodSandbox for \"936cf36bd18b5d423427d367f40192355a28d275d154bbc72428cc0a0db2ce21\" returns successfully" Feb 13 15:28:06.477778 containerd[1438]: time="2025-02-13T15:28:06.477644416Z" level=info msg="StopPodSandbox for \"2253f70f58a228177dae68e6db7242302fb163a8e7df684f97a846e3d6faf81d\"" Feb 13 15:28:06.478278 systemd[1]: 
run-netns-cni\x2d6b2abd83\x2ddaa8\x2dffa9\x2d679e\x2dce789a2443af.mount: Deactivated successfully. Feb 13 15:28:06.479326 containerd[1438]: time="2025-02-13T15:28:06.479144551Z" level=info msg="TearDown network for sandbox \"2253f70f58a228177dae68e6db7242302fb163a8e7df684f97a846e3d6faf81d\" successfully" Feb 13 15:28:06.479326 containerd[1438]: time="2025-02-13T15:28:06.479163676Z" level=info msg="StopPodSandbox for \"2253f70f58a228177dae68e6db7242302fb163a8e7df684f97a846e3d6faf81d\" returns successfully" Feb 13 15:28:06.479796 kubelet[2521]: I0213 15:28:06.479771 2521 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3212d16950422236b26587f9b3647a1dbfd28814e3a0abc8871bf58f07a51551" Feb 13 15:28:06.480290 containerd[1438]: time="2025-02-13T15:28:06.480231731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55bbcccb65-mmhbv,Uid:2330a284-0835-4d9f-929e-909c050006b6,Namespace:calico-apiserver,Attempt:2,}" Feb 13 15:28:06.481099 containerd[1438]: time="2025-02-13T15:28:06.480755596Z" level=info msg="StopPodSandbox for \"3212d16950422236b26587f9b3647a1dbfd28814e3a0abc8871bf58f07a51551\"" Feb 13 15:28:06.481676 containerd[1438]: time="2025-02-13T15:28:06.481434183Z" level=info msg="Ensure that sandbox 3212d16950422236b26587f9b3647a1dbfd28814e3a0abc8871bf58f07a51551 in task-service has been cleanup successfully" Feb 13 15:28:06.482032 containerd[1438]: time="2025-02-13T15:28:06.482009262Z" level=info msg="TearDown network for sandbox \"3212d16950422236b26587f9b3647a1dbfd28814e3a0abc8871bf58f07a51551\" successfully" Feb 13 15:28:06.482122 containerd[1438]: time="2025-02-13T15:28:06.482106969Z" level=info msg="StopPodSandbox for \"3212d16950422236b26587f9b3647a1dbfd28814e3a0abc8871bf58f07a51551\" returns successfully" Feb 13 15:28:06.483406 containerd[1438]: time="2025-02-13T15:28:06.483364597Z" level=info msg="StopPodSandbox for \"9d676e713a9f9bdb0ef333af0423e3a978a265863987d14276ec5d59f17734f0\"" Feb 13 
15:28:06.483710 containerd[1438]: time="2025-02-13T15:28:06.483618867Z" level=info msg="TearDown network for sandbox \"9d676e713a9f9bdb0ef333af0423e3a978a265863987d14276ec5d59f17734f0\" successfully" Feb 13 15:28:06.483710 containerd[1438]: time="2025-02-13T15:28:06.483637352Z" level=info msg="StopPodSandbox for \"9d676e713a9f9bdb0ef333af0423e3a978a265863987d14276ec5d59f17734f0\" returns successfully" Feb 13 15:28:06.483882 kubelet[2521]: I0213 15:28:06.483852 2521 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="051fcd08b73edccb4ccb159d0ff5c057bc8f7b121c26a4566843e5c34094c720" Feb 13 15:28:06.483934 kubelet[2521]: E0213 15:28:06.483885 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:06.484502 containerd[1438]: time="2025-02-13T15:28:06.484167578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bhdmb,Uid:48d6879b-40c7-4fb4-9137-f94a1e0bf631,Namespace:kube-system,Attempt:2,}" Feb 13 15:28:06.484502 containerd[1438]: time="2025-02-13T15:28:06.484491668Z" level=info msg="StopPodSandbox for \"051fcd08b73edccb4ccb159d0ff5c057bc8f7b121c26a4566843e5c34094c720\"" Feb 13 15:28:06.484672 containerd[1438]: time="2025-02-13T15:28:06.484635908Z" level=info msg="Ensure that sandbox 051fcd08b73edccb4ccb159d0ff5c057bc8f7b121c26a4566843e5c34094c720 in task-service has been cleanup successfully" Feb 13 15:28:06.484689 systemd[1]: run-netns-cni\x2d4d219048\x2d0b0d\x2df021\x2deff3\x2d6d96c8bbbd63.mount: Deactivated successfully. 
Feb 13 15:28:06.486373 kubelet[2521]: I0213 15:28:06.486349 2521 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93372508d2f96d2502f51ec2872c102fd38e529d8ae7a472f2ade1091b6b7b5e" Feb 13 15:28:06.486829 containerd[1438]: time="2025-02-13T15:28:06.486790543Z" level=info msg="StopPodSandbox for \"93372508d2f96d2502f51ec2872c102fd38e529d8ae7a472f2ade1091b6b7b5e\"" Feb 13 15:28:06.487193 containerd[1438]: time="2025-02-13T15:28:06.487159325Z" level=info msg="Ensure that sandbox 93372508d2f96d2502f51ec2872c102fd38e529d8ae7a472f2ade1091b6b7b5e in task-service has been cleanup successfully" Feb 13 15:28:06.487486 containerd[1438]: time="2025-02-13T15:28:06.487453566Z" level=info msg="TearDown network for sandbox \"93372508d2f96d2502f51ec2872c102fd38e529d8ae7a472f2ade1091b6b7b5e\" successfully" Feb 13 15:28:06.487657 containerd[1438]: time="2025-02-13T15:28:06.487524426Z" level=info msg="StopPodSandbox for \"93372508d2f96d2502f51ec2872c102fd38e529d8ae7a472f2ade1091b6b7b5e\" returns successfully" Feb 13 15:28:06.487914 containerd[1438]: time="2025-02-13T15:28:06.487897249Z" level=info msg="StopPodSandbox for \"ff8fd863173aed4a1fc618a73607b9ac08318473fa88fefca838b1ac4c6441db\"" Feb 13 15:28:06.488065 containerd[1438]: time="2025-02-13T15:28:06.488027925Z" level=info msg="TearDown network for sandbox \"ff8fd863173aed4a1fc618a73607b9ac08318473fa88fefca838b1ac4c6441db\" successfully" Feb 13 15:28:06.488190 containerd[1438]: time="2025-02-13T15:28:06.488161202Z" level=info msg="StopPodSandbox for \"ff8fd863173aed4a1fc618a73607b9ac08318473fa88fefca838b1ac4c6441db\" returns successfully" Feb 13 15:28:06.488309 containerd[1438]: time="2025-02-13T15:28:06.488124792Z" level=info msg="TearDown network for sandbox \"051fcd08b73edccb4ccb159d0ff5c057bc8f7b121c26a4566843e5c34094c720\" successfully" Feb 13 15:28:06.488309 containerd[1438]: time="2025-02-13T15:28:06.488265991Z" level=info msg="StopPodSandbox for 
\"051fcd08b73edccb4ccb159d0ff5c057bc8f7b121c26a4566843e5c34094c720\" returns successfully" Feb 13 15:28:06.489469 containerd[1438]: time="2025-02-13T15:28:06.489409786Z" level=info msg="StopPodSandbox for \"0bbc8283fb1680fd35c9202a2ab37d2c9e7100b2f1110b5295aeee2d24e7261e\"" Feb 13 15:28:06.490285 containerd[1438]: time="2025-02-13T15:28:06.490186961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c68d99f8f-wlrdk,Uid:b17d9265-9df2-40dd-a8b7-46383a0e17ce,Namespace:calico-system,Attempt:2,}" Feb 13 15:28:06.490543 containerd[1438]: time="2025-02-13T15:28:06.490473720Z" level=info msg="TearDown network for sandbox \"0bbc8283fb1680fd35c9202a2ab37d2c9e7100b2f1110b5295aeee2d24e7261e\" successfully" Feb 13 15:28:06.490543 containerd[1438]: time="2025-02-13T15:28:06.490495927Z" level=info msg="StopPodSandbox for \"0bbc8283fb1680fd35c9202a2ab37d2c9e7100b2f1110b5295aeee2d24e7261e\" returns successfully" Feb 13 15:28:06.490914 kubelet[2521]: E0213 15:28:06.490863 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:06.491635 containerd[1438]: time="2025-02-13T15:28:06.491603272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xrphc,Uid:be132fa9-3a1c-4777-b6f8-2618a1865453,Namespace:kube-system,Attempt:2,}" Feb 13 15:28:06.781843 containerd[1438]: time="2025-02-13T15:28:06.781708970Z" level=error msg="Failed to destroy network for sandbox \"ebd99b89a667d6242d41d6f195e59eb6706c8195f2e4a6e0ccb977bed11e5855\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:06.782538 containerd[1438]: time="2025-02-13T15:28:06.782331582Z" level=error msg="encountered an error cleaning up failed sandbox 
\"ebd99b89a667d6242d41d6f195e59eb6706c8195f2e4a6e0ccb977bed11e5855\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:06.782598 containerd[1438]: time="2025-02-13T15:28:06.782563646Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55bbcccb65-78qv4,Uid:db472e10-e2a5-49de-9955-0d1cf7adcfd6,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"ebd99b89a667d6242d41d6f195e59eb6706c8195f2e4a6e0ccb977bed11e5855\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:06.784092 kubelet[2521]: E0213 15:28:06.783346 2521 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ebd99b89a667d6242d41d6f195e59eb6706c8195f2e4a6e0ccb977bed11e5855\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:06.784092 kubelet[2521]: E0213 15:28:06.783409 2521 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ebd99b89a667d6242d41d6f195e59eb6706c8195f2e4a6e0ccb977bed11e5855\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55bbcccb65-78qv4" Feb 13 15:28:06.784092 kubelet[2521]: E0213 15:28:06.783428 2521 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"ebd99b89a667d6242d41d6f195e59eb6706c8195f2e4a6e0ccb977bed11e5855\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55bbcccb65-78qv4" Feb 13 15:28:06.784351 kubelet[2521]: E0213 15:28:06.783465 2521 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-55bbcccb65-78qv4_calico-apiserver(db472e10-e2a5-49de-9955-0d1cf7adcfd6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-55bbcccb65-78qv4_calico-apiserver(db472e10-e2a5-49de-9955-0d1cf7adcfd6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ebd99b89a667d6242d41d6f195e59eb6706c8195f2e4a6e0ccb977bed11e5855\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55bbcccb65-78qv4" podUID="db472e10-e2a5-49de-9955-0d1cf7adcfd6" Feb 13 15:28:06.834362 containerd[1438]: time="2025-02-13T15:28:06.834298257Z" level=error msg="Failed to destroy network for sandbox \"5bcbbbaf93ca2f8c82394fbd2b1b82ed092a9669f081bf4b7be356296412d245\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:06.834698 containerd[1438]: time="2025-02-13T15:28:06.834669159Z" level=error msg="encountered an error cleaning up failed sandbox \"5bcbbbaf93ca2f8c82394fbd2b1b82ed092a9669f081bf4b7be356296412d245\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 
15:28:06.834815 containerd[1438]: time="2025-02-13T15:28:06.834729856Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hmqx4,Uid:d3daff69-f8cf-4771-8db4-eb9251b67560,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"5bcbbbaf93ca2f8c82394fbd2b1b82ed092a9669f081bf4b7be356296412d245\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:06.835107 kubelet[2521]: E0213 15:28:06.834971 2521 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5bcbbbaf93ca2f8c82394fbd2b1b82ed092a9669f081bf4b7be356296412d245\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:06.835107 kubelet[2521]: E0213 15:28:06.835034 2521 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5bcbbbaf93ca2f8c82394fbd2b1b82ed092a9669f081bf4b7be356296412d245\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hmqx4" Feb 13 15:28:06.835107 kubelet[2521]: E0213 15:28:06.835058 2521 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5bcbbbaf93ca2f8c82394fbd2b1b82ed092a9669f081bf4b7be356296412d245\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hmqx4" Feb 13 15:28:06.835223 kubelet[2521]: 
E0213 15:28:06.835111 2521 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hmqx4_calico-system(d3daff69-f8cf-4771-8db4-eb9251b67560)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hmqx4_calico-system(d3daff69-f8cf-4771-8db4-eb9251b67560)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5bcbbbaf93ca2f8c82394fbd2b1b82ed092a9669f081bf4b7be356296412d245\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hmqx4" podUID="d3daff69-f8cf-4771-8db4-eb9251b67560" Feb 13 15:28:06.869343 systemd[1]: run-netns-cni\x2df0c27fbc\x2d633d\x2de69d\x2d5bf1\x2d18e01ef9830e.mount: Deactivated successfully. Feb 13 15:28:06.869643 systemd[1]: run-netns-cni\x2dc036fe84\x2d32a8\x2dfc53\x2d20cc\x2d6df1ef558136.mount: Deactivated successfully. 
Feb 13 15:28:06.880095 containerd[1438]: time="2025-02-13T15:28:06.880007724Z" level=error msg="Failed to destroy network for sandbox \"233214fc7bb247cadbdaa1e736dbf4f26b5ab5d9169eb6a9c3f71561a51e36c2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:06.880484 containerd[1438]: time="2025-02-13T15:28:06.880445524Z" level=error msg="encountered an error cleaning up failed sandbox \"233214fc7bb247cadbdaa1e736dbf4f26b5ab5d9169eb6a9c3f71561a51e36c2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:06.880549 containerd[1438]: time="2025-02-13T15:28:06.880525587Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55bbcccb65-mmhbv,Uid:2330a284-0835-4d9f-929e-909c050006b6,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"233214fc7bb247cadbdaa1e736dbf4f26b5ab5d9169eb6a9c3f71561a51e36c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:06.882526 kubelet[2521]: E0213 15:28:06.882488 2521 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"233214fc7bb247cadbdaa1e736dbf4f26b5ab5d9169eb6a9c3f71561a51e36c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:06.882620 kubelet[2521]: E0213 15:28:06.882544 2521 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"233214fc7bb247cadbdaa1e736dbf4f26b5ab5d9169eb6a9c3f71561a51e36c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55bbcccb65-mmhbv" Feb 13 15:28:06.882620 kubelet[2521]: E0213 15:28:06.882566 2521 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"233214fc7bb247cadbdaa1e736dbf4f26b5ab5d9169eb6a9c3f71561a51e36c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55bbcccb65-mmhbv" Feb 13 15:28:06.882675 kubelet[2521]: E0213 15:28:06.882607 2521 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-55bbcccb65-mmhbv_calico-apiserver(2330a284-0835-4d9f-929e-909c050006b6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-55bbcccb65-mmhbv_calico-apiserver(2330a284-0835-4d9f-929e-909c050006b6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"233214fc7bb247cadbdaa1e736dbf4f26b5ab5d9169eb6a9c3f71561a51e36c2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55bbcccb65-mmhbv" podUID="2330a284-0835-4d9f-929e-909c050006b6" Feb 13 15:28:06.884198 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-233214fc7bb247cadbdaa1e736dbf4f26b5ab5d9169eb6a9c3f71561a51e36c2-shm.mount: Deactivated successfully. 
Feb 13 15:28:06.900544 containerd[1438]: time="2025-02-13T15:28:06.900484500Z" level=error msg="Failed to destroy network for sandbox \"7ebcedfe59cbd698c1faf3d81c5cddc2808250aba7f1b5b95ba461e61be017a4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:06.900902 containerd[1438]: time="2025-02-13T15:28:06.900854402Z" level=error msg="encountered an error cleaning up failed sandbox \"7ebcedfe59cbd698c1faf3d81c5cddc2808250aba7f1b5b95ba461e61be017a4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:06.900967 containerd[1438]: time="2025-02-13T15:28:06.900932144Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xrphc,Uid:be132fa9-3a1c-4777-b6f8-2618a1865453,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"7ebcedfe59cbd698c1faf3d81c5cddc2808250aba7f1b5b95ba461e61be017a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:06.902636 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7ebcedfe59cbd698c1faf3d81c5cddc2808250aba7f1b5b95ba461e61be017a4-shm.mount: Deactivated successfully. 
Feb 13 15:28:06.904617 kubelet[2521]: E0213 15:28:06.904571 2521 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ebcedfe59cbd698c1faf3d81c5cddc2808250aba7f1b5b95ba461e61be017a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:06.904732 kubelet[2521]: E0213 15:28:06.904634 2521 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ebcedfe59cbd698c1faf3d81c5cddc2808250aba7f1b5b95ba461e61be017a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-xrphc" Feb 13 15:28:06.904732 kubelet[2521]: E0213 15:28:06.904652 2521 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ebcedfe59cbd698c1faf3d81c5cddc2808250aba7f1b5b95ba461e61be017a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-xrphc" Feb 13 15:28:06.904732 kubelet[2521]: E0213 15:28:06.904698 2521 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-xrphc_kube-system(be132fa9-3a1c-4777-b6f8-2618a1865453)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-xrphc_kube-system(be132fa9-3a1c-4777-b6f8-2618a1865453)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7ebcedfe59cbd698c1faf3d81c5cddc2808250aba7f1b5b95ba461e61be017a4\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-xrphc" podUID="be132fa9-3a1c-4777-b6f8-2618a1865453" Feb 13 15:28:07.001266 containerd[1438]: time="2025-02-13T15:28:07.001205641Z" level=error msg="Failed to destroy network for sandbox \"c874b9a1f46bbd68aaf2a7ea1fe0a5aa6e80af9b640025e9e2baccd227229769\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:07.003957 containerd[1438]: time="2025-02-13T15:28:07.003616081Z" level=error msg="encountered an error cleaning up failed sandbox \"c874b9a1f46bbd68aaf2a7ea1fe0a5aa6e80af9b640025e9e2baccd227229769\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:07.003957 containerd[1438]: time="2025-02-13T15:28:07.003694782Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c68d99f8f-wlrdk,Uid:b17d9265-9df2-40dd-a8b7-46383a0e17ce,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"c874b9a1f46bbd68aaf2a7ea1fe0a5aa6e80af9b640025e9e2baccd227229769\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:07.005045 kubelet[2521]: E0213 15:28:07.003917 2521 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c874b9a1f46bbd68aaf2a7ea1fe0a5aa6e80af9b640025e9e2baccd227229769\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:07.005045 kubelet[2521]: E0213 15:28:07.003974 2521 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c874b9a1f46bbd68aaf2a7ea1fe0a5aa6e80af9b640025e9e2baccd227229769\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6c68d99f8f-wlrdk" Feb 13 15:28:07.005045 kubelet[2521]: E0213 15:28:07.003996 2521 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c874b9a1f46bbd68aaf2a7ea1fe0a5aa6e80af9b640025e9e2baccd227229769\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6c68d99f8f-wlrdk" Feb 13 15:28:07.004362 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c874b9a1f46bbd68aaf2a7ea1fe0a5aa6e80af9b640025e9e2baccd227229769-shm.mount: Deactivated successfully. 
Feb 13 15:28:07.005231 kubelet[2521]: E0213 15:28:07.004041 2521 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6c68d99f8f-wlrdk_calico-system(b17d9265-9df2-40dd-a8b7-46383a0e17ce)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6c68d99f8f-wlrdk_calico-system(b17d9265-9df2-40dd-a8b7-46383a0e17ce)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c874b9a1f46bbd68aaf2a7ea1fe0a5aa6e80af9b640025e9e2baccd227229769\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6c68d99f8f-wlrdk" podUID="b17d9265-9df2-40dd-a8b7-46383a0e17ce" Feb 13 15:28:07.017095 containerd[1438]: time="2025-02-13T15:28:07.017029040Z" level=error msg="Failed to destroy network for sandbox \"365686e2ea3126d4074c6dfa28c3e1ea5feabc62ece6b663740e99931c83d2ab\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:07.017632 containerd[1438]: time="2025-02-13T15:28:07.017558901Z" level=error msg="encountered an error cleaning up failed sandbox \"365686e2ea3126d4074c6dfa28c3e1ea5feabc62ece6b663740e99931c83d2ab\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:07.017824 containerd[1438]: time="2025-02-13T15:28:07.017796444Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bhdmb,Uid:48d6879b-40c7-4fb4-9137-f94a1e0bf631,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox 
\"365686e2ea3126d4074c6dfa28c3e1ea5feabc62ece6b663740e99931c83d2ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:07.018222 kubelet[2521]: E0213 15:28:07.018177 2521 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"365686e2ea3126d4074c6dfa28c3e1ea5feabc62ece6b663740e99931c83d2ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:07.018288 kubelet[2521]: E0213 15:28:07.018243 2521 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"365686e2ea3126d4074c6dfa28c3e1ea5feabc62ece6b663740e99931c83d2ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-bhdmb" Feb 13 15:28:07.018288 kubelet[2521]: E0213 15:28:07.018266 2521 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"365686e2ea3126d4074c6dfa28c3e1ea5feabc62ece6b663740e99931c83d2ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-bhdmb" Feb 13 15:28:07.018341 kubelet[2521]: E0213 15:28:07.018311 2521 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-bhdmb_kube-system(48d6879b-40c7-4fb4-9137-f94a1e0bf631)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-6f6b679f8f-bhdmb_kube-system(48d6879b-40c7-4fb4-9137-f94a1e0bf631)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"365686e2ea3126d4074c6dfa28c3e1ea5feabc62ece6b663740e99931c83d2ab\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-bhdmb" podUID="48d6879b-40c7-4fb4-9137-f94a1e0bf631" Feb 13 15:28:07.490901 kubelet[2521]: I0213 15:28:07.490870 2521 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5bcbbbaf93ca2f8c82394fbd2b1b82ed092a9669f081bf4b7be356296412d245" Feb 13 15:28:07.492269 containerd[1438]: time="2025-02-13T15:28:07.492231824Z" level=info msg="StopPodSandbox for \"5bcbbbaf93ca2f8c82394fbd2b1b82ed092a9669f081bf4b7be356296412d245\"" Feb 13 15:28:07.493459 containerd[1438]: time="2025-02-13T15:28:07.493332636Z" level=info msg="Ensure that sandbox 5bcbbbaf93ca2f8c82394fbd2b1b82ed092a9669f081bf4b7be356296412d245 in task-service has been cleanup successfully" Feb 13 15:28:07.494583 containerd[1438]: time="2025-02-13T15:28:07.493599227Z" level=info msg="TearDown network for sandbox \"5bcbbbaf93ca2f8c82394fbd2b1b82ed092a9669f081bf4b7be356296412d245\" successfully" Feb 13 15:28:07.494583 containerd[1438]: time="2025-02-13T15:28:07.493619232Z" level=info msg="StopPodSandbox for \"5bcbbbaf93ca2f8c82394fbd2b1b82ed092a9669f081bf4b7be356296412d245\" returns successfully" Feb 13 15:28:07.494583 containerd[1438]: time="2025-02-13T15:28:07.494413803Z" level=info msg="StopPodSandbox for \"233214fc7bb247cadbdaa1e736dbf4f26b5ab5d9169eb6a9c3f71561a51e36c2\"" Feb 13 15:28:07.494583 containerd[1438]: time="2025-02-13T15:28:07.494581848Z" level=info msg="Ensure that sandbox 233214fc7bb247cadbdaa1e736dbf4f26b5ab5d9169eb6a9c3f71561a51e36c2 in task-service has been cleanup successfully" Feb 13 15:28:07.494769 kubelet[2521]: I0213 
15:28:07.493750 2521 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="233214fc7bb247cadbdaa1e736dbf4f26b5ab5d9169eb6a9c3f71561a51e36c2" Feb 13 15:28:07.494977 containerd[1438]: time="2025-02-13T15:28:07.494815590Z" level=info msg="StopPodSandbox for \"a6226e92350381ab768784fb94d579846ab05715195002b23c3ad52b9bb59176\"" Feb 13 15:28:07.494977 containerd[1438]: time="2025-02-13T15:28:07.494908374Z" level=info msg="TearDown network for sandbox \"a6226e92350381ab768784fb94d579846ab05715195002b23c3ad52b9bb59176\" successfully" Feb 13 15:28:07.494977 containerd[1438]: time="2025-02-13T15:28:07.494918057Z" level=info msg="StopPodSandbox for \"a6226e92350381ab768784fb94d579846ab05715195002b23c3ad52b9bb59176\" returns successfully" Feb 13 15:28:07.495718 containerd[1438]: time="2025-02-13T15:28:07.495688141Z" level=info msg="TearDown network for sandbox \"233214fc7bb247cadbdaa1e736dbf4f26b5ab5d9169eb6a9c3f71561a51e36c2\" successfully" Feb 13 15:28:07.495718 containerd[1438]: time="2025-02-13T15:28:07.495716989Z" level=info msg="StopPodSandbox for \"233214fc7bb247cadbdaa1e736dbf4f26b5ab5d9169eb6a9c3f71561a51e36c2\" returns successfully" Feb 13 15:28:07.495961 containerd[1438]: time="2025-02-13T15:28:07.495938568Z" level=info msg="StopPodSandbox for \"936cf36bd18b5d423427d367f40192355a28d275d154bbc72428cc0a0db2ce21\"" Feb 13 15:28:07.496144 containerd[1438]: time="2025-02-13T15:28:07.496115695Z" level=info msg="StopPodSandbox for \"1e575dbffd3d4c245a8c3a4e13c078d1720170ccc9f1a8dfbda6639dd922a63d\"" Feb 13 15:28:07.496615 containerd[1438]: time="2025-02-13T15:28:07.496210160Z" level=info msg="TearDown network for sandbox \"1e575dbffd3d4c245a8c3a4e13c078d1720170ccc9f1a8dfbda6639dd922a63d\" successfully" Feb 13 15:28:07.496615 containerd[1438]: time="2025-02-13T15:28:07.496226324Z" level=info msg="StopPodSandbox for \"1e575dbffd3d4c245a8c3a4e13c078d1720170ccc9f1a8dfbda6639dd922a63d\" returns successfully" Feb 13 15:28:07.496615 containerd[1438]: 
time="2025-02-13T15:28:07.496321629Z" level=info msg="TearDown network for sandbox \"936cf36bd18b5d423427d367f40192355a28d275d154bbc72428cc0a0db2ce21\" successfully" Feb 13 15:28:07.496615 containerd[1438]: time="2025-02-13T15:28:07.496339954Z" level=info msg="StopPodSandbox for \"936cf36bd18b5d423427d367f40192355a28d275d154bbc72428cc0a0db2ce21\" returns successfully" Feb 13 15:28:07.496751 containerd[1438]: time="2025-02-13T15:28:07.496720975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hmqx4,Uid:d3daff69-f8cf-4771-8db4-eb9251b67560,Namespace:calico-system,Attempt:3,}" Feb 13 15:28:07.497716 kubelet[2521]: I0213 15:28:07.497620 2521 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="365686e2ea3126d4074c6dfa28c3e1ea5feabc62ece6b663740e99931c83d2ab" Feb 13 15:28:07.498576 containerd[1438]: time="2025-02-13T15:28:07.498511050Z" level=info msg="StopPodSandbox for \"2253f70f58a228177dae68e6db7242302fb163a8e7df684f97a846e3d6faf81d\"" Feb 13 15:28:07.499139 containerd[1438]: time="2025-02-13T15:28:07.498830255Z" level=info msg="StopPodSandbox for \"365686e2ea3126d4074c6dfa28c3e1ea5feabc62ece6b663740e99931c83d2ab\"" Feb 13 15:28:07.499434 containerd[1438]: time="2025-02-13T15:28:07.498891031Z" level=info msg="TearDown network for sandbox \"2253f70f58a228177dae68e6db7242302fb163a8e7df684f97a846e3d6faf81d\" successfully" Feb 13 15:28:07.499434 containerd[1438]: time="2025-02-13T15:28:07.499434896Z" level=info msg="StopPodSandbox for \"2253f70f58a228177dae68e6db7242302fb163a8e7df684f97a846e3d6faf81d\" returns successfully" Feb 13 15:28:07.499814 containerd[1438]: time="2025-02-13T15:28:07.499789030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55bbcccb65-mmhbv,Uid:2330a284-0835-4d9f-929e-909c050006b6,Namespace:calico-apiserver,Attempt:3,}" Feb 13 15:28:07.500454 kubelet[2521]: I0213 15:28:07.500430 2521 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="ebd99b89a667d6242d41d6f195e59eb6706c8195f2e4a6e0ccb977bed11e5855" Feb 13 15:28:07.500639 containerd[1438]: time="2025-02-13T15:28:07.500602886Z" level=info msg="Ensure that sandbox 365686e2ea3126d4074c6dfa28c3e1ea5feabc62ece6b663740e99931c83d2ab in task-service has been cleanup successfully" Feb 13 15:28:07.501259 containerd[1438]: time="2025-02-13T15:28:07.501224291Z" level=info msg="TearDown network for sandbox \"365686e2ea3126d4074c6dfa28c3e1ea5feabc62ece6b663740e99931c83d2ab\" successfully" Feb 13 15:28:07.501259 containerd[1438]: time="2025-02-13T15:28:07.501257259Z" level=info msg="StopPodSandbox for \"365686e2ea3126d4074c6dfa28c3e1ea5feabc62ece6b663740e99931c83d2ab\" returns successfully" Feb 13 15:28:07.501656 containerd[1438]: time="2025-02-13T15:28:07.501567622Z" level=info msg="StopPodSandbox for \"ebd99b89a667d6242d41d6f195e59eb6706c8195f2e4a6e0ccb977bed11e5855\"" Feb 13 15:28:07.501797 containerd[1438]: time="2025-02-13T15:28:07.501726864Z" level=info msg="Ensure that sandbox ebd99b89a667d6242d41d6f195e59eb6706c8195f2e4a6e0ccb977bed11e5855 in task-service has been cleanup successfully" Feb 13 15:28:07.501920 containerd[1438]: time="2025-02-13T15:28:07.501902150Z" level=info msg="TearDown network for sandbox \"ebd99b89a667d6242d41d6f195e59eb6706c8195f2e4a6e0ccb977bed11e5855\" successfully" Feb 13 15:28:07.501955 containerd[1438]: time="2025-02-13T15:28:07.501918635Z" level=info msg="StopPodSandbox for \"ebd99b89a667d6242d41d6f195e59eb6706c8195f2e4a6e0ccb977bed11e5855\" returns successfully" Feb 13 15:28:07.502311 containerd[1438]: time="2025-02-13T15:28:07.502245481Z" level=info msg="StopPodSandbox for \"39d45d65b330807fcecc7aa43e2979e258bdabd910b8af148e084f968f291a28\"" Feb 13 15:28:07.502384 containerd[1438]: time="2025-02-13T15:28:07.502340827Z" level=info msg="TearDown network for sandbox \"39d45d65b330807fcecc7aa43e2979e258bdabd910b8af148e084f968f291a28\" successfully" Feb 13 15:28:07.502384 containerd[1438]: 
time="2025-02-13T15:28:07.502355351Z" level=info msg="StopPodSandbox for \"39d45d65b330807fcecc7aa43e2979e258bdabd910b8af148e084f968f291a28\" returns successfully" Feb 13 15:28:07.502503 containerd[1438]: time="2025-02-13T15:28:07.502412806Z" level=info msg="StopPodSandbox for \"3212d16950422236b26587f9b3647a1dbfd28814e3a0abc8871bf58f07a51551\"" Feb 13 15:28:07.502503 containerd[1438]: time="2025-02-13T15:28:07.502459778Z" level=info msg="TearDown network for sandbox \"3212d16950422236b26587f9b3647a1dbfd28814e3a0abc8871bf58f07a51551\" successfully" Feb 13 15:28:07.502503 containerd[1438]: time="2025-02-13T15:28:07.502467540Z" level=info msg="StopPodSandbox for \"3212d16950422236b26587f9b3647a1dbfd28814e3a0abc8871bf58f07a51551\" returns successfully" Feb 13 15:28:07.502709 containerd[1438]: time="2025-02-13T15:28:07.502679037Z" level=info msg="StopPodSandbox for \"0bd54dba73f69ad05fdd64de844dfb9298048bde545384bf327983a09dc6b98c\"" Feb 13 15:28:07.502774 containerd[1438]: time="2025-02-13T15:28:07.502758498Z" level=info msg="TearDown network for sandbox \"0bd54dba73f69ad05fdd64de844dfb9298048bde545384bf327983a09dc6b98c\" successfully" Feb 13 15:28:07.502774 containerd[1438]: time="2025-02-13T15:28:07.502771461Z" level=info msg="StopPodSandbox for \"0bd54dba73f69ad05fdd64de844dfb9298048bde545384bf327983a09dc6b98c\" returns successfully" Feb 13 15:28:07.502897 containerd[1438]: time="2025-02-13T15:28:07.502690480Z" level=info msg="StopPodSandbox for \"9d676e713a9f9bdb0ef333af0423e3a978a265863987d14276ec5d59f17734f0\"" Feb 13 15:28:07.502897 containerd[1438]: time="2025-02-13T15:28:07.502889933Z" level=info msg="TearDown network for sandbox \"9d676e713a9f9bdb0ef333af0423e3a978a265863987d14276ec5d59f17734f0\" successfully" Feb 13 15:28:07.502897 containerd[1438]: time="2025-02-13T15:28:07.502900775Z" level=info msg="StopPodSandbox for \"9d676e713a9f9bdb0ef333af0423e3a978a265863987d14276ec5d59f17734f0\" returns successfully" Feb 13 15:28:07.503251 kubelet[2521]: E0213 
15:28:07.503226 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:07.503604 containerd[1438]: time="2025-02-13T15:28:07.503576595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bhdmb,Uid:48d6879b-40c7-4fb4-9137-f94a1e0bf631,Namespace:kube-system,Attempt:3,}" Feb 13 15:28:07.503661 containerd[1438]: time="2025-02-13T15:28:07.503602162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55bbcccb65-78qv4,Uid:db472e10-e2a5-49de-9955-0d1cf7adcfd6,Namespace:calico-apiserver,Attempt:3,}" Feb 13 15:28:07.504425 kubelet[2521]: I0213 15:28:07.504062 2521 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ebcedfe59cbd698c1faf3d81c5cddc2808250aba7f1b5b95ba461e61be017a4" Feb 13 15:28:07.504915 containerd[1438]: time="2025-02-13T15:28:07.504841650Z" level=info msg="StopPodSandbox for \"7ebcedfe59cbd698c1faf3d81c5cddc2808250aba7f1b5b95ba461e61be017a4\"" Feb 13 15:28:07.505681 containerd[1438]: time="2025-02-13T15:28:07.505488982Z" level=info msg="Ensure that sandbox 7ebcedfe59cbd698c1faf3d81c5cddc2808250aba7f1b5b95ba461e61be017a4 in task-service has been cleanup successfully" Feb 13 15:28:07.505729 containerd[1438]: time="2025-02-13T15:28:07.505705840Z" level=info msg="TearDown network for sandbox \"7ebcedfe59cbd698c1faf3d81c5cddc2808250aba7f1b5b95ba461e61be017a4\" successfully" Feb 13 15:28:07.505729 containerd[1438]: time="2025-02-13T15:28:07.505721324Z" level=info msg="StopPodSandbox for \"7ebcedfe59cbd698c1faf3d81c5cddc2808250aba7f1b5b95ba461e61be017a4\" returns successfully" Feb 13 15:28:07.510602 containerd[1438]: time="2025-02-13T15:28:07.510561208Z" level=info msg="StopPodSandbox for \"051fcd08b73edccb4ccb159d0ff5c057bc8f7b121c26a4566843e5c34094c720\"" Feb 13 15:28:07.510690 containerd[1438]: time="2025-02-13T15:28:07.510680120Z" level=info 
msg="TearDown network for sandbox \"051fcd08b73edccb4ccb159d0ff5c057bc8f7b121c26a4566843e5c34094c720\" successfully" Feb 13 15:28:07.510778 containerd[1438]: time="2025-02-13T15:28:07.510692523Z" level=info msg="StopPodSandbox for \"051fcd08b73edccb4ccb159d0ff5c057bc8f7b121c26a4566843e5c34094c720\" returns successfully" Feb 13 15:28:07.511660 containerd[1438]: time="2025-02-13T15:28:07.511630852Z" level=info msg="StopPodSandbox for \"0bbc8283fb1680fd35c9202a2ab37d2c9e7100b2f1110b5295aeee2d24e7261e\"" Feb 13 15:28:07.511731 containerd[1438]: time="2025-02-13T15:28:07.511714634Z" level=info msg="TearDown network for sandbox \"0bbc8283fb1680fd35c9202a2ab37d2c9e7100b2f1110b5295aeee2d24e7261e\" successfully" Feb 13 15:28:07.511731 containerd[1438]: time="2025-02-13T15:28:07.511727638Z" level=info msg="StopPodSandbox for \"0bbc8283fb1680fd35c9202a2ab37d2c9e7100b2f1110b5295aeee2d24e7261e\" returns successfully" Feb 13 15:28:07.511940 kubelet[2521]: E0213 15:28:07.511917 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:07.512209 containerd[1438]: time="2025-02-13T15:28:07.512182078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xrphc,Uid:be132fa9-3a1c-4777-b6f8-2618a1865453,Namespace:kube-system,Attempt:3,}" Feb 13 15:28:07.512988 kubelet[2521]: I0213 15:28:07.512884 2521 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c874b9a1f46bbd68aaf2a7ea1fe0a5aa6e80af9b640025e9e2baccd227229769" Feb 13 15:28:07.513390 containerd[1438]: time="2025-02-13T15:28:07.513366273Z" level=info msg="StopPodSandbox for \"c874b9a1f46bbd68aaf2a7ea1fe0a5aa6e80af9b640025e9e2baccd227229769\"" Feb 13 15:28:07.513668 containerd[1438]: time="2025-02-13T15:28:07.513628462Z" level=info msg="Ensure that sandbox c874b9a1f46bbd68aaf2a7ea1fe0a5aa6e80af9b640025e9e2baccd227229769 in task-service has been 
cleanup successfully" Feb 13 15:28:07.513817 containerd[1438]: time="2025-02-13T15:28:07.513798907Z" level=info msg="TearDown network for sandbox \"c874b9a1f46bbd68aaf2a7ea1fe0a5aa6e80af9b640025e9e2baccd227229769\" successfully" Feb 13 15:28:07.513852 containerd[1438]: time="2025-02-13T15:28:07.513824074Z" level=info msg="StopPodSandbox for \"c874b9a1f46bbd68aaf2a7ea1fe0a5aa6e80af9b640025e9e2baccd227229769\" returns successfully" Feb 13 15:28:07.514267 containerd[1438]: time="2025-02-13T15:28:07.514206256Z" level=info msg="StopPodSandbox for \"93372508d2f96d2502f51ec2872c102fd38e529d8ae7a472f2ade1091b6b7b5e\"" Feb 13 15:28:07.514468 containerd[1438]: time="2025-02-13T15:28:07.514405388Z" level=info msg="TearDown network for sandbox \"93372508d2f96d2502f51ec2872c102fd38e529d8ae7a472f2ade1091b6b7b5e\" successfully" Feb 13 15:28:07.514468 containerd[1438]: time="2025-02-13T15:28:07.514426274Z" level=info msg="StopPodSandbox for \"93372508d2f96d2502f51ec2872c102fd38e529d8ae7a472f2ade1091b6b7b5e\" returns successfully" Feb 13 15:28:07.514733 containerd[1438]: time="2025-02-13T15:28:07.514657535Z" level=info msg="StopPodSandbox for \"ff8fd863173aed4a1fc618a73607b9ac08318473fa88fefca838b1ac4c6441db\"" Feb 13 15:28:07.514887 containerd[1438]: time="2025-02-13T15:28:07.514864830Z" level=info msg="TearDown network for sandbox \"ff8fd863173aed4a1fc618a73607b9ac08318473fa88fefca838b1ac4c6441db\" successfully" Feb 13 15:28:07.514887 containerd[1438]: time="2025-02-13T15:28:07.514884756Z" level=info msg="StopPodSandbox for \"ff8fd863173aed4a1fc618a73607b9ac08318473fa88fefca838b1ac4c6441db\" returns successfully" Feb 13 15:28:07.515314 containerd[1438]: time="2025-02-13T15:28:07.515286702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c68d99f8f-wlrdk,Uid:b17d9265-9df2-40dd-a8b7-46383a0e17ce,Namespace:calico-system,Attempt:3,}" Feb 13 15:28:07.875649 systemd[1]: run-netns-cni\x2d70626880\x2d0049\x2db9b8\x2d1887\x2d2f871c92f30f.mount: 
Deactivated successfully. Feb 13 15:28:07.876211 systemd[1]: run-netns-cni\x2d52058d34\x2da474\x2d39d2\x2d892c\x2d487003f6a77d.mount: Deactivated successfully. Feb 13 15:28:07.876265 systemd[1]: run-netns-cni\x2d9276502a\x2dc6b1\x2dba96\x2d587d\x2da86eb9f12f0b.mount: Deactivated successfully. Feb 13 15:28:07.876309 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-365686e2ea3126d4074c6dfa28c3e1ea5feabc62ece6b663740e99931c83d2ab-shm.mount: Deactivated successfully. Feb 13 15:28:07.876358 systemd[1]: run-netns-cni\x2d8d75e90e\x2d483d\x2dedc5\x2dd8dc\x2d4bde03020b55.mount: Deactivated successfully. Feb 13 15:28:07.876399 systemd[1]: run-netns-cni\x2d978ed513\x2d6a03\x2dfbce\x2d0f3a\x2d35c4949ddf26.mount: Deactivated successfully. Feb 13 15:28:07.876452 systemd[1]: run-netns-cni\x2dc7189857\x2d9d55\x2d7382\x2d7034\x2dc27d3765815c.mount: Deactivated successfully. Feb 13 15:28:07.948173 containerd[1438]: time="2025-02-13T15:28:07.948103918Z" level=error msg="Failed to destroy network for sandbox \"9ea262909c8919e05cb7fe199a283a585036b3c7d6a8a64160d8cb41a91b3d3d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:07.951119 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9ea262909c8919e05cb7fe199a283a585036b3c7d6a8a64160d8cb41a91b3d3d-shm.mount: Deactivated successfully. 
Feb 13 15:28:07.951998 containerd[1438]: time="2025-02-13T15:28:07.951500980Z" level=error msg="encountered an error cleaning up failed sandbox \"9ea262909c8919e05cb7fe199a283a585036b3c7d6a8a64160d8cb41a91b3d3d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:07.952336 containerd[1438]: time="2025-02-13T15:28:07.952047845Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xrphc,Uid:be132fa9-3a1c-4777-b6f8-2618a1865453,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"9ea262909c8919e05cb7fe199a283a585036b3c7d6a8a64160d8cb41a91b3d3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:07.952385 kubelet[2521]: E0213 15:28:07.952349 2521 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ea262909c8919e05cb7fe199a283a585036b3c7d6a8a64160d8cb41a91b3d3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:07.952464 kubelet[2521]: E0213 15:28:07.952403 2521 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ea262909c8919e05cb7fe199a283a585036b3c7d6a8a64160d8cb41a91b3d3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-xrphc" Feb 13 15:28:07.952464 kubelet[2521]: E0213 15:28:07.952424 2521 
kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ea262909c8919e05cb7fe199a283a585036b3c7d6a8a64160d8cb41a91b3d3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-xrphc" Feb 13 15:28:07.952518 kubelet[2521]: E0213 15:28:07.952463 2521 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-xrphc_kube-system(be132fa9-3a1c-4777-b6f8-2618a1865453)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-xrphc_kube-system(be132fa9-3a1c-4777-b6f8-2618a1865453)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9ea262909c8919e05cb7fe199a283a585036b3c7d6a8a64160d8cb41a91b3d3d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-xrphc" podUID="be132fa9-3a1c-4777-b6f8-2618a1865453" Feb 13 15:28:07.958512 containerd[1438]: time="2025-02-13T15:28:07.958350677Z" level=error msg="Failed to destroy network for sandbox \"7cd8e48fe4cf11325abf60135b81eee57f7a784d7d0bcf06096fcdf01876d51b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:07.961632 containerd[1438]: time="2025-02-13T15:28:07.959307171Z" level=error msg="encountered an error cleaning up failed sandbox \"7cd8e48fe4cf11325abf60135b81eee57f7a784d7d0bcf06096fcdf01876d51b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:07.961632 containerd[1438]: time="2025-02-13T15:28:07.959373989Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55bbcccb65-mmhbv,Uid:2330a284-0835-4d9f-929e-909c050006b6,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"7cd8e48fe4cf11325abf60135b81eee57f7a784d7d0bcf06096fcdf01876d51b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:07.960914 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7cd8e48fe4cf11325abf60135b81eee57f7a784d7d0bcf06096fcdf01876d51b-shm.mount: Deactivated successfully. Feb 13 15:28:07.961833 kubelet[2521]: E0213 15:28:07.959628 2521 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cd8e48fe4cf11325abf60135b81eee57f7a784d7d0bcf06096fcdf01876d51b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:07.961833 kubelet[2521]: E0213 15:28:07.959699 2521 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cd8e48fe4cf11325abf60135b81eee57f7a784d7d0bcf06096fcdf01876d51b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55bbcccb65-mmhbv" Feb 13 15:28:07.961833 kubelet[2521]: E0213 15:28:07.959720 2521 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"7cd8e48fe4cf11325abf60135b81eee57f7a784d7d0bcf06096fcdf01876d51b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55bbcccb65-mmhbv" Feb 13 15:28:07.961920 kubelet[2521]: E0213 15:28:07.959878 2521 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-55bbcccb65-mmhbv_calico-apiserver(2330a284-0835-4d9f-929e-909c050006b6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-55bbcccb65-mmhbv_calico-apiserver(2330a284-0835-4d9f-929e-909c050006b6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7cd8e48fe4cf11325abf60135b81eee57f7a784d7d0bcf06096fcdf01876d51b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55bbcccb65-mmhbv" podUID="2330a284-0835-4d9f-929e-909c050006b6" Feb 13 15:28:07.970599 containerd[1438]: time="2025-02-13T15:28:07.970531310Z" level=error msg="Failed to destroy network for sandbox \"0f2c7fb92a9842fe2f78056fed98131e4b938b6c6d6db9d280cff119ae6753f6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:07.970953 containerd[1438]: time="2025-02-13T15:28:07.970921613Z" level=error msg="encountered an error cleaning up failed sandbox \"0f2c7fb92a9842fe2f78056fed98131e4b938b6c6d6db9d280cff119ae6753f6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:07.971014 
containerd[1438]: time="2025-02-13T15:28:07.970990992Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55bbcccb65-78qv4,Uid:db472e10-e2a5-49de-9955-0d1cf7adcfd6,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"0f2c7fb92a9842fe2f78056fed98131e4b938b6c6d6db9d280cff119ae6753f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:07.972654 kubelet[2521]: E0213 15:28:07.971975 2521 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f2c7fb92a9842fe2f78056fed98131e4b938b6c6d6db9d280cff119ae6753f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:07.972654 kubelet[2521]: E0213 15:28:07.972043 2521 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f2c7fb92a9842fe2f78056fed98131e4b938b6c6d6db9d280cff119ae6753f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55bbcccb65-78qv4" Feb 13 15:28:07.972654 kubelet[2521]: E0213 15:28:07.972062 2521 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f2c7fb92a9842fe2f78056fed98131e4b938b6c6d6db9d280cff119ae6753f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55bbcccb65-78qv4" Feb 13 
15:28:07.972802 kubelet[2521]: E0213 15:28:07.972108 2521 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-55bbcccb65-78qv4_calico-apiserver(db472e10-e2a5-49de-9955-0d1cf7adcfd6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-55bbcccb65-78qv4_calico-apiserver(db472e10-e2a5-49de-9955-0d1cf7adcfd6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0f2c7fb92a9842fe2f78056fed98131e4b938b6c6d6db9d280cff119ae6753f6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55bbcccb65-78qv4" podUID="db472e10-e2a5-49de-9955-0d1cf7adcfd6" Feb 13 15:28:07.975233 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0f2c7fb92a9842fe2f78056fed98131e4b938b6c6d6db9d280cff119ae6753f6-shm.mount: Deactivated successfully. 
Feb 13 15:28:07.982786 containerd[1438]: time="2025-02-13T15:28:07.982641804Z" level=error msg="Failed to destroy network for sandbox \"b2bddd7fb16d5ae0064f9701b94ee6a0427769ed88e47c2918d43be64eaad657\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:07.983412 containerd[1438]: time="2025-02-13T15:28:07.983369637Z" level=error msg="encountered an error cleaning up failed sandbox \"b2bddd7fb16d5ae0064f9701b94ee6a0427769ed88e47c2918d43be64eaad657\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:07.984369 containerd[1438]: time="2025-02-13T15:28:07.984247310Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bhdmb,Uid:48d6879b-40c7-4fb4-9137-f94a1e0bf631,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"b2bddd7fb16d5ae0064f9701b94ee6a0427769ed88e47c2918d43be64eaad657\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:07.984718 kubelet[2521]: E0213 15:28:07.984674 2521 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2bddd7fb16d5ae0064f9701b94ee6a0427769ed88e47c2918d43be64eaad657\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:07.984791 kubelet[2521]: E0213 15:28:07.984735 2521 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"b2bddd7fb16d5ae0064f9701b94ee6a0427769ed88e47c2918d43be64eaad657\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-bhdmb" Feb 13 15:28:07.984791 kubelet[2521]: E0213 15:28:07.984759 2521 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2bddd7fb16d5ae0064f9701b94ee6a0427769ed88e47c2918d43be64eaad657\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-bhdmb" Feb 13 15:28:07.984843 kubelet[2521]: E0213 15:28:07.984816 2521 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-bhdmb_kube-system(48d6879b-40c7-4fb4-9137-f94a1e0bf631)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-bhdmb_kube-system(48d6879b-40c7-4fb4-9137-f94a1e0bf631)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b2bddd7fb16d5ae0064f9701b94ee6a0427769ed88e47c2918d43be64eaad657\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-bhdmb" podUID="48d6879b-40c7-4fb4-9137-f94a1e0bf631" Feb 13 15:28:07.985642 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b2bddd7fb16d5ae0064f9701b94ee6a0427769ed88e47c2918d43be64eaad657-shm.mount: Deactivated successfully. 
Feb 13 15:28:07.992434 containerd[1438]: time="2025-02-13T15:28:07.992373106Z" level=error msg="Failed to destroy network for sandbox \"459eef6c4c6e99a8aa588ac0d3d9efe223669135a0edbec7c0d02674b0ed92fa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:07.992800 containerd[1438]: time="2025-02-13T15:28:07.992757568Z" level=error msg="encountered an error cleaning up failed sandbox \"459eef6c4c6e99a8aa588ac0d3d9efe223669135a0edbec7c0d02674b0ed92fa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:07.992842 containerd[1438]: time="2025-02-13T15:28:07.992821785Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hmqx4,Uid:d3daff69-f8cf-4771-8db4-eb9251b67560,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"459eef6c4c6e99a8aa588ac0d3d9efe223669135a0edbec7c0d02674b0ed92fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:07.993054 kubelet[2521]: E0213 15:28:07.993014 2521 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"459eef6c4c6e99a8aa588ac0d3d9efe223669135a0edbec7c0d02674b0ed92fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:07.993114 kubelet[2521]: E0213 15:28:07.993084 2521 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"459eef6c4c6e99a8aa588ac0d3d9efe223669135a0edbec7c0d02674b0ed92fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hmqx4" Feb 13 15:28:07.993154 kubelet[2521]: E0213 15:28:07.993110 2521 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"459eef6c4c6e99a8aa588ac0d3d9efe223669135a0edbec7c0d02674b0ed92fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hmqx4" Feb 13 15:28:07.993176 kubelet[2521]: E0213 15:28:07.993151 2521 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hmqx4_calico-system(d3daff69-f8cf-4771-8db4-eb9251b67560)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hmqx4_calico-system(d3daff69-f8cf-4771-8db4-eb9251b67560)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"459eef6c4c6e99a8aa588ac0d3d9efe223669135a0edbec7c0d02674b0ed92fa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hmqx4" podUID="d3daff69-f8cf-4771-8db4-eb9251b67560" Feb 13 15:28:07.995857 containerd[1438]: time="2025-02-13T15:28:07.995789933Z" level=error msg="Failed to destroy network for sandbox \"35e63fa6e367659d0929bdff839d4d1b6c5f686fc9f977877b19d16edee3e4dd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 
15:28:07.996661 containerd[1438]: time="2025-02-13T15:28:07.996140626Z" level=error msg="encountered an error cleaning up failed sandbox \"35e63fa6e367659d0929bdff839d4d1b6c5f686fc9f977877b19d16edee3e4dd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:07.996839 containerd[1438]: time="2025-02-13T15:28:07.996806883Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c68d99f8f-wlrdk,Uid:b17d9265-9df2-40dd-a8b7-46383a0e17ce,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"35e63fa6e367659d0929bdff839d4d1b6c5f686fc9f977877b19d16edee3e4dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:07.997028 kubelet[2521]: E0213 15:28:07.996994 2521 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35e63fa6e367659d0929bdff839d4d1b6c5f686fc9f977877b19d16edee3e4dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:07.997094 kubelet[2521]: E0213 15:28:07.997049 2521 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35e63fa6e367659d0929bdff839d4d1b6c5f686fc9f977877b19d16edee3e4dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6c68d99f8f-wlrdk" Feb 13 15:28:07.997198 kubelet[2521]: E0213 
15:28:07.997083 2521 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35e63fa6e367659d0929bdff839d4d1b6c5f686fc9f977877b19d16edee3e4dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6c68d99f8f-wlrdk" Feb 13 15:28:07.997282 kubelet[2521]: E0213 15:28:07.997221 2521 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6c68d99f8f-wlrdk_calico-system(b17d9265-9df2-40dd-a8b7-46383a0e17ce)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6c68d99f8f-wlrdk_calico-system(b17d9265-9df2-40dd-a8b7-46383a0e17ce)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"35e63fa6e367659d0929bdff839d4d1b6c5f686fc9f977877b19d16edee3e4dd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6c68d99f8f-wlrdk" podUID="b17d9265-9df2-40dd-a8b7-46383a0e17ce" Feb 13 15:28:08.524910 kubelet[2521]: I0213 15:28:08.524810 2521 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7cd8e48fe4cf11325abf60135b81eee57f7a784d7d0bcf06096fcdf01876d51b" Feb 13 15:28:08.526617 containerd[1438]: time="2025-02-13T15:28:08.526585315Z" level=info msg="StopPodSandbox for \"7cd8e48fe4cf11325abf60135b81eee57f7a784d7d0bcf06096fcdf01876d51b\"" Feb 13 15:28:08.528690 containerd[1438]: time="2025-02-13T15:28:08.528608351Z" level=info msg="Ensure that sandbox 7cd8e48fe4cf11325abf60135b81eee57f7a784d7d0bcf06096fcdf01876d51b in task-service has been cleanup successfully" Feb 13 15:28:08.528870 containerd[1438]: 
time="2025-02-13T15:28:08.528839530Z" level=info msg="TearDown network for sandbox \"7cd8e48fe4cf11325abf60135b81eee57f7a784d7d0bcf06096fcdf01876d51b\" successfully" Feb 13 15:28:08.528906 containerd[1438]: time="2025-02-13T15:28:08.528871418Z" level=info msg="StopPodSandbox for \"7cd8e48fe4cf11325abf60135b81eee57f7a784d7d0bcf06096fcdf01876d51b\" returns successfully" Feb 13 15:28:08.551504 containerd[1438]: time="2025-02-13T15:28:08.551456822Z" level=info msg="StopPodSandbox for \"233214fc7bb247cadbdaa1e736dbf4f26b5ab5d9169eb6a9c3f71561a51e36c2\"" Feb 13 15:28:08.551738 containerd[1438]: time="2025-02-13T15:28:08.551649951Z" level=info msg="TearDown network for sandbox \"233214fc7bb247cadbdaa1e736dbf4f26b5ab5d9169eb6a9c3f71561a51e36c2\" successfully" Feb 13 15:28:08.551738 containerd[1438]: time="2025-02-13T15:28:08.551704845Z" level=info msg="StopPodSandbox for \"233214fc7bb247cadbdaa1e736dbf4f26b5ab5d9169eb6a9c3f71561a51e36c2\" returns successfully" Feb 13 15:28:08.552441 containerd[1438]: time="2025-02-13T15:28:08.552414906Z" level=info msg="StopPodSandbox for \"936cf36bd18b5d423427d367f40192355a28d275d154bbc72428cc0a0db2ce21\"" Feb 13 15:28:08.552512 containerd[1438]: time="2025-02-13T15:28:08.552497327Z" level=info msg="TearDown network for sandbox \"936cf36bd18b5d423427d367f40192355a28d275d154bbc72428cc0a0db2ce21\" successfully" Feb 13 15:28:08.552545 containerd[1438]: time="2025-02-13T15:28:08.552510731Z" level=info msg="StopPodSandbox for \"936cf36bd18b5d423427d367f40192355a28d275d154bbc72428cc0a0db2ce21\" returns successfully" Feb 13 15:28:08.553437 containerd[1438]: time="2025-02-13T15:28:08.553409560Z" level=info msg="StopPodSandbox for \"2253f70f58a228177dae68e6db7242302fb163a8e7df684f97a846e3d6faf81d\"" Feb 13 15:28:08.553510 containerd[1438]: time="2025-02-13T15:28:08.553492701Z" level=info msg="TearDown network for sandbox \"2253f70f58a228177dae68e6db7242302fb163a8e7df684f97a846e3d6faf81d\" successfully" Feb 13 15:28:08.553510 containerd[1438]: 
time="2025-02-13T15:28:08.553503544Z" level=info msg="StopPodSandbox for \"2253f70f58a228177dae68e6db7242302fb163a8e7df684f97a846e3d6faf81d\" returns successfully"
Feb 13 15:28:08.553973 containerd[1438]: time="2025-02-13T15:28:08.553899245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55bbcccb65-mmhbv,Uid:2330a284-0835-4d9f-929e-909c050006b6,Namespace:calico-apiserver,Attempt:4,}"
Feb 13 15:28:08.555182 kubelet[2521]: I0213 15:28:08.555149 2521 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f2c7fb92a9842fe2f78056fed98131e4b938b6c6d6db9d280cff119ae6753f6"
Feb 13 15:28:08.556574 containerd[1438]: time="2025-02-13T15:28:08.556542279Z" level=info msg="StopPodSandbox for \"0f2c7fb92a9842fe2f78056fed98131e4b938b6c6d6db9d280cff119ae6753f6\""
Feb 13 15:28:08.556711 containerd[1438]: time="2025-02-13T15:28:08.556692038Z" level=info msg="Ensure that sandbox 0f2c7fb92a9842fe2f78056fed98131e4b938b6c6d6db9d280cff119ae6753f6 in task-service has been cleanup successfully"
Feb 13 15:28:08.556878 containerd[1438]: time="2025-02-13T15:28:08.556860641Z" level=info msg="TearDown network for sandbox \"0f2c7fb92a9842fe2f78056fed98131e4b938b6c6d6db9d280cff119ae6753f6\" successfully"
Feb 13 15:28:08.556919 containerd[1438]: time="2025-02-13T15:28:08.556878685Z" level=info msg="StopPodSandbox for \"0f2c7fb92a9842fe2f78056fed98131e4b938b6c6d6db9d280cff119ae6753f6\" returns successfully"
Feb 13 15:28:08.558817 kubelet[2521]: I0213 15:28:08.558779 2521 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ea262909c8919e05cb7fe199a283a585036b3c7d6a8a64160d8cb41a91b3d3d"
Feb 13 15:28:08.559206 containerd[1438]: time="2025-02-13T15:28:08.557786837Z" level=info msg="StopPodSandbox for \"ebd99b89a667d6242d41d6f195e59eb6706c8195f2e4a6e0ccb977bed11e5855\""
Feb 13 15:28:08.559423 containerd[1438]: time="2025-02-13T15:28:08.559289100Z" level=info msg="StopPodSandbox for \"9ea262909c8919e05cb7fe199a283a585036b3c7d6a8a64160d8cb41a91b3d3d\""
Feb 13 15:28:08.559566 containerd[1438]: time="2025-02-13T15:28:08.559485230Z" level=info msg="Ensure that sandbox 9ea262909c8919e05cb7fe199a283a585036b3c7d6a8a64160d8cb41a91b3d3d in task-service has been cleanup successfully"
Feb 13 15:28:08.559566 containerd[1438]: time="2025-02-13T15:28:08.559524280Z" level=info msg="TearDown network for sandbox \"ebd99b89a667d6242d41d6f195e59eb6706c8195f2e4a6e0ccb977bed11e5855\" successfully"
Feb 13 15:28:08.559566 containerd[1438]: time="2025-02-13T15:28:08.559543645Z" level=info msg="StopPodSandbox for \"ebd99b89a667d6242d41d6f195e59eb6706c8195f2e4a6e0ccb977bed11e5855\" returns successfully"
Feb 13 15:28:08.560026 containerd[1438]: time="2025-02-13T15:28:08.559653673Z" level=info msg="TearDown network for sandbox \"9ea262909c8919e05cb7fe199a283a585036b3c7d6a8a64160d8cb41a91b3d3d\" successfully"
Feb 13 15:28:08.560026 containerd[1438]: time="2025-02-13T15:28:08.559668797Z" level=info msg="StopPodSandbox for \"9ea262909c8919e05cb7fe199a283a585036b3c7d6a8a64160d8cb41a91b3d3d\" returns successfully"
Feb 13 15:28:08.560311 containerd[1438]: time="2025-02-13T15:28:08.560091225Z" level=info msg="StopPodSandbox for \"7ebcedfe59cbd698c1faf3d81c5cddc2808250aba7f1b5b95ba461e61be017a4\""
Feb 13 15:28:08.560311 containerd[1438]: time="2025-02-13T15:28:08.560177327Z" level=info msg="TearDown network for sandbox \"7ebcedfe59cbd698c1faf3d81c5cddc2808250aba7f1b5b95ba461e61be017a4\" successfully"
Feb 13 15:28:08.560311 containerd[1438]: time="2025-02-13T15:28:08.560186449Z" level=info msg="StopPodSandbox for \"7ebcedfe59cbd698c1faf3d81c5cddc2808250aba7f1b5b95ba461e61be017a4\" returns successfully"
Feb 13 15:28:08.561104 containerd[1438]: time="2025-02-13T15:28:08.561029505Z" level=info msg="StopPodSandbox for \"39d45d65b330807fcecc7aa43e2979e258bdabd910b8af148e084f968f291a28\""
Feb 13 15:28:08.561192 containerd[1438]: time="2025-02-13T15:28:08.561130210Z" level=info msg="TearDown network for sandbox \"39d45d65b330807fcecc7aa43e2979e258bdabd910b8af148e084f968f291a28\" successfully"
Feb 13 15:28:08.561192 containerd[1438]: time="2025-02-13T15:28:08.561142973Z" level=info msg="StopPodSandbox for \"39d45d65b330807fcecc7aa43e2979e258bdabd910b8af148e084f968f291a28\" returns successfully"
Feb 13 15:28:08.561281 containerd[1438]: time="2025-02-13T15:28:08.561255522Z" level=info msg="StopPodSandbox for \"051fcd08b73edccb4ccb159d0ff5c057bc8f7b121c26a4566843e5c34094c720\""
Feb 13 15:28:08.561401 containerd[1438]: time="2025-02-13T15:28:08.561324700Z" level=info msg="TearDown network for sandbox \"051fcd08b73edccb4ccb159d0ff5c057bc8f7b121c26a4566843e5c34094c720\" successfully"
Feb 13 15:28:08.561401 containerd[1438]: time="2025-02-13T15:28:08.561338783Z" level=info msg="StopPodSandbox for \"051fcd08b73edccb4ccb159d0ff5c057bc8f7b121c26a4566843e5c34094c720\" returns successfully"
Feb 13 15:28:08.563141 containerd[1438]: time="2025-02-13T15:28:08.563111076Z" level=info msg="StopPodSandbox for \"0bd54dba73f69ad05fdd64de844dfb9298048bde545384bf327983a09dc6b98c\""
Feb 13 15:28:08.563286 containerd[1438]: time="2025-02-13T15:28:08.563190816Z" level=info msg="TearDown network for sandbox \"0bd54dba73f69ad05fdd64de844dfb9298048bde545384bf327983a09dc6b98c\" successfully"
Feb 13 15:28:08.563286 containerd[1438]: time="2025-02-13T15:28:08.563200499Z" level=info msg="StopPodSandbox for \"0bd54dba73f69ad05fdd64de844dfb9298048bde545384bf327983a09dc6b98c\" returns successfully"
Feb 13 15:28:08.563286 containerd[1438]: time="2025-02-13T15:28:08.563239108Z" level=info msg="StopPodSandbox for \"0bbc8283fb1680fd35c9202a2ab37d2c9e7100b2f1110b5295aeee2d24e7261e\""
Feb 13 15:28:08.563286 containerd[1438]: time="2025-02-13T15:28:08.563286400Z" level=info msg="TearDown network for sandbox \"0bbc8283fb1680fd35c9202a2ab37d2c9e7100b2f1110b5295aeee2d24e7261e\" successfully"
Feb 13 15:28:08.563529 containerd[1438]: time="2025-02-13T15:28:08.563294563Z" level=info msg="StopPodSandbox for \"0bbc8283fb1680fd35c9202a2ab37d2c9e7100b2f1110b5295aeee2d24e7261e\" returns successfully"
Feb 13 15:28:08.563568 kubelet[2521]: E0213 15:28:08.563496 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:28:08.564268 containerd[1438]: time="2025-02-13T15:28:08.564237803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55bbcccb65-78qv4,Uid:db472e10-e2a5-49de-9955-0d1cf7adcfd6,Namespace:calico-apiserver,Attempt:4,}"
Feb 13 15:28:08.564361 containerd[1438]: time="2025-02-13T15:28:08.564339749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xrphc,Uid:be132fa9-3a1c-4777-b6f8-2618a1865453,Namespace:kube-system,Attempt:4,}"
Feb 13 15:28:08.564458 kubelet[2521]: I0213 15:28:08.564434 2521 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35e63fa6e367659d0929bdff839d4d1b6c5f686fc9f977877b19d16edee3e4dd"
Feb 13 15:28:08.565442 containerd[1438]: time="2025-02-13T15:28:08.565396179Z" level=info msg="StopPodSandbox for \"35e63fa6e367659d0929bdff839d4d1b6c5f686fc9f977877b19d16edee3e4dd\""
Feb 13 15:28:08.565569 containerd[1438]: time="2025-02-13T15:28:08.565551658Z" level=info msg="Ensure that sandbox 35e63fa6e367659d0929bdff839d4d1b6c5f686fc9f977877b19d16edee3e4dd in task-service has been cleanup successfully"
Feb 13 15:28:08.570478 kubelet[2521]: I0213 15:28:08.570448 2521 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2bddd7fb16d5ae0064f9701b94ee6a0427769ed88e47c2918d43be64eaad657"
Feb 13 15:28:08.571212 containerd[1438]: time="2025-02-13T15:28:08.570933912Z" level=info msg="StopPodSandbox for \"b2bddd7fb16d5ae0064f9701b94ee6a0427769ed88e47c2918d43be64eaad657\""
Feb 13 15:28:08.571212 containerd[1438]: time="2025-02-13T15:28:08.571109757Z" level=info msg="Ensure that sandbox b2bddd7fb16d5ae0064f9701b94ee6a0427769ed88e47c2918d43be64eaad657 in task-service has been cleanup successfully"
Feb 13 15:28:08.571390 containerd[1438]: time="2025-02-13T15:28:08.571295724Z" level=info msg="TearDown network for sandbox \"b2bddd7fb16d5ae0064f9701b94ee6a0427769ed88e47c2918d43be64eaad657\" successfully"
Feb 13 15:28:08.571390 containerd[1438]: time="2025-02-13T15:28:08.571314329Z" level=info msg="StopPodSandbox for \"b2bddd7fb16d5ae0064f9701b94ee6a0427769ed88e47c2918d43be64eaad657\" returns successfully"
Feb 13 15:28:08.571920 containerd[1438]: time="2025-02-13T15:28:08.571878233Z" level=info msg="StopPodSandbox for \"365686e2ea3126d4074c6dfa28c3e1ea5feabc62ece6b663740e99931c83d2ab\""
Feb 13 15:28:08.572572 containerd[1438]: time="2025-02-13T15:28:08.572545043Z" level=info msg="TearDown network for sandbox \"365686e2ea3126d4074c6dfa28c3e1ea5feabc62ece6b663740e99931c83d2ab\" successfully"
Feb 13 15:28:08.572572 containerd[1438]: time="2025-02-13T15:28:08.572570330Z" level=info msg="StopPodSandbox for \"365686e2ea3126d4074c6dfa28c3e1ea5feabc62ece6b663740e99931c83d2ab\" returns successfully"
Feb 13 15:28:08.574335 containerd[1438]: time="2025-02-13T15:28:08.574307693Z" level=info msg="StopPodSandbox for \"3212d16950422236b26587f9b3647a1dbfd28814e3a0abc8871bf58f07a51551\""
Feb 13 15:28:08.574407 containerd[1438]: time="2025-02-13T15:28:08.574386193Z" level=info msg="TearDown network for sandbox \"3212d16950422236b26587f9b3647a1dbfd28814e3a0abc8871bf58f07a51551\" successfully"
Feb 13 15:28:08.574407 containerd[1438]: time="2025-02-13T15:28:08.574396115Z" level=info msg="StopPodSandbox for \"3212d16950422236b26587f9b3647a1dbfd28814e3a0abc8871bf58f07a51551\" returns successfully"
Feb 13 15:28:08.576019 containerd[1438]: time="2025-02-13T15:28:08.575770026Z" level=info msg="StopPodSandbox for \"9d676e713a9f9bdb0ef333af0423e3a978a265863987d14276ec5d59f17734f0\""
Feb 13 15:28:08.576019 containerd[1438]: time="2025-02-13T15:28:08.575853487Z" level=info msg="TearDown network for sandbox \"9d676e713a9f9bdb0ef333af0423e3a978a265863987d14276ec5d59f17734f0\" successfully"
Feb 13 15:28:08.576019 containerd[1438]: time="2025-02-13T15:28:08.575863130Z" level=info msg="StopPodSandbox for \"9d676e713a9f9bdb0ef333af0423e3a978a265863987d14276ec5d59f17734f0\" returns successfully"
Feb 13 15:28:08.576170 kubelet[2521]: E0213 15:28:08.576109 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:28:08.576170 kubelet[2521]: I0213 15:28:08.576118 2521 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="459eef6c4c6e99a8aa588ac0d3d9efe223669135a0edbec7c0d02674b0ed92fa"
Feb 13 15:28:08.576909 containerd[1438]: time="2025-02-13T15:28:08.576877028Z" level=info msg="StopPodSandbox for \"459eef6c4c6e99a8aa588ac0d3d9efe223669135a0edbec7c0d02674b0ed92fa\""
Feb 13 15:28:08.576962 containerd[1438]: time="2025-02-13T15:28:08.576915958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bhdmb,Uid:48d6879b-40c7-4fb4-9137-f94a1e0bf631,Namespace:kube-system,Attempt:4,}"
Feb 13 15:28:08.577461 containerd[1438]: time="2025-02-13T15:28:08.577421087Z" level=info msg="Ensure that sandbox 459eef6c4c6e99a8aa588ac0d3d9efe223669135a0edbec7c0d02674b0ed92fa in task-service has been cleanup successfully"
Feb 13 15:28:08.577752 containerd[1438]: time="2025-02-13T15:28:08.577730446Z" level=info msg="TearDown network for sandbox \"459eef6c4c6e99a8aa588ac0d3d9efe223669135a0edbec7c0d02674b0ed92fa\" successfully"
Feb 13 15:28:08.577752 containerd[1438]: time="2025-02-13T15:28:08.577751012Z" level=info msg="StopPodSandbox for \"459eef6c4c6e99a8aa588ac0d3d9efe223669135a0edbec7c0d02674b0ed92fa\" returns successfully"
Feb 13 15:28:08.578411 containerd[1438]: time="2025-02-13T15:28:08.578358527Z" level=info msg="StopPodSandbox for \"5bcbbbaf93ca2f8c82394fbd2b1b82ed092a9669f081bf4b7be356296412d245\""
Feb 13 15:28:08.578478 containerd[1438]: time="2025-02-13T15:28:08.578445509Z" level=info msg="TearDown network for sandbox \"5bcbbbaf93ca2f8c82394fbd2b1b82ed092a9669f081bf4b7be356296412d245\" successfully"
Feb 13 15:28:08.578478 containerd[1438]: time="2025-02-13T15:28:08.578456432Z" level=info msg="StopPodSandbox for \"5bcbbbaf93ca2f8c82394fbd2b1b82ed092a9669f081bf4b7be356296412d245\" returns successfully"
Feb 13 15:28:08.578793 containerd[1438]: time="2025-02-13T15:28:08.578760509Z" level=info msg="StopPodSandbox for \"a6226e92350381ab768784fb94d579846ab05715195002b23c3ad52b9bb59176\""
Feb 13 15:28:08.578906 containerd[1438]: time="2025-02-13T15:28:08.578850092Z" level=info msg="TearDown network for sandbox \"a6226e92350381ab768784fb94d579846ab05715195002b23c3ad52b9bb59176\" successfully"
Feb 13 15:28:08.578906 containerd[1438]: time="2025-02-13T15:28:08.578860415Z" level=info msg="StopPodSandbox for \"a6226e92350381ab768784fb94d579846ab05715195002b23c3ad52b9bb59176\" returns successfully"
Feb 13 15:28:08.579148 containerd[1438]: time="2025-02-13T15:28:08.579125282Z" level=info msg="StopPodSandbox for \"1e575dbffd3d4c245a8c3a4e13c078d1720170ccc9f1a8dfbda6639dd922a63d\""
Feb 13 15:28:08.579216 containerd[1438]: time="2025-02-13T15:28:08.579197461Z" level=info msg="TearDown network for sandbox \"1e575dbffd3d4c245a8c3a4e13c078d1720170ccc9f1a8dfbda6639dd922a63d\" successfully"
Feb 13 15:28:08.579247 containerd[1438]: time="2025-02-13T15:28:08.579212985Z" level=info msg="StopPodSandbox for \"1e575dbffd3d4c245a8c3a4e13c078d1720170ccc9f1a8dfbda6639dd922a63d\" returns successfully"
Feb 13 15:28:08.580151 containerd[1438]: time="2025-02-13T15:28:08.580060561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hmqx4,Uid:d3daff69-f8cf-4771-8db4-eb9251b67560,Namespace:calico-system,Attempt:4,}"
Feb 13 15:28:08.665137 containerd[1438]: time="2025-02-13T15:28:08.665097821Z" level=info msg="TearDown network for sandbox \"35e63fa6e367659d0929bdff839d4d1b6c5f686fc9f977877b19d16edee3e4dd\" successfully"
Feb 13 15:28:08.665894 containerd[1438]: time="2025-02-13T15:28:08.665323438Z" level=info msg="StopPodSandbox for \"35e63fa6e367659d0929bdff839d4d1b6c5f686fc9f977877b19d16edee3e4dd\" returns successfully"
Feb 13 15:28:08.667162 containerd[1438]: time="2025-02-13T15:28:08.667018871Z" level=info msg="StopPodSandbox for \"c874b9a1f46bbd68aaf2a7ea1fe0a5aa6e80af9b640025e9e2baccd227229769\""
Feb 13 15:28:08.667162 containerd[1438]: time="2025-02-13T15:28:08.667126338Z" level=info msg="TearDown network for sandbox \"c874b9a1f46bbd68aaf2a7ea1fe0a5aa6e80af9b640025e9e2baccd227229769\" successfully"
Feb 13 15:28:08.667162 containerd[1438]: time="2025-02-13T15:28:08.667136981Z" level=info msg="StopPodSandbox for \"c874b9a1f46bbd68aaf2a7ea1fe0a5aa6e80af9b640025e9e2baccd227229769\" returns successfully"
Feb 13 15:28:08.668051 containerd[1438]: time="2025-02-13T15:28:08.668029329Z" level=info msg="StopPodSandbox for \"93372508d2f96d2502f51ec2872c102fd38e529d8ae7a472f2ade1091b6b7b5e\""
Feb 13 15:28:08.668476 containerd[1438]: time="2025-02-13T15:28:08.668379098Z" level=info msg="TearDown network for sandbox \"93372508d2f96d2502f51ec2872c102fd38e529d8ae7a472f2ade1091b6b7b5e\" successfully"
Feb 13 15:28:08.668476 containerd[1438]: time="2025-02-13T15:28:08.668398543Z" level=info msg="StopPodSandbox for \"93372508d2f96d2502f51ec2872c102fd38e529d8ae7a472f2ade1091b6b7b5e\" returns successfully"
Feb 13 15:28:08.668890 containerd[1438]: time="2025-02-13T15:28:08.668674773Z" level=info msg="StopPodSandbox for \"ff8fd863173aed4a1fc618a73607b9ac08318473fa88fefca838b1ac4c6441db\""
Feb 13 15:28:08.668890 containerd[1438]: time="2025-02-13T15:28:08.668763756Z" level=info msg="TearDown network for sandbox \"ff8fd863173aed4a1fc618a73607b9ac08318473fa88fefca838b1ac4c6441db\" successfully"
Feb 13 15:28:08.668890 containerd[1438]: time="2025-02-13T15:28:08.668774159Z" level=info msg="StopPodSandbox for \"ff8fd863173aed4a1fc618a73607b9ac08318473fa88fefca838b1ac4c6441db\" returns successfully"
Feb 13 15:28:08.669556 containerd[1438]: time="2025-02-13T15:28:08.669528991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c68d99f8f-wlrdk,Uid:b17d9265-9df2-40dd-a8b7-46383a0e17ce,Namespace:calico-system,Attempt:4,}"
Feb 13 15:28:08.873476 systemd[1]: run-netns-cni\x2dba841a4a\x2ddafe\x2d4b55\x2df5dd\x2de795064f41ee.mount: Deactivated successfully.
Feb 13 15:28:08.873578 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-35e63fa6e367659d0929bdff839d4d1b6c5f686fc9f977877b19d16edee3e4dd-shm.mount: Deactivated successfully.
Feb 13 15:28:08.873630 systemd[1]: run-netns-cni\x2d3f0fe814\x2dd8a7\x2d904e\x2d2503\x2d5dc6a7b38ce2.mount: Deactivated successfully.
Feb 13 15:28:08.873674 systemd[1]: run-netns-cni\x2d6f7ec42a\x2d451b\x2d1c57\x2dc84e\x2d00789fe05472.mount: Deactivated successfully.
Feb 13 15:28:08.873722 systemd[1]: run-netns-cni\x2dff1c9762\x2d8baa\x2dca46\x2d2657\x2d4e9a33822351.mount: Deactivated successfully.
Feb 13 15:28:08.873762 systemd[1]: run-netns-cni\x2d0c2b0f3c\x2dc89f\x2d1bc9\x2dfdf3\x2d22a80c4e2ca5.mount: Deactivated successfully.
Feb 13 15:28:08.873803 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-459eef6c4c6e99a8aa588ac0d3d9efe223669135a0edbec7c0d02674b0ed92fa-shm.mount: Deactivated successfully.
Feb 13 15:28:08.873847 systemd[1]: run-netns-cni\x2ddd4b84ec\x2d3740\x2d381b\x2d3b51\x2db228c27c9c82.mount: Deactivated successfully.
Feb 13 15:28:08.910857 containerd[1438]: time="2025-02-13T15:28:08.910722419Z" level=error msg="Failed to destroy network for sandbox \"9dd50c1169bff60764c79272474e7ca909a0107af7a189c3cbeed0f7c50a856c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:28:08.912008 containerd[1438]: time="2025-02-13T15:28:08.911961576Z" level=error msg="encountered an error cleaning up failed sandbox \"9dd50c1169bff60764c79272474e7ca909a0107af7a189c3cbeed0f7c50a856c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:28:08.913179 containerd[1438]: time="2025-02-13T15:28:08.912571611Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55bbcccb65-mmhbv,Uid:2330a284-0835-4d9f-929e-909c050006b6,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"9dd50c1169bff60764c79272474e7ca909a0107af7a189c3cbeed0f7c50a856c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:28:08.913669 kubelet[2521]: E0213 15:28:08.913526 2521 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9dd50c1169bff60764c79272474e7ca909a0107af7a189c3cbeed0f7c50a856c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:28:08.913669 kubelet[2521]: E0213 15:28:08.913596 2521 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9dd50c1169bff60764c79272474e7ca909a0107af7a189c3cbeed0f7c50a856c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55bbcccb65-mmhbv"
Feb 13 15:28:08.913669 kubelet[2521]: E0213 15:28:08.913617 2521 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9dd50c1169bff60764c79272474e7ca909a0107af7a189c3cbeed0f7c50a856c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55bbcccb65-mmhbv"
Feb 13 15:28:08.913854 kubelet[2521]: E0213 15:28:08.913655 2521 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-55bbcccb65-mmhbv_calico-apiserver(2330a284-0835-4d9f-929e-909c050006b6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-55bbcccb65-mmhbv_calico-apiserver(2330a284-0835-4d9f-929e-909c050006b6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9dd50c1169bff60764c79272474e7ca909a0107af7a189c3cbeed0f7c50a856c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55bbcccb65-mmhbv" podUID="2330a284-0835-4d9f-929e-909c050006b6"
Feb 13 15:28:08.913731 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9dd50c1169bff60764c79272474e7ca909a0107af7a189c3cbeed0f7c50a856c-shm.mount: Deactivated successfully.
Feb 13 15:28:08.934061 containerd[1438]: time="2025-02-13T15:28:08.933986156Z" level=error msg="Failed to destroy network for sandbox \"d0c2eb6f93193316e44af9d4dbdeb228109b616fedaa816c6c35c6dd7186ac09\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:28:08.937007 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d0c2eb6f93193316e44af9d4dbdeb228109b616fedaa816c6c35c6dd7186ac09-shm.mount: Deactivated successfully.
Feb 13 15:28:08.939284 containerd[1438]: time="2025-02-13T15:28:08.939064132Z" level=error msg="encountered an error cleaning up failed sandbox \"d0c2eb6f93193316e44af9d4dbdeb228109b616fedaa816c6c35c6dd7186ac09\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:28:08.939611 containerd[1438]: time="2025-02-13T15:28:08.939572701Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55bbcccb65-78qv4,Uid:db472e10-e2a5-49de-9955-0d1cf7adcfd6,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"d0c2eb6f93193316e44af9d4dbdeb228109b616fedaa816c6c35c6dd7186ac09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:28:08.940416 kubelet[2521]: E0213 15:28:08.940363 2521 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0c2eb6f93193316e44af9d4dbdeb228109b616fedaa816c6c35c6dd7186ac09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:28:08.940599 kubelet[2521]: E0213 15:28:08.940452 2521 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0c2eb6f93193316e44af9d4dbdeb228109b616fedaa816c6c35c6dd7186ac09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55bbcccb65-78qv4"
Feb 13 15:28:08.940599 kubelet[2521]: E0213 15:28:08.940505 2521 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0c2eb6f93193316e44af9d4dbdeb228109b616fedaa816c6c35c6dd7186ac09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55bbcccb65-78qv4"
Feb 13 15:28:08.940599 kubelet[2521]: E0213 15:28:08.940597 2521 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-55bbcccb65-78qv4_calico-apiserver(db472e10-e2a5-49de-9955-0d1cf7adcfd6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-55bbcccb65-78qv4_calico-apiserver(db472e10-e2a5-49de-9955-0d1cf7adcfd6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d0c2eb6f93193316e44af9d4dbdeb228109b616fedaa816c6c35c6dd7186ac09\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55bbcccb65-78qv4" podUID="db472e10-e2a5-49de-9955-0d1cf7adcfd6"
Feb 13 15:28:08.954467 containerd[1438]: time="2025-02-13T15:28:08.954407567Z" level=error msg="Failed to destroy network for sandbox \"8f7bee58858555566b8477fe6bb885380e3ec24102d973bc32a89681765c1508\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:28:08.954862 containerd[1438]: time="2025-02-13T15:28:08.954827674Z" level=error msg="encountered an error cleaning up failed sandbox \"8f7bee58858555566b8477fe6bb885380e3ec24102d973bc32a89681765c1508\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:28:08.954969 containerd[1438]: time="2025-02-13T15:28:08.954903213Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hmqx4,Uid:d3daff69-f8cf-4771-8db4-eb9251b67560,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"8f7bee58858555566b8477fe6bb885380e3ec24102d973bc32a89681765c1508\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:28:08.955265 kubelet[2521]: E0213 15:28:08.955224 2521 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f7bee58858555566b8477fe6bb885380e3ec24102d973bc32a89681765c1508\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:28:08.955341 kubelet[2521]: E0213 15:28:08.955288 2521 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f7bee58858555566b8477fe6bb885380e3ec24102d973bc32a89681765c1508\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hmqx4"
Feb 13 15:28:08.955341 kubelet[2521]: E0213 15:28:08.955308 2521 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f7bee58858555566b8477fe6bb885380e3ec24102d973bc32a89681765c1508\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hmqx4"
Feb 13 15:28:08.955449 kubelet[2521]: E0213 15:28:08.955412 2521 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hmqx4_calico-system(d3daff69-f8cf-4771-8db4-eb9251b67560)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hmqx4_calico-system(d3daff69-f8cf-4771-8db4-eb9251b67560)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8f7bee58858555566b8477fe6bb885380e3ec24102d973bc32a89681765c1508\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hmqx4" podUID="d3daff69-f8cf-4771-8db4-eb9251b67560"
Feb 13 15:28:08.958213 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8f7bee58858555566b8477fe6bb885380e3ec24102d973bc32a89681765c1508-shm.mount: Deactivated successfully.
Feb 13 15:28:08.959988 containerd[1438]: time="2025-02-13T15:28:08.959829791Z" level=error msg="Failed to destroy network for sandbox \"86147124bed1f9993203b2167e20e2e93bb5e10c9702e17942935a551ce02d6c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:28:08.961103 containerd[1438]: time="2025-02-13T15:28:08.960489999Z" level=error msg="encountered an error cleaning up failed sandbox \"86147124bed1f9993203b2167e20e2e93bb5e10c9702e17942935a551ce02d6c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:28:08.962430 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-86147124bed1f9993203b2167e20e2e93bb5e10c9702e17942935a551ce02d6c-shm.mount: Deactivated successfully.
Feb 13 15:28:08.963385 containerd[1438]: time="2025-02-13T15:28:08.963345208Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xrphc,Uid:be132fa9-3a1c-4777-b6f8-2618a1865453,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"86147124bed1f9993203b2167e20e2e93bb5e10c9702e17942935a551ce02d6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:28:08.964421 kubelet[2521]: E0213 15:28:08.964302 2521 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86147124bed1f9993203b2167e20e2e93bb5e10c9702e17942935a551ce02d6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:28:08.964421 kubelet[2521]: E0213 15:28:08.964359 2521 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86147124bed1f9993203b2167e20e2e93bb5e10c9702e17942935a551ce02d6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-xrphc"
Feb 13 15:28:08.964421 kubelet[2521]: E0213 15:28:08.964377 2521 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86147124bed1f9993203b2167e20e2e93bb5e10c9702e17942935a551ce02d6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-xrphc"
Feb 13 15:28:08.964574 kubelet[2521]: E0213 15:28:08.964423 2521 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-xrphc_kube-system(be132fa9-3a1c-4777-b6f8-2618a1865453)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-xrphc_kube-system(be132fa9-3a1c-4777-b6f8-2618a1865453)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"86147124bed1f9993203b2167e20e2e93bb5e10c9702e17942935a551ce02d6c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-xrphc" podUID="be132fa9-3a1c-4777-b6f8-2618a1865453"
Feb 13 15:28:08.970626 containerd[1438]: time="2025-02-13T15:28:08.970572012Z" level=error msg="Failed to destroy network for sandbox \"91ac3c9c17810de765c7ea367d4d4a805d2038f43c9051e647a42f6dd72df458\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:28:08.972431 containerd[1438]: time="2025-02-13T15:28:08.972367550Z" level=error msg="encountered an error cleaning up failed sandbox \"91ac3c9c17810de765c7ea367d4d4a805d2038f43c9051e647a42f6dd72df458\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:28:08.972537 containerd[1438]: time="2025-02-13T15:28:08.972459133Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bhdmb,Uid:48d6879b-40c7-4fb4-9137-f94a1e0bf631,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"91ac3c9c17810de765c7ea367d4d4a805d2038f43c9051e647a42f6dd72df458\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:28:08.972798 kubelet[2521]: E0213 15:28:08.972751 2521 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91ac3c9c17810de765c7ea367d4d4a805d2038f43c9051e647a42f6dd72df458\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:28:08.972866 kubelet[2521]: E0213 15:28:08.972827 2521 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91ac3c9c17810de765c7ea367d4d4a805d2038f43c9051e647a42f6dd72df458\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-bhdmb"
Feb 13 15:28:08.972866 kubelet[2521]: E0213 15:28:08.972847 2521 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91ac3c9c17810de765c7ea367d4d4a805d2038f43c9051e647a42f6dd72df458\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-bhdmb"
Feb 13 15:28:08.973130 kubelet[2521]: E0213 15:28:08.972953 2521 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-bhdmb_kube-system(48d6879b-40c7-4fb4-9137-f94a1e0bf631)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-bhdmb_kube-system(48d6879b-40c7-4fb4-9137-f94a1e0bf631)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"91ac3c9c17810de765c7ea367d4d4a805d2038f43c9051e647a42f6dd72df458\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-bhdmb" podUID="48d6879b-40c7-4fb4-9137-f94a1e0bf631"
Feb 13 15:28:08.987471 containerd[1438]: time="2025-02-13T15:28:08.987414270Z" level=error msg="Failed to destroy network for sandbox \"41b2565a38bfef8f3a666d78b84f6bb5e7b0dd9a901cae77a54f239a6a5c47de\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:28:08.987858 containerd[1438]: time="2025-02-13T15:28:08.987825975Z" level=error msg="encountered an error cleaning up failed sandbox \"41b2565a38bfef8f3a666d78b84f6bb5e7b0dd9a901cae77a54f239a6a5c47de\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:28:08.987936 containerd[1438]: time="2025-02-13T15:28:08.987892232Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c68d99f8f-wlrdk,Uid:b17d9265-9df2-40dd-a8b7-46383a0e17ce,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"41b2565a38bfef8f3a666d78b84f6bb5e7b0dd9a901cae77a54f239a6a5c47de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:28:08.988585 kubelet[2521]: E0213 15:28:08.988192 2521 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41b2565a38bfef8f3a666d78b84f6bb5e7b0dd9a901cae77a54f239a6a5c47de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:28:08.988585 kubelet[2521]: E0213 15:28:08.988273 2521 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41b2565a38bfef8f3a666d78b84f6bb5e7b0dd9a901cae77a54f239a6a5c47de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6c68d99f8f-wlrdk"
Feb 13 15:28:08.988585 kubelet[2521]: E0213 15:28:08.988297 2521 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41b2565a38bfef8f3a666d78b84f6bb5e7b0dd9a901cae77a54f239a6a5c47de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6c68d99f8f-wlrdk"
Feb 13 15:28:08.988818 kubelet[2521]: E0213 15:28:08.988345 2521 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6c68d99f8f-wlrdk_calico-system(b17d9265-9df2-40dd-a8b7-46383a0e17ce)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6c68d99f8f-wlrdk_calico-system(b17d9265-9df2-40dd-a8b7-46383a0e17ce)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"41b2565a38bfef8f3a666d78b84f6bb5e7b0dd9a901cae77a54f239a6a5c47de\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted
/var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6c68d99f8f-wlrdk" podUID="b17d9265-9df2-40dd-a8b7-46383a0e17ce" Feb 13 15:28:09.123082 containerd[1438]: time="2025-02-13T15:28:09.123011267Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:09.135186 containerd[1438]: time="2025-02-13T15:28:09.135019896Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Feb 13 15:28:09.146306 containerd[1438]: time="2025-02-13T15:28:09.146253375Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:09.148487 containerd[1438]: time="2025-02-13T15:28:09.148446794Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:09.149096 containerd[1438]: time="2025-02-13T15:28:09.149028217Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 3.690513688s" Feb 13 15:28:09.149096 containerd[1438]: time="2025-02-13T15:28:09.149061585Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Feb 13 15:28:09.155736 containerd[1438]: time="2025-02-13T15:28:09.155689733Z" level=info msg="CreateContainer within sandbox \"ce807cc14a06288c004cb56c172c910d527ac79f3401ce6c601af4e44c400266\" for container 
&ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 15:28:09.169628 containerd[1438]: time="2025-02-13T15:28:09.169579465Z" level=info msg="CreateContainer within sandbox \"ce807cc14a06288c004cb56c172c910d527ac79f3401ce6c601af4e44c400266\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7cb5523a6feaefacba06bd13956a7fe77c8a42a73483942d1411e4b422435309\"" Feb 13 15:28:09.170325 containerd[1438]: time="2025-02-13T15:28:09.170300522Z" level=info msg="StartContainer for \"7cb5523a6feaefacba06bd13956a7fe77c8a42a73483942d1411e4b422435309\"" Feb 13 15:28:09.224248 systemd[1]: Started cri-containerd-7cb5523a6feaefacba06bd13956a7fe77c8a42a73483942d1411e4b422435309.scope - libcontainer container 7cb5523a6feaefacba06bd13956a7fe77c8a42a73483942d1411e4b422435309. Feb 13 15:28:09.260400 containerd[1438]: time="2025-02-13T15:28:09.260327275Z" level=info msg="StartContainer for \"7cb5523a6feaefacba06bd13956a7fe77c8a42a73483942d1411e4b422435309\" returns successfully" Feb 13 15:28:09.459423 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 13 15:28:09.459539 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Feb 13 15:28:09.582602 kubelet[2521]: I0213 15:28:09.582574 2521 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9dd50c1169bff60764c79272474e7ca909a0107af7a189c3cbeed0f7c50a856c" Feb 13 15:28:09.583369 containerd[1438]: time="2025-02-13T15:28:09.583330254Z" level=info msg="StopPodSandbox for \"9dd50c1169bff60764c79272474e7ca909a0107af7a189c3cbeed0f7c50a856c\"" Feb 13 15:28:09.584339 containerd[1438]: time="2025-02-13T15:28:09.583498255Z" level=info msg="Ensure that sandbox 9dd50c1169bff60764c79272474e7ca909a0107af7a189c3cbeed0f7c50a856c in task-service has been cleanup successfully" Feb 13 15:28:09.584679 containerd[1438]: time="2025-02-13T15:28:09.584653859Z" level=info msg="TearDown network for sandbox \"9dd50c1169bff60764c79272474e7ca909a0107af7a189c3cbeed0f7c50a856c\" successfully" Feb 13 15:28:09.584708 containerd[1438]: time="2025-02-13T15:28:09.584679666Z" level=info msg="StopPodSandbox for \"9dd50c1169bff60764c79272474e7ca909a0107af7a189c3cbeed0f7c50a856c\" returns successfully" Feb 13 15:28:09.585186 containerd[1438]: time="2025-02-13T15:28:09.585158823Z" level=info msg="StopPodSandbox for \"7cd8e48fe4cf11325abf60135b81eee57f7a784d7d0bcf06096fcdf01876d51b\"" Feb 13 15:28:09.585254 containerd[1438]: time="2025-02-13T15:28:09.585239763Z" level=info msg="TearDown network for sandbox \"7cd8e48fe4cf11325abf60135b81eee57f7a784d7d0bcf06096fcdf01876d51b\" successfully" Feb 13 15:28:09.585254 containerd[1438]: time="2025-02-13T15:28:09.585252326Z" level=info msg="StopPodSandbox for \"7cd8e48fe4cf11325abf60135b81eee57f7a784d7d0bcf06096fcdf01876d51b\" returns successfully" Feb 13 15:28:09.585461 kubelet[2521]: I0213 15:28:09.585443 2521 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91ac3c9c17810de765c7ea367d4d4a805d2038f43c9051e647a42f6dd72df458" Feb 13 15:28:09.586214 containerd[1438]: time="2025-02-13T15:28:09.586183835Z" level=info msg="StopPodSandbox for 
\"233214fc7bb247cadbdaa1e736dbf4f26b5ab5d9169eb6a9c3f71561a51e36c2\"" Feb 13 15:28:09.589640 containerd[1438]: time="2025-02-13T15:28:09.586333232Z" level=info msg="TearDown network for sandbox \"233214fc7bb247cadbdaa1e736dbf4f26b5ab5d9169eb6a9c3f71561a51e36c2\" successfully" Feb 13 15:28:09.589640 containerd[1438]: time="2025-02-13T15:28:09.586358958Z" level=info msg="StopPodSandbox for \"233214fc7bb247cadbdaa1e736dbf4f26b5ab5d9169eb6a9c3f71561a51e36c2\" returns successfully" Feb 13 15:28:09.589640 containerd[1438]: time="2025-02-13T15:28:09.586458703Z" level=info msg="StopPodSandbox for \"91ac3c9c17810de765c7ea367d4d4a805d2038f43c9051e647a42f6dd72df458\"" Feb 13 15:28:09.589640 containerd[1438]: time="2025-02-13T15:28:09.586727609Z" level=info msg="Ensure that sandbox 91ac3c9c17810de765c7ea367d4d4a805d2038f43c9051e647a42f6dd72df458 in task-service has been cleanup successfully" Feb 13 15:28:09.589640 containerd[1438]: time="2025-02-13T15:28:09.587039325Z" level=info msg="TearDown network for sandbox \"91ac3c9c17810de765c7ea367d4d4a805d2038f43c9051e647a42f6dd72df458\" successfully" Feb 13 15:28:09.589640 containerd[1438]: time="2025-02-13T15:28:09.587091378Z" level=info msg="StopPodSandbox for \"91ac3c9c17810de765c7ea367d4d4a805d2038f43c9051e647a42f6dd72df458\" returns successfully" Feb 13 15:28:09.589640 containerd[1438]: time="2025-02-13T15:28:09.587604464Z" level=info msg="StopPodSandbox for \"b2bddd7fb16d5ae0064f9701b94ee6a0427769ed88e47c2918d43be64eaad657\"" Feb 13 15:28:09.589640 containerd[1438]: time="2025-02-13T15:28:09.587701928Z" level=info msg="TearDown network for sandbox \"b2bddd7fb16d5ae0064f9701b94ee6a0427769ed88e47c2918d43be64eaad657\" successfully" Feb 13 15:28:09.589640 containerd[1438]: time="2025-02-13T15:28:09.587712291Z" level=info msg="StopPodSandbox for \"b2bddd7fb16d5ae0064f9701b94ee6a0427769ed88e47c2918d43be64eaad657\" returns successfully" Feb 13 15:28:09.589640 containerd[1438]: time="2025-02-13T15:28:09.587845323Z" level=info 
msg="StopPodSandbox for \"936cf36bd18b5d423427d367f40192355a28d275d154bbc72428cc0a0db2ce21\"" Feb 13 15:28:09.589640 containerd[1438]: time="2025-02-13T15:28:09.587915100Z" level=info msg="TearDown network for sandbox \"936cf36bd18b5d423427d367f40192355a28d275d154bbc72428cc0a0db2ce21\" successfully" Feb 13 15:28:09.589640 containerd[1438]: time="2025-02-13T15:28:09.587924863Z" level=info msg="StopPodSandbox for \"936cf36bd18b5d423427d367f40192355a28d275d154bbc72428cc0a0db2ce21\" returns successfully" Feb 13 15:28:09.589640 containerd[1438]: time="2025-02-13T15:28:09.588364051Z" level=info msg="StopPodSandbox for \"2253f70f58a228177dae68e6db7242302fb163a8e7df684f97a846e3d6faf81d\"" Feb 13 15:28:09.589640 containerd[1438]: time="2025-02-13T15:28:09.588449792Z" level=info msg="TearDown network for sandbox \"2253f70f58a228177dae68e6db7242302fb163a8e7df684f97a846e3d6faf81d\" successfully" Feb 13 15:28:09.589640 containerd[1438]: time="2025-02-13T15:28:09.588460474Z" level=info msg="StopPodSandbox for \"2253f70f58a228177dae68e6db7242302fb163a8e7df684f97a846e3d6faf81d\" returns successfully" Feb 13 15:28:09.589640 containerd[1438]: time="2025-02-13T15:28:09.588599709Z" level=info msg="StopPodSandbox for \"365686e2ea3126d4074c6dfa28c3e1ea5feabc62ece6b663740e99931c83d2ab\"" Feb 13 15:28:09.589640 containerd[1438]: time="2025-02-13T15:28:09.588664284Z" level=info msg="TearDown network for sandbox \"365686e2ea3126d4074c6dfa28c3e1ea5feabc62ece6b663740e99931c83d2ab\" successfully" Feb 13 15:28:09.589640 containerd[1438]: time="2025-02-13T15:28:09.588678848Z" level=info msg="StopPodSandbox for \"365686e2ea3126d4074c6dfa28c3e1ea5feabc62ece6b663740e99931c83d2ab\" returns successfully" Feb 13 15:28:09.590676 containerd[1438]: time="2025-02-13T15:28:09.590600240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55bbcccb65-mmhbv,Uid:2330a284-0835-4d9f-929e-909c050006b6,Namespace:calico-apiserver,Attempt:5,}" Feb 13 15:28:09.590852 containerd[1438]: 
time="2025-02-13T15:28:09.590620645Z" level=info msg="StopPodSandbox for \"3212d16950422236b26587f9b3647a1dbfd28814e3a0abc8871bf58f07a51551\"" Feb 13 15:28:09.590965 containerd[1438]: time="2025-02-13T15:28:09.590940483Z" level=info msg="TearDown network for sandbox \"3212d16950422236b26587f9b3647a1dbfd28814e3a0abc8871bf58f07a51551\" successfully" Feb 13 15:28:09.590965 containerd[1438]: time="2025-02-13T15:28:09.590963489Z" level=info msg="StopPodSandbox for \"3212d16950422236b26587f9b3647a1dbfd28814e3a0abc8871bf58f07a51551\" returns successfully" Feb 13 15:28:09.591505 containerd[1438]: time="2025-02-13T15:28:09.591449889Z" level=info msg="StopPodSandbox for \"9d676e713a9f9bdb0ef333af0423e3a978a265863987d14276ec5d59f17734f0\"" Feb 13 15:28:09.592376 containerd[1438]: time="2025-02-13T15:28:09.592312180Z" level=info msg="TearDown network for sandbox \"9d676e713a9f9bdb0ef333af0423e3a978a265863987d14276ec5d59f17734f0\" successfully" Feb 13 15:28:09.592376 containerd[1438]: time="2025-02-13T15:28:09.592335386Z" level=info msg="StopPodSandbox for \"9d676e713a9f9bdb0ef333af0423e3a978a265863987d14276ec5d59f17734f0\" returns successfully" Feb 13 15:28:09.592551 kubelet[2521]: E0213 15:28:09.592518 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:09.592603 kubelet[2521]: I0213 15:28:09.592588 2521 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f7bee58858555566b8477fe6bb885380e3ec24102d973bc32a89681765c1508" Feb 13 15:28:09.592844 containerd[1438]: time="2025-02-13T15:28:09.592819905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bhdmb,Uid:48d6879b-40c7-4fb4-9137-f94a1e0bf631,Namespace:kube-system,Attempt:5,}" Feb 13 15:28:09.593371 containerd[1438]: time="2025-02-13T15:28:09.592962140Z" level=info msg="StopPodSandbox for 
\"8f7bee58858555566b8477fe6bb885380e3ec24102d973bc32a89681765c1508\"" Feb 13 15:28:09.593483 containerd[1438]: time="2025-02-13T15:28:09.593433216Z" level=info msg="Ensure that sandbox 8f7bee58858555566b8477fe6bb885380e3ec24102d973bc32a89681765c1508 in task-service has been cleanup successfully" Feb 13 15:28:09.593691 containerd[1438]: time="2025-02-13T15:28:09.593629464Z" level=info msg="TearDown network for sandbox \"8f7bee58858555566b8477fe6bb885380e3ec24102d973bc32a89681765c1508\" successfully" Feb 13 15:28:09.593788 containerd[1438]: time="2025-02-13T15:28:09.593705563Z" level=info msg="StopPodSandbox for \"8f7bee58858555566b8477fe6bb885380e3ec24102d973bc32a89681765c1508\" returns successfully" Feb 13 15:28:09.595280 containerd[1438]: time="2025-02-13T15:28:09.594787588Z" level=info msg="StopPodSandbox for \"459eef6c4c6e99a8aa588ac0d3d9efe223669135a0edbec7c0d02674b0ed92fa\"" Feb 13 15:28:09.595280 containerd[1438]: time="2025-02-13T15:28:09.594935265Z" level=info msg="TearDown network for sandbox \"459eef6c4c6e99a8aa588ac0d3d9efe223669135a0edbec7c0d02674b0ed92fa\" successfully" Feb 13 15:28:09.595280 containerd[1438]: time="2025-02-13T15:28:09.594947468Z" level=info msg="StopPodSandbox for \"459eef6c4c6e99a8aa588ac0d3d9efe223669135a0edbec7c0d02674b0ed92fa\" returns successfully" Feb 13 15:28:09.595874 containerd[1438]: time="2025-02-13T15:28:09.595848249Z" level=info msg="StopPodSandbox for \"5bcbbbaf93ca2f8c82394fbd2b1b82ed092a9669f081bf4b7be356296412d245\"" Feb 13 15:28:09.595935 containerd[1438]: time="2025-02-13T15:28:09.595918026Z" level=info msg="TearDown network for sandbox \"5bcbbbaf93ca2f8c82394fbd2b1b82ed092a9669f081bf4b7be356296412d245\" successfully" Feb 13 15:28:09.595935 containerd[1438]: time="2025-02-13T15:28:09.595927148Z" level=info msg="StopPodSandbox for \"5bcbbbaf93ca2f8c82394fbd2b1b82ed092a9669f081bf4b7be356296412d245\" returns successfully" Feb 13 15:28:09.596635 containerd[1438]: time="2025-02-13T15:28:09.596507891Z" level=info 
msg="StopPodSandbox for \"a6226e92350381ab768784fb94d579846ab05715195002b23c3ad52b9bb59176\"" Feb 13 15:28:09.596635 containerd[1438]: time="2025-02-13T15:28:09.596593032Z" level=info msg="TearDown network for sandbox \"a6226e92350381ab768784fb94d579846ab05715195002b23c3ad52b9bb59176\" successfully" Feb 13 15:28:09.596635 containerd[1438]: time="2025-02-13T15:28:09.596602434Z" level=info msg="StopPodSandbox for \"a6226e92350381ab768784fb94d579846ab05715195002b23c3ad52b9bb59176\" returns successfully" Feb 13 15:28:09.597001 containerd[1438]: time="2025-02-13T15:28:09.596979087Z" level=info msg="StopPodSandbox for \"1e575dbffd3d4c245a8c3a4e13c078d1720170ccc9f1a8dfbda6639dd922a63d\"" Feb 13 15:28:09.597341 containerd[1438]: time="2025-02-13T15:28:09.597246352Z" level=info msg="TearDown network for sandbox \"1e575dbffd3d4c245a8c3a4e13c078d1720170ccc9f1a8dfbda6639dd922a63d\" successfully" Feb 13 15:28:09.597341 containerd[1438]: time="2025-02-13T15:28:09.597272279Z" level=info msg="StopPodSandbox for \"1e575dbffd3d4c245a8c3a4e13c078d1720170ccc9f1a8dfbda6639dd922a63d\" returns successfully" Feb 13 15:28:09.597951 containerd[1438]: time="2025-02-13T15:28:09.597714747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hmqx4,Uid:d3daff69-f8cf-4771-8db4-eb9251b67560,Namespace:calico-system,Attempt:5,}" Feb 13 15:28:09.602175 kubelet[2521]: I0213 15:28:09.601649 2521 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41b2565a38bfef8f3a666d78b84f6bb5e7b0dd9a901cae77a54f239a6a5c47de" Feb 13 15:28:09.602707 containerd[1438]: time="2025-02-13T15:28:09.602401459Z" level=info msg="StopPodSandbox for \"41b2565a38bfef8f3a666d78b84f6bb5e7b0dd9a901cae77a54f239a6a5c47de\"" Feb 13 15:28:09.603153 containerd[1438]: time="2025-02-13T15:28:09.603123556Z" level=info msg="Ensure that sandbox 41b2565a38bfef8f3a666d78b84f6bb5e7b0dd9a901cae77a54f239a6a5c47de in task-service has been cleanup successfully" Feb 13 15:28:09.603638 
containerd[1438]: time="2025-02-13T15:28:09.603551541Z" level=info msg="TearDown network for sandbox \"41b2565a38bfef8f3a666d78b84f6bb5e7b0dd9a901cae77a54f239a6a5c47de\" successfully" Feb 13 15:28:09.603638 containerd[1438]: time="2025-02-13T15:28:09.603572466Z" level=info msg="StopPodSandbox for \"41b2565a38bfef8f3a666d78b84f6bb5e7b0dd9a901cae77a54f239a6a5c47de\" returns successfully" Feb 13 15:28:09.604425 containerd[1438]: time="2025-02-13T15:28:09.604401750Z" level=info msg="StopPodSandbox for \"35e63fa6e367659d0929bdff839d4d1b6c5f686fc9f977877b19d16edee3e4dd\"" Feb 13 15:28:09.604581 containerd[1438]: time="2025-02-13T15:28:09.604562389Z" level=info msg="TearDown network for sandbox \"35e63fa6e367659d0929bdff839d4d1b6c5f686fc9f977877b19d16edee3e4dd\" successfully" Feb 13 15:28:09.604618 containerd[1438]: time="2025-02-13T15:28:09.604579034Z" level=info msg="StopPodSandbox for \"35e63fa6e367659d0929bdff839d4d1b6c5f686fc9f977877b19d16edee3e4dd\" returns successfully" Feb 13 15:28:09.604951 containerd[1438]: time="2025-02-13T15:28:09.604931320Z" level=info msg="StopPodSandbox for \"c874b9a1f46bbd68aaf2a7ea1fe0a5aa6e80af9b640025e9e2baccd227229769\"" Feb 13 15:28:09.605406 containerd[1438]: time="2025-02-13T15:28:09.605378270Z" level=info msg="TearDown network for sandbox \"c874b9a1f46bbd68aaf2a7ea1fe0a5aa6e80af9b640025e9e2baccd227229769\" successfully" Feb 13 15:28:09.605406 containerd[1438]: time="2025-02-13T15:28:09.605400595Z" level=info msg="StopPodSandbox for \"c874b9a1f46bbd68aaf2a7ea1fe0a5aa6e80af9b640025e9e2baccd227229769\" returns successfully" Feb 13 15:28:09.606372 kubelet[2521]: E0213 15:28:09.605996 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:09.606515 containerd[1438]: time="2025-02-13T15:28:09.606120092Z" level=info msg="StopPodSandbox for 
\"93372508d2f96d2502f51ec2872c102fd38e529d8ae7a472f2ade1091b6b7b5e\"" Feb 13 15:28:09.606515 containerd[1438]: time="2025-02-13T15:28:09.606205033Z" level=info msg="TearDown network for sandbox \"93372508d2f96d2502f51ec2872c102fd38e529d8ae7a472f2ade1091b6b7b5e\" successfully" Feb 13 15:28:09.606515 containerd[1438]: time="2025-02-13T15:28:09.606214835Z" level=info msg="StopPodSandbox for \"93372508d2f96d2502f51ec2872c102fd38e529d8ae7a472f2ade1091b6b7b5e\" returns successfully" Feb 13 15:28:09.607543 containerd[1438]: time="2025-02-13T15:28:09.607300782Z" level=info msg="StopPodSandbox for \"ff8fd863173aed4a1fc618a73607b9ac08318473fa88fefca838b1ac4c6441db\"" Feb 13 15:28:09.607543 containerd[1438]: time="2025-02-13T15:28:09.607462542Z" level=info msg="TearDown network for sandbox \"ff8fd863173aed4a1fc618a73607b9ac08318473fa88fefca838b1ac4c6441db\" successfully" Feb 13 15:28:09.607543 containerd[1438]: time="2025-02-13T15:28:09.607476545Z" level=info msg="StopPodSandbox for \"ff8fd863173aed4a1fc618a73607b9ac08318473fa88fefca838b1ac4c6441db\" returns successfully" Feb 13 15:28:09.610151 containerd[1438]: time="2025-02-13T15:28:09.609268905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c68d99f8f-wlrdk,Uid:b17d9265-9df2-40dd-a8b7-46383a0e17ce,Namespace:calico-system,Attempt:5,}" Feb 13 15:28:09.612103 kubelet[2521]: I0213 15:28:09.611600 2521 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0c2eb6f93193316e44af9d4dbdeb228109b616fedaa816c6c35c6dd7186ac09" Feb 13 15:28:09.612761 containerd[1438]: time="2025-02-13T15:28:09.612717593Z" level=info msg="StopPodSandbox for \"d0c2eb6f93193316e44af9d4dbdeb228109b616fedaa816c6c35c6dd7186ac09\"" Feb 13 15:28:09.613058 containerd[1438]: time="2025-02-13T15:28:09.612877472Z" level=info msg="Ensure that sandbox d0c2eb6f93193316e44af9d4dbdeb228109b616fedaa816c6c35c6dd7186ac09 in task-service has been cleanup successfully" Feb 13 15:28:09.615141 
containerd[1438]: time="2025-02-13T15:28:09.614227123Z" level=info msg="TearDown network for sandbox \"d0c2eb6f93193316e44af9d4dbdeb228109b616fedaa816c6c35c6dd7186ac09\" successfully" Feb 13 15:28:09.615141 containerd[1438]: time="2025-02-13T15:28:09.614258611Z" level=info msg="StopPodSandbox for \"d0c2eb6f93193316e44af9d4dbdeb228109b616fedaa816c6c35c6dd7186ac09\" returns successfully" Feb 13 15:28:09.615366 containerd[1438]: time="2025-02-13T15:28:09.615245974Z" level=info msg="StopPodSandbox for \"0f2c7fb92a9842fe2f78056fed98131e4b938b6c6d6db9d280cff119ae6753f6\"" Feb 13 15:28:09.615366 containerd[1438]: time="2025-02-13T15:28:09.615341077Z" level=info msg="TearDown network for sandbox \"0f2c7fb92a9842fe2f78056fed98131e4b938b6c6d6db9d280cff119ae6753f6\" successfully" Feb 13 15:28:09.615366 containerd[1438]: time="2025-02-13T15:28:09.615352200Z" level=info msg="StopPodSandbox for \"0f2c7fb92a9842fe2f78056fed98131e4b938b6c6d6db9d280cff119ae6753f6\" returns successfully" Feb 13 15:28:09.615750 containerd[1438]: time="2025-02-13T15:28:09.615720570Z" level=info msg="StopPodSandbox for \"ebd99b89a667d6242d41d6f195e59eb6706c8195f2e4a6e0ccb977bed11e5855\"" Feb 13 15:28:09.615808 containerd[1438]: time="2025-02-13T15:28:09.615798949Z" level=info msg="TearDown network for sandbox \"ebd99b89a667d6242d41d6f195e59eb6706c8195f2e4a6e0ccb977bed11e5855\" successfully" Feb 13 15:28:09.615831 containerd[1438]: time="2025-02-13T15:28:09.615808632Z" level=info msg="StopPodSandbox for \"ebd99b89a667d6242d41d6f195e59eb6706c8195f2e4a6e0ccb977bed11e5855\" returns successfully" Feb 13 15:28:09.616190 containerd[1438]: time="2025-02-13T15:28:09.616156997Z" level=info msg="StopPodSandbox for \"39d45d65b330807fcecc7aa43e2979e258bdabd910b8af148e084f968f291a28\"" Feb 13 15:28:09.616240 containerd[1438]: time="2025-02-13T15:28:09.616230215Z" level=info msg="TearDown network for sandbox \"39d45d65b330807fcecc7aa43e2979e258bdabd910b8af148e084f968f291a28\" successfully" Feb 13 15:28:09.616240 
containerd[1438]: time="2025-02-13T15:28:09.616239898Z" level=info msg="StopPodSandbox for \"39d45d65b330807fcecc7aa43e2979e258bdabd910b8af148e084f968f291a28\" returns successfully" Feb 13 15:28:09.616903 containerd[1438]: time="2025-02-13T15:28:09.616731939Z" level=info msg="StopPodSandbox for \"0bd54dba73f69ad05fdd64de844dfb9298048bde545384bf327983a09dc6b98c\"" Feb 13 15:28:09.616903 containerd[1438]: time="2025-02-13T15:28:09.616805237Z" level=info msg="TearDown network for sandbox \"0bd54dba73f69ad05fdd64de844dfb9298048bde545384bf327983a09dc6b98c\" successfully" Feb 13 15:28:09.616903 containerd[1438]: time="2025-02-13T15:28:09.616814159Z" level=info msg="StopPodSandbox for \"0bd54dba73f69ad05fdd64de844dfb9298048bde545384bf327983a09dc6b98c\" returns successfully" Feb 13 15:28:09.617916 containerd[1438]: time="2025-02-13T15:28:09.617610154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55bbcccb65-78qv4,Uid:db472e10-e2a5-49de-9955-0d1cf7adcfd6,Namespace:calico-apiserver,Attempt:5,}" Feb 13 15:28:09.618977 kubelet[2521]: I0213 15:28:09.618960 2521 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86147124bed1f9993203b2167e20e2e93bb5e10c9702e17942935a551ce02d6c" Feb 13 15:28:09.621307 containerd[1438]: time="2025-02-13T15:28:09.620816582Z" level=info msg="StopPodSandbox for \"86147124bed1f9993203b2167e20e2e93bb5e10c9702e17942935a551ce02d6c\"" Feb 13 15:28:09.621307 containerd[1438]: time="2025-02-13T15:28:09.621066403Z" level=info msg="Ensure that sandbox 86147124bed1f9993203b2167e20e2e93bb5e10c9702e17942935a551ce02d6c in task-service has been cleanup successfully" Feb 13 15:28:09.621653 containerd[1438]: time="2025-02-13T15:28:09.621309303Z" level=info msg="TearDown network for sandbox \"86147124bed1f9993203b2167e20e2e93bb5e10c9702e17942935a551ce02d6c\" successfully" Feb 13 15:28:09.621653 containerd[1438]: time="2025-02-13T15:28:09.621327147Z" level=info msg="StopPodSandbox for 
\"86147124bed1f9993203b2167e20e2e93bb5e10c9702e17942935a551ce02d6c\" returns successfully" Feb 13 15:28:09.622043 containerd[1438]: time="2025-02-13T15:28:09.621864599Z" level=info msg="StopPodSandbox for \"9ea262909c8919e05cb7fe199a283a585036b3c7d6a8a64160d8cb41a91b3d3d\"" Feb 13 15:28:09.622930 containerd[1438]: time="2025-02-13T15:28:09.622083013Z" level=info msg="TearDown network for sandbox \"9ea262909c8919e05cb7fe199a283a585036b3c7d6a8a64160d8cb41a91b3d3d\" successfully" Feb 13 15:28:09.622930 containerd[1438]: time="2025-02-13T15:28:09.622252775Z" level=info msg="StopPodSandbox for \"9ea262909c8919e05cb7fe199a283a585036b3c7d6a8a64160d8cb41a91b3d3d\" returns successfully" Feb 13 15:28:09.622930 containerd[1438]: time="2025-02-13T15:28:09.622666476Z" level=info msg="StopPodSandbox for \"7ebcedfe59cbd698c1faf3d81c5cddc2808250aba7f1b5b95ba461e61be017a4\"" Feb 13 15:28:09.622930 containerd[1438]: time="2025-02-13T15:28:09.622787186Z" level=info msg="TearDown network for sandbox \"7ebcedfe59cbd698c1faf3d81c5cddc2808250aba7f1b5b95ba461e61be017a4\" successfully" Feb 13 15:28:09.622930 containerd[1438]: time="2025-02-13T15:28:09.622798229Z" level=info msg="StopPodSandbox for \"7ebcedfe59cbd698c1faf3d81c5cddc2808250aba7f1b5b95ba461e61be017a4\" returns successfully" Feb 13 15:28:09.623399 containerd[1438]: time="2025-02-13T15:28:09.623360927Z" level=info msg="StopPodSandbox for \"051fcd08b73edccb4ccb159d0ff5c057bc8f7b121c26a4566843e5c34094c720\"" Feb 13 15:28:09.623473 containerd[1438]: time="2025-02-13T15:28:09.623456070Z" level=info msg="TearDown network for sandbox \"051fcd08b73edccb4ccb159d0ff5c057bc8f7b121c26a4566843e5c34094c720\" successfully" Feb 13 15:28:09.623473 containerd[1438]: time="2025-02-13T15:28:09.623467313Z" level=info msg="StopPodSandbox for \"051fcd08b73edccb4ccb159d0ff5c057bc8f7b121c26a4566843e5c34094c720\" returns successfully" Feb 13 15:28:09.624863 containerd[1438]: time="2025-02-13T15:28:09.624750628Z" level=info msg="StopPodSandbox for 
\"0bbc8283fb1680fd35c9202a2ab37d2c9e7100b2f1110b5295aeee2d24e7261e\"" Feb 13 15:28:09.624946 containerd[1438]: time="2025-02-13T15:28:09.624926952Z" level=info msg="TearDown network for sandbox \"0bbc8283fb1680fd35c9202a2ab37d2c9e7100b2f1110b5295aeee2d24e7261e\" successfully" Feb 13 15:28:09.624946 containerd[1438]: time="2025-02-13T15:28:09.624939435Z" level=info msg="StopPodSandbox for \"0bbc8283fb1680fd35c9202a2ab37d2c9e7100b2f1110b5295aeee2d24e7261e\" returns successfully" Feb 13 15:28:09.625197 kubelet[2521]: E0213 15:28:09.625164 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:09.625538 containerd[1438]: time="2025-02-13T15:28:09.625514296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xrphc,Uid:be132fa9-3a1c-4777-b6f8-2618a1865453,Namespace:kube-system,Attempt:5,}" Feb 13 15:28:09.884205 systemd[1]: run-netns-cni\x2d8b780577\x2d68c2\x2d6736\x2d56a7\x2d35b66147725b.mount: Deactivated successfully. Feb 13 15:28:09.884290 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-41b2565a38bfef8f3a666d78b84f6bb5e7b0dd9a901cae77a54f239a6a5c47de-shm.mount: Deactivated successfully. Feb 13 15:28:09.884341 systemd[1]: run-netns-cni\x2d523f9bba\x2dc769\x2db554\x2d7c5e\x2ddb370fb208fa.mount: Deactivated successfully. Feb 13 15:28:09.884390 systemd[1]: run-netns-cni\x2d7a8c0f5e\x2d0bc3\x2d7866\x2d3452\x2de8758e7576fd.mount: Deactivated successfully. Feb 13 15:28:09.884447 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-91ac3c9c17810de765c7ea367d4d4a805d2038f43c9051e647a42f6dd72df458-shm.mount: Deactivated successfully. Feb 13 15:28:09.884494 systemd[1]: run-netns-cni\x2d7d49c24f\x2d4abc\x2d06dd\x2da4ca\x2d18573c468a9c.mount: Deactivated successfully. 
Feb 13 15:28:09.884538 systemd[1]: run-netns-cni\x2dc1f28481\x2dfc08\x2de824\x2d61d9\x2d14f7f663f6fb.mount: Deactivated successfully. Feb 13 15:28:09.884580 systemd[1]: run-netns-cni\x2d56a38b72\x2def80\x2d8544\x2d00ba\x2ddc2a204d991f.mount: Deactivated successfully. Feb 13 15:28:09.884623 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2204500902.mount: Deactivated successfully. Feb 13 15:28:10.311269 systemd-networkd[1373]: cali819e3fe5589: Link UP Feb 13 15:28:10.311445 systemd-networkd[1373]: cali819e3fe5589: Gained carrier Feb 13 15:28:10.324145 kubelet[2521]: I0213 15:28:10.323502 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-48qbp" podStartSLOduration=2.115263883 podStartE2EDuration="15.323480448s" podCreationTimestamp="2025-02-13 15:27:55 +0000 UTC" firstStartedPulling="2025-02-13 15:27:55.941682506 +0000 UTC m=+13.665180545" lastFinishedPulling="2025-02-13 15:28:09.149899071 +0000 UTC m=+26.873397110" observedRunningTime="2025-02-13 15:28:09.631645002 +0000 UTC m=+27.355143081" watchObservedRunningTime="2025-02-13 15:28:10.323480448 +0000 UTC m=+28.046978567" Feb 13 15:28:10.324586 containerd[1438]: 2025-02-13 15:28:09.693 [INFO][4554] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:28:10.324586 containerd[1438]: 2025-02-13 15:28:09.780 [INFO][4554] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--bhdmb-eth0 coredns-6f6b679f8f- kube-system 48d6879b-40c7-4fb4-9137-f94a1e0bf631 695 0 2025-02-13 15:27:48 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-bhdmb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali819e3fe5589 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} 
ContainerID="b48b2b049f4a486353fc8d15efbd6d3a4d4faa6681e1cc29ba61ede1bd1d7565" Namespace="kube-system" Pod="coredns-6f6b679f8f-bhdmb" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--bhdmb-" Feb 13 15:28:10.324586 containerd[1438]: 2025-02-13 15:28:09.794 [INFO][4554] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b48b2b049f4a486353fc8d15efbd6d3a4d4faa6681e1cc29ba61ede1bd1d7565" Namespace="kube-system" Pod="coredns-6f6b679f8f-bhdmb" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--bhdmb-eth0" Feb 13 15:28:10.324586 containerd[1438]: 2025-02-13 15:28:10.120 [INFO][4630] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b48b2b049f4a486353fc8d15efbd6d3a4d4faa6681e1cc29ba61ede1bd1d7565" HandleID="k8s-pod-network.b48b2b049f4a486353fc8d15efbd6d3a4d4faa6681e1cc29ba61ede1bd1d7565" Workload="localhost-k8s-coredns--6f6b679f8f--bhdmb-eth0" Feb 13 15:28:10.324586 containerd[1438]: 2025-02-13 15:28:10.252 [INFO][4630] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b48b2b049f4a486353fc8d15efbd6d3a4d4faa6681e1cc29ba61ede1bd1d7565" HandleID="k8s-pod-network.b48b2b049f4a486353fc8d15efbd6d3a4d4faa6681e1cc29ba61ede1bd1d7565" Workload="localhost-k8s-coredns--6f6b679f8f--bhdmb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000501ef0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-bhdmb", "timestamp":"2025-02-13 15:28:10.120274955 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:28:10.324586 containerd[1438]: 2025-02-13 15:28:10.261 [INFO][4630] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:28:10.324586 containerd[1438]: 2025-02-13 15:28:10.261 [INFO][4630] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 15:28:10.324586 containerd[1438]: 2025-02-13 15:28:10.262 [INFO][4630] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 15:28:10.324586 containerd[1438]: 2025-02-13 15:28:10.264 [INFO][4630] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b48b2b049f4a486353fc8d15efbd6d3a4d4faa6681e1cc29ba61ede1bd1d7565" host="localhost" Feb 13 15:28:10.324586 containerd[1438]: 2025-02-13 15:28:10.279 [INFO][4630] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 15:28:10.324586 containerd[1438]: 2025-02-13 15:28:10.288 [INFO][4630] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 15:28:10.324586 containerd[1438]: 2025-02-13 15:28:10.290 [INFO][4630] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 15:28:10.324586 containerd[1438]: 2025-02-13 15:28:10.292 [INFO][4630] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 15:28:10.324586 containerd[1438]: 2025-02-13 15:28:10.292 [INFO][4630] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b48b2b049f4a486353fc8d15efbd6d3a4d4faa6681e1cc29ba61ede1bd1d7565" host="localhost" Feb 13 15:28:10.324586 containerd[1438]: 2025-02-13 15:28:10.293 [INFO][4630] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b48b2b049f4a486353fc8d15efbd6d3a4d4faa6681e1cc29ba61ede1bd1d7565 Feb 13 15:28:10.324586 containerd[1438]: 2025-02-13 15:28:10.297 [INFO][4630] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b48b2b049f4a486353fc8d15efbd6d3a4d4faa6681e1cc29ba61ede1bd1d7565" host="localhost" Feb 13 15:28:10.324586 containerd[1438]: 2025-02-13 15:28:10.301 [INFO][4630] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.b48b2b049f4a486353fc8d15efbd6d3a4d4faa6681e1cc29ba61ede1bd1d7565" host="localhost" Feb 13 15:28:10.324586 containerd[1438]: 2025-02-13 15:28:10.301 [INFO][4630] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.b48b2b049f4a486353fc8d15efbd6d3a4d4faa6681e1cc29ba61ede1bd1d7565" host="localhost" Feb 13 15:28:10.324586 containerd[1438]: 2025-02-13 15:28:10.301 [INFO][4630] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 15:28:10.324586 containerd[1438]: 2025-02-13 15:28:10.301 [INFO][4630] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="b48b2b049f4a486353fc8d15efbd6d3a4d4faa6681e1cc29ba61ede1bd1d7565" HandleID="k8s-pod-network.b48b2b049f4a486353fc8d15efbd6d3a4d4faa6681e1cc29ba61ede1bd1d7565" Workload="localhost-k8s-coredns--6f6b679f8f--bhdmb-eth0" Feb 13 15:28:10.325266 containerd[1438]: 2025-02-13 15:28:10.305 [INFO][4554] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b48b2b049f4a486353fc8d15efbd6d3a4d4faa6681e1cc29ba61ede1bd1d7565" Namespace="kube-system" Pod="coredns-6f6b679f8f-bhdmb" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--bhdmb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--bhdmb-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"48d6879b-40c7-4fb4-9137-f94a1e0bf631", ResourceVersion:"695", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 27, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-bhdmb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali819e3fe5589", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:28:10.325266 containerd[1438]: 2025-02-13 15:28:10.305 [INFO][4554] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="b48b2b049f4a486353fc8d15efbd6d3a4d4faa6681e1cc29ba61ede1bd1d7565" Namespace="kube-system" Pod="coredns-6f6b679f8f-bhdmb" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--bhdmb-eth0" Feb 13 15:28:10.325266 containerd[1438]: 2025-02-13 15:28:10.305 [INFO][4554] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali819e3fe5589 ContainerID="b48b2b049f4a486353fc8d15efbd6d3a4d4faa6681e1cc29ba61ede1bd1d7565" Namespace="kube-system" Pod="coredns-6f6b679f8f-bhdmb" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--bhdmb-eth0" Feb 13 15:28:10.325266 containerd[1438]: 2025-02-13 15:28:10.313 [INFO][4554] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b48b2b049f4a486353fc8d15efbd6d3a4d4faa6681e1cc29ba61ede1bd1d7565" Namespace="kube-system" Pod="coredns-6f6b679f8f-bhdmb" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--bhdmb-eth0" Feb 13 
15:28:10.325266 containerd[1438]: 2025-02-13 15:28:10.313 [INFO][4554] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b48b2b049f4a486353fc8d15efbd6d3a4d4faa6681e1cc29ba61ede1bd1d7565" Namespace="kube-system" Pod="coredns-6f6b679f8f-bhdmb" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--bhdmb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--bhdmb-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"48d6879b-40c7-4fb4-9137-f94a1e0bf631", ResourceVersion:"695", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 27, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b48b2b049f4a486353fc8d15efbd6d3a4d4faa6681e1cc29ba61ede1bd1d7565", Pod:"coredns-6f6b679f8f-bhdmb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali819e3fe5589", MAC:"da:60:43:7f:d0:18", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:28:10.325266 containerd[1438]: 2025-02-13 15:28:10.322 [INFO][4554] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b48b2b049f4a486353fc8d15efbd6d3a4d4faa6681e1cc29ba61ede1bd1d7565" Namespace="kube-system" Pod="coredns-6f6b679f8f-bhdmb" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--bhdmb-eth0" Feb 13 15:28:10.346943 containerd[1438]: time="2025-02-13T15:28:10.346841977Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:28:10.346943 containerd[1438]: time="2025-02-13T15:28:10.346906753Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:28:10.346943 containerd[1438]: time="2025-02-13T15:28:10.346919556Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:28:10.347194 containerd[1438]: time="2025-02-13T15:28:10.347003175Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:28:10.376287 systemd[1]: Started cri-containerd-b48b2b049f4a486353fc8d15efbd6d3a4d4faa6681e1cc29ba61ede1bd1d7565.scope - libcontainer container b48b2b049f4a486353fc8d15efbd6d3a4d4faa6681e1cc29ba61ede1bd1d7565. 
Feb 13 15:28:10.401297 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:28:10.442912 systemd-networkd[1373]: cali48fe9c7b737: Link UP Feb 13 15:28:10.443736 systemd-networkd[1373]: cali48fe9c7b737: Gained carrier Feb 13 15:28:10.455505 systemd[1]: Started sshd@7-10.0.0.93:22-10.0.0.1:51786.service - OpenSSH per-connection server daemon (10.0.0.1:51786). Feb 13 15:28:10.459508 containerd[1438]: 2025-02-13 15:28:09.756 [INFO][4582] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:28:10.459508 containerd[1438]: 2025-02-13 15:28:09.807 [INFO][4582] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--xrphc-eth0 coredns-6f6b679f8f- kube-system be132fa9-3a1c-4777-b6f8-2618a1865453 694 0 2025-02-13 15:27:48 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-xrphc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali48fe9c7b737 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="d09cb0b9e2e7415fdfd22f19afac1b701d2c1daf1d7b31b1620a85e01846140d" Namespace="kube-system" Pod="coredns-6f6b679f8f-xrphc" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--xrphc-" Feb 13 15:28:10.459508 containerd[1438]: 2025-02-13 15:28:09.807 [INFO][4582] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d09cb0b9e2e7415fdfd22f19afac1b701d2c1daf1d7b31b1620a85e01846140d" Namespace="kube-system" Pod="coredns-6f6b679f8f-xrphc" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--xrphc-eth0" Feb 13 15:28:10.459508 containerd[1438]: 2025-02-13 15:28:10.130 [INFO][4635] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="d09cb0b9e2e7415fdfd22f19afac1b701d2c1daf1d7b31b1620a85e01846140d" HandleID="k8s-pod-network.d09cb0b9e2e7415fdfd22f19afac1b701d2c1daf1d7b31b1620a85e01846140d" Workload="localhost-k8s-coredns--6f6b679f8f--xrphc-eth0" Feb 13 15:28:10.459508 containerd[1438]: 2025-02-13 15:28:10.262 [INFO][4635] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d09cb0b9e2e7415fdfd22f19afac1b701d2c1daf1d7b31b1620a85e01846140d" HandleID="k8s-pod-network.d09cb0b9e2e7415fdfd22f19afac1b701d2c1daf1d7b31b1620a85e01846140d" Workload="localhost-k8s-coredns--6f6b679f8f--xrphc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400018bb00), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-xrphc", "timestamp":"2025-02-13 15:28:10.130041146 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:28:10.459508 containerd[1438]: 2025-02-13 15:28:10.263 [INFO][4635] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:28:10.459508 containerd[1438]: 2025-02-13 15:28:10.301 [INFO][4635] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 15:28:10.459508 containerd[1438]: 2025-02-13 15:28:10.301 [INFO][4635] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 15:28:10.459508 containerd[1438]: 2025-02-13 15:28:10.369 [INFO][4635] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d09cb0b9e2e7415fdfd22f19afac1b701d2c1daf1d7b31b1620a85e01846140d" host="localhost" Feb 13 15:28:10.459508 containerd[1438]: 2025-02-13 15:28:10.397 [INFO][4635] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 15:28:10.459508 containerd[1438]: 2025-02-13 15:28:10.414 [INFO][4635] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 15:28:10.459508 containerd[1438]: 2025-02-13 15:28:10.420 [INFO][4635] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 15:28:10.459508 containerd[1438]: 2025-02-13 15:28:10.423 [INFO][4635] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 15:28:10.459508 containerd[1438]: 2025-02-13 15:28:10.423 [INFO][4635] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d09cb0b9e2e7415fdfd22f19afac1b701d2c1daf1d7b31b1620a85e01846140d" host="localhost" Feb 13 15:28:10.459508 containerd[1438]: 2025-02-13 15:28:10.427 [INFO][4635] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d09cb0b9e2e7415fdfd22f19afac1b701d2c1daf1d7b31b1620a85e01846140d Feb 13 15:28:10.459508 containerd[1438]: 2025-02-13 15:28:10.431 [INFO][4635] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d09cb0b9e2e7415fdfd22f19afac1b701d2c1daf1d7b31b1620a85e01846140d" host="localhost" Feb 13 15:28:10.459508 containerd[1438]: 2025-02-13 15:28:10.437 [INFO][4635] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.d09cb0b9e2e7415fdfd22f19afac1b701d2c1daf1d7b31b1620a85e01846140d" host="localhost" Feb 13 15:28:10.459508 containerd[1438]: 2025-02-13 15:28:10.437 [INFO][4635] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.d09cb0b9e2e7415fdfd22f19afac1b701d2c1daf1d7b31b1620a85e01846140d" host="localhost" Feb 13 15:28:10.459508 containerd[1438]: 2025-02-13 15:28:10.437 [INFO][4635] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 15:28:10.459508 containerd[1438]: 2025-02-13 15:28:10.437 [INFO][4635] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="d09cb0b9e2e7415fdfd22f19afac1b701d2c1daf1d7b31b1620a85e01846140d" HandleID="k8s-pod-network.d09cb0b9e2e7415fdfd22f19afac1b701d2c1daf1d7b31b1620a85e01846140d" Workload="localhost-k8s-coredns--6f6b679f8f--xrphc-eth0" Feb 13 15:28:10.460122 containerd[1438]: 2025-02-13 15:28:10.440 [INFO][4582] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d09cb0b9e2e7415fdfd22f19afac1b701d2c1daf1d7b31b1620a85e01846140d" Namespace="kube-system" Pod="coredns-6f6b679f8f-xrphc" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--xrphc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--xrphc-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"be132fa9-3a1c-4777-b6f8-2618a1865453", ResourceVersion:"694", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 27, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-xrphc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali48fe9c7b737", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:28:10.460122 containerd[1438]: 2025-02-13 15:28:10.440 [INFO][4582] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="d09cb0b9e2e7415fdfd22f19afac1b701d2c1daf1d7b31b1620a85e01846140d" Namespace="kube-system" Pod="coredns-6f6b679f8f-xrphc" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--xrphc-eth0" Feb 13 15:28:10.460122 containerd[1438]: 2025-02-13 15:28:10.440 [INFO][4582] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali48fe9c7b737 ContainerID="d09cb0b9e2e7415fdfd22f19afac1b701d2c1daf1d7b31b1620a85e01846140d" Namespace="kube-system" Pod="coredns-6f6b679f8f-xrphc" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--xrphc-eth0" Feb 13 15:28:10.460122 containerd[1438]: 2025-02-13 15:28:10.443 [INFO][4582] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d09cb0b9e2e7415fdfd22f19afac1b701d2c1daf1d7b31b1620a85e01846140d" Namespace="kube-system" Pod="coredns-6f6b679f8f-xrphc" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--xrphc-eth0" Feb 13 
15:28:10.460122 containerd[1438]: 2025-02-13 15:28:10.444 [INFO][4582] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d09cb0b9e2e7415fdfd22f19afac1b701d2c1daf1d7b31b1620a85e01846140d" Namespace="kube-system" Pod="coredns-6f6b679f8f-xrphc" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--xrphc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--xrphc-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"be132fa9-3a1c-4777-b6f8-2618a1865453", ResourceVersion:"694", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 27, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d09cb0b9e2e7415fdfd22f19afac1b701d2c1daf1d7b31b1620a85e01846140d", Pod:"coredns-6f6b679f8f-xrphc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali48fe9c7b737", MAC:"2a:0d:44:ea:2c:0d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:28:10.460122 containerd[1438]: 2025-02-13 15:28:10.457 [INFO][4582] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d09cb0b9e2e7415fdfd22f19afac1b701d2c1daf1d7b31b1620a85e01846140d" Namespace="kube-system" Pod="coredns-6f6b679f8f-xrphc" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--xrphc-eth0" Feb 13 15:28:10.463022 containerd[1438]: time="2025-02-13T15:28:10.462984225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bhdmb,Uid:48d6879b-40c7-4fb4-9137-f94a1e0bf631,Namespace:kube-system,Attempt:5,} returns sandbox id \"b48b2b049f4a486353fc8d15efbd6d3a4d4faa6681e1cc29ba61ede1bd1d7565\"" Feb 13 15:28:10.463722 kubelet[2521]: E0213 15:28:10.463693 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:10.468049 containerd[1438]: time="2025-02-13T15:28:10.467546025Z" level=info msg="CreateContainer within sandbox \"b48b2b049f4a486353fc8d15efbd6d3a4d4faa6681e1cc29ba61ede1bd1d7565\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:28:10.486578 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2402201863.mount: Deactivated successfully. 
Feb 13 15:28:10.492213 containerd[1438]: time="2025-02-13T15:28:10.492171093Z" level=info msg="CreateContainer within sandbox \"b48b2b049f4a486353fc8d15efbd6d3a4d4faa6681e1cc29ba61ede1bd1d7565\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a8f72ad7e46b90c31023e0540cb3faef550d6d8bfb5070f3fbe0dc6b8477feae\"" Feb 13 15:28:10.493088 containerd[1438]: time="2025-02-13T15:28:10.492896265Z" level=info msg="StartContainer for \"a8f72ad7e46b90c31023e0540cb3faef550d6d8bfb5070f3fbe0dc6b8477feae\"" Feb 13 15:28:10.497050 containerd[1438]: time="2025-02-13T15:28:10.495528087Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:28:10.497050 containerd[1438]: time="2025-02-13T15:28:10.496740694Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:28:10.497050 containerd[1438]: time="2025-02-13T15:28:10.496756178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:28:10.497050 containerd[1438]: time="2025-02-13T15:28:10.496863003Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:28:10.516129 sshd[4736]: Accepted publickey for core from 10.0.0.1 port 51786 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:28:10.517407 sshd-session[4736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:28:10.518295 systemd[1]: Started cri-containerd-d09cb0b9e2e7415fdfd22f19afac1b701d2c1daf1d7b31b1620a85e01846140d.scope - libcontainer container d09cb0b9e2e7415fdfd22f19afac1b701d2c1daf1d7b31b1620a85e01846140d. Feb 13 15:28:10.527372 systemd-logind[1424]: New session 8 of user core. 
Feb 13 15:28:10.530040 systemd[1]: Started cri-containerd-a8f72ad7e46b90c31023e0540cb3faef550d6d8bfb5070f3fbe0dc6b8477feae.scope - libcontainer container a8f72ad7e46b90c31023e0540cb3faef550d6d8bfb5070f3fbe0dc6b8477feae. Feb 13 15:28:10.531218 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 15:28:10.541403 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:28:10.559846 systemd-networkd[1373]: cali85046783ee6: Link UP Feb 13 15:28:10.560034 systemd-networkd[1373]: cali85046783ee6: Gained carrier Feb 13 15:28:10.580635 containerd[1438]: 2025-02-13 15:28:09.751 [INFO][4597] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:28:10.580635 containerd[1438]: 2025-02-13 15:28:09.780 [INFO][4597] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--55bbcccb65--78qv4-eth0 calico-apiserver-55bbcccb65- calico-apiserver db472e10-e2a5-49de-9955-0d1cf7adcfd6 691 0 2025-02-13 15:27:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:55bbcccb65 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-55bbcccb65-78qv4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali85046783ee6 [] []}} ContainerID="7841690efaff621fba8207e5072d265eb6c9be7978ddf33f7ef4d0cac0dae0cf" Namespace="calico-apiserver" Pod="calico-apiserver-55bbcccb65-78qv4" WorkloadEndpoint="localhost-k8s-calico--apiserver--55bbcccb65--78qv4-" Feb 13 15:28:10.580635 containerd[1438]: 2025-02-13 15:28:09.786 [INFO][4597] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7841690efaff621fba8207e5072d265eb6c9be7978ddf33f7ef4d0cac0dae0cf" Namespace="calico-apiserver" 
Pod="calico-apiserver-55bbcccb65-78qv4" WorkloadEndpoint="localhost-k8s-calico--apiserver--55bbcccb65--78qv4-eth0" Feb 13 15:28:10.580635 containerd[1438]: 2025-02-13 15:28:10.115 [INFO][4629] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7841690efaff621fba8207e5072d265eb6c9be7978ddf33f7ef4d0cac0dae0cf" HandleID="k8s-pod-network.7841690efaff621fba8207e5072d265eb6c9be7978ddf33f7ef4d0cac0dae0cf" Workload="localhost-k8s-calico--apiserver--55bbcccb65--78qv4-eth0" Feb 13 15:28:10.580635 containerd[1438]: 2025-02-13 15:28:10.253 [INFO][4629] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7841690efaff621fba8207e5072d265eb6c9be7978ddf33f7ef4d0cac0dae0cf" HandleID="k8s-pod-network.7841690efaff621fba8207e5072d265eb6c9be7978ddf33f7ef4d0cac0dae0cf" Workload="localhost-k8s-calico--apiserver--55bbcccb65--78qv4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400044aa60), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-55bbcccb65-78qv4", "timestamp":"2025-02-13 15:28:10.111419139 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:28:10.580635 containerd[1438]: 2025-02-13 15:28:10.263 [INFO][4629] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:28:10.580635 containerd[1438]: 2025-02-13 15:28:10.437 [INFO][4629] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 15:28:10.580635 containerd[1438]: 2025-02-13 15:28:10.437 [INFO][4629] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 15:28:10.580635 containerd[1438]: 2025-02-13 15:28:10.470 [INFO][4629] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7841690efaff621fba8207e5072d265eb6c9be7978ddf33f7ef4d0cac0dae0cf" host="localhost" Feb 13 15:28:10.580635 containerd[1438]: 2025-02-13 15:28:10.496 [INFO][4629] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 15:28:10.580635 containerd[1438]: 2025-02-13 15:28:10.511 [INFO][4629] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 15:28:10.580635 containerd[1438]: 2025-02-13 15:28:10.513 [INFO][4629] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 15:28:10.580635 containerd[1438]: 2025-02-13 15:28:10.516 [INFO][4629] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 15:28:10.580635 containerd[1438]: 2025-02-13 15:28:10.516 [INFO][4629] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7841690efaff621fba8207e5072d265eb6c9be7978ddf33f7ef4d0cac0dae0cf" host="localhost" Feb 13 15:28:10.580635 containerd[1438]: 2025-02-13 15:28:10.521 [INFO][4629] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7841690efaff621fba8207e5072d265eb6c9be7978ddf33f7ef4d0cac0dae0cf Feb 13 15:28:10.580635 containerd[1438]: 2025-02-13 15:28:10.528 [INFO][4629] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7841690efaff621fba8207e5072d265eb6c9be7978ddf33f7ef4d0cac0dae0cf" host="localhost" Feb 13 15:28:10.580635 containerd[1438]: 2025-02-13 15:28:10.553 [INFO][4629] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.7841690efaff621fba8207e5072d265eb6c9be7978ddf33f7ef4d0cac0dae0cf" host="localhost" Feb 13 15:28:10.580635 containerd[1438]: 2025-02-13 15:28:10.553 [INFO][4629] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.7841690efaff621fba8207e5072d265eb6c9be7978ddf33f7ef4d0cac0dae0cf" host="localhost" Feb 13 15:28:10.580635 containerd[1438]: 2025-02-13 15:28:10.553 [INFO][4629] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 15:28:10.580635 containerd[1438]: 2025-02-13 15:28:10.553 [INFO][4629] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="7841690efaff621fba8207e5072d265eb6c9be7978ddf33f7ef4d0cac0dae0cf" HandleID="k8s-pod-network.7841690efaff621fba8207e5072d265eb6c9be7978ddf33f7ef4d0cac0dae0cf" Workload="localhost-k8s-calico--apiserver--55bbcccb65--78qv4-eth0" Feb 13 15:28:10.581436 containerd[1438]: 2025-02-13 15:28:10.558 [INFO][4597] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7841690efaff621fba8207e5072d265eb6c9be7978ddf33f7ef4d0cac0dae0cf" Namespace="calico-apiserver" Pod="calico-apiserver-55bbcccb65-78qv4" WorkloadEndpoint="localhost-k8s-calico--apiserver--55bbcccb65--78qv4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55bbcccb65--78qv4-eth0", GenerateName:"calico-apiserver-55bbcccb65-", Namespace:"calico-apiserver", SelfLink:"", UID:"db472e10-e2a5-49de-9955-0d1cf7adcfd6", ResourceVersion:"691", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 27, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55bbcccb65", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-55bbcccb65-78qv4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali85046783ee6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:28:10.581436 containerd[1438]: 2025-02-13 15:28:10.558 [INFO][4597] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="7841690efaff621fba8207e5072d265eb6c9be7978ddf33f7ef4d0cac0dae0cf" Namespace="calico-apiserver" Pod="calico-apiserver-55bbcccb65-78qv4" WorkloadEndpoint="localhost-k8s-calico--apiserver--55bbcccb65--78qv4-eth0" Feb 13 15:28:10.581436 containerd[1438]: 2025-02-13 15:28:10.558 [INFO][4597] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali85046783ee6 ContainerID="7841690efaff621fba8207e5072d265eb6c9be7978ddf33f7ef4d0cac0dae0cf" Namespace="calico-apiserver" Pod="calico-apiserver-55bbcccb65-78qv4" WorkloadEndpoint="localhost-k8s-calico--apiserver--55bbcccb65--78qv4-eth0" Feb 13 15:28:10.581436 containerd[1438]: 2025-02-13 15:28:10.560 [INFO][4597] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7841690efaff621fba8207e5072d265eb6c9be7978ddf33f7ef4d0cac0dae0cf" Namespace="calico-apiserver" Pod="calico-apiserver-55bbcccb65-78qv4" WorkloadEndpoint="localhost-k8s-calico--apiserver--55bbcccb65--78qv4-eth0" Feb 13 15:28:10.581436 containerd[1438]: 2025-02-13 15:28:10.560 [INFO][4597] cni-plugin/k8s.go 414: Added Mac, interface name, and active 
container ID to endpoint ContainerID="7841690efaff621fba8207e5072d265eb6c9be7978ddf33f7ef4d0cac0dae0cf" Namespace="calico-apiserver" Pod="calico-apiserver-55bbcccb65-78qv4" WorkloadEndpoint="localhost-k8s-calico--apiserver--55bbcccb65--78qv4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55bbcccb65--78qv4-eth0", GenerateName:"calico-apiserver-55bbcccb65-", Namespace:"calico-apiserver", SelfLink:"", UID:"db472e10-e2a5-49de-9955-0d1cf7adcfd6", ResourceVersion:"691", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 27, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55bbcccb65", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7841690efaff621fba8207e5072d265eb6c9be7978ddf33f7ef4d0cac0dae0cf", Pod:"calico-apiserver-55bbcccb65-78qv4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali85046783ee6", MAC:"6e:47:f5:0d:7f:f0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:28:10.581436 containerd[1438]: 2025-02-13 15:28:10.573 [INFO][4597] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="7841690efaff621fba8207e5072d265eb6c9be7978ddf33f7ef4d0cac0dae0cf" Namespace="calico-apiserver" Pod="calico-apiserver-55bbcccb65-78qv4" WorkloadEndpoint="localhost-k8s-calico--apiserver--55bbcccb65--78qv4-eth0" Feb 13 15:28:10.597967 containerd[1438]: time="2025-02-13T15:28:10.597899116Z" level=info msg="StartContainer for \"a8f72ad7e46b90c31023e0540cb3faef550d6d8bfb5070f3fbe0dc6b8477feae\" returns successfully" Feb 13 15:28:10.600460 containerd[1438]: time="2025-02-13T15:28:10.598051512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xrphc,Uid:be132fa9-3a1c-4777-b6f8-2618a1865453,Namespace:kube-system,Attempt:5,} returns sandbox id \"d09cb0b9e2e7415fdfd22f19afac1b701d2c1daf1d7b31b1620a85e01846140d\"" Feb 13 15:28:10.608057 kubelet[2521]: E0213 15:28:10.607875 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:10.615539 containerd[1438]: time="2025-02-13T15:28:10.615460112Z" level=info msg="CreateContainer within sandbox \"d09cb0b9e2e7415fdfd22f19afac1b701d2c1daf1d7b31b1620a85e01846140d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:28:10.645848 kubelet[2521]: I0213 15:28:10.645521 2521 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 15:28:10.646619 kubelet[2521]: E0213 15:28:10.646099 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:10.646619 kubelet[2521]: E0213 15:28:10.646066 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:10.663617 containerd[1438]: time="2025-02-13T15:28:10.663536691Z" level=info msg="CreateContainer within sandbox 
\"d09cb0b9e2e7415fdfd22f19afac1b701d2c1daf1d7b31b1620a85e01846140d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d7226331e24ef95ff1a769cacc6f71b7697c4a7102e40b827e86e0357624b383\"" Feb 13 15:28:10.667967 containerd[1438]: time="2025-02-13T15:28:10.666151310Z" level=info msg="StartContainer for \"d7226331e24ef95ff1a769cacc6f71b7697c4a7102e40b827e86e0357624b383\"" Feb 13 15:28:10.681688 systemd-networkd[1373]: calia6753ea763c: Link UP Feb 13 15:28:10.682087 systemd-networkd[1373]: calia6753ea763c: Gained carrier Feb 13 15:28:10.718661 kubelet[2521]: I0213 15:28:10.718578 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-bhdmb" podStartSLOduration=22.718559073 podStartE2EDuration="22.718559073s" podCreationTimestamp="2025-02-13 15:27:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:28:10.67050694 +0000 UTC m=+28.394004979" watchObservedRunningTime="2025-02-13 15:28:10.718559073 +0000 UTC m=+28.442057112" Feb 13 15:28:10.723936 containerd[1438]: time="2025-02-13T15:28:10.723837682Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:28:10.723936 containerd[1438]: time="2025-02-13T15:28:10.723900977Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:28:10.723936 containerd[1438]: time="2025-02-13T15:28:10.723916981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:28:10.724169 containerd[1438]: time="2025-02-13T15:28:10.723998040Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:28:10.728345 containerd[1438]: 2025-02-13 15:28:09.780 [INFO][4609] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:28:10.728345 containerd[1438]: 2025-02-13 15:28:09.803 [INFO][4609] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6c68d99f8f--wlrdk-eth0 calico-kube-controllers-6c68d99f8f- calico-system b17d9265-9df2-40dd-a8b7-46383a0e17ce 688 0 2025-02-13 15:27:55 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6c68d99f8f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6c68d99f8f-wlrdk eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia6753ea763c [] []}} ContainerID="895343bb39d03f12c10c91f3a782c6bf012c7ef45a2f0a04d219281f35a4bc7c" Namespace="calico-system" Pod="calico-kube-controllers-6c68d99f8f-wlrdk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6c68d99f8f--wlrdk-" Feb 13 15:28:10.728345 containerd[1438]: 2025-02-13 15:28:09.810 [INFO][4609] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="895343bb39d03f12c10c91f3a782c6bf012c7ef45a2f0a04d219281f35a4bc7c" Namespace="calico-system" Pod="calico-kube-controllers-6c68d99f8f-wlrdk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6c68d99f8f--wlrdk-eth0" Feb 13 15:28:10.728345 containerd[1438]: 2025-02-13 15:28:10.119 [INFO][4645] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="895343bb39d03f12c10c91f3a782c6bf012c7ef45a2f0a04d219281f35a4bc7c" HandleID="k8s-pod-network.895343bb39d03f12c10c91f3a782c6bf012c7ef45a2f0a04d219281f35a4bc7c" 
Workload="localhost-k8s-calico--kube--controllers--6c68d99f8f--wlrdk-eth0" Feb 13 15:28:10.728345 containerd[1438]: 2025-02-13 15:28:10.265 [INFO][4645] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="895343bb39d03f12c10c91f3a782c6bf012c7ef45a2f0a04d219281f35a4bc7c" HandleID="k8s-pod-network.895343bb39d03f12c10c91f3a782c6bf012c7ef45a2f0a04d219281f35a4bc7c" Workload="localhost-k8s-calico--kube--controllers--6c68d99f8f--wlrdk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40005ce3d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6c68d99f8f-wlrdk", "timestamp":"2025-02-13 15:28:10.119062708 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:28:10.728345 containerd[1438]: 2025-02-13 15:28:10.266 [INFO][4645] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:28:10.728345 containerd[1438]: 2025-02-13 15:28:10.553 [INFO][4645] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 15:28:10.728345 containerd[1438]: 2025-02-13 15:28:10.553 [INFO][4645] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 15:28:10.728345 containerd[1438]: 2025-02-13 15:28:10.569 [INFO][4645] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.895343bb39d03f12c10c91f3a782c6bf012c7ef45a2f0a04d219281f35a4bc7c" host="localhost" Feb 13 15:28:10.728345 containerd[1438]: 2025-02-13 15:28:10.597 [INFO][4645] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 15:28:10.728345 containerd[1438]: 2025-02-13 15:28:10.619 [INFO][4645] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 15:28:10.728345 containerd[1438]: 2025-02-13 15:28:10.624 [INFO][4645] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 15:28:10.728345 containerd[1438]: 2025-02-13 15:28:10.637 [INFO][4645] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 15:28:10.728345 containerd[1438]: 2025-02-13 15:28:10.637 [INFO][4645] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.895343bb39d03f12c10c91f3a782c6bf012c7ef45a2f0a04d219281f35a4bc7c" host="localhost" Feb 13 15:28:10.728345 containerd[1438]: 2025-02-13 15:28:10.641 [INFO][4645] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.895343bb39d03f12c10c91f3a782c6bf012c7ef45a2f0a04d219281f35a4bc7c Feb 13 15:28:10.728345 containerd[1438]: 2025-02-13 15:28:10.648 [INFO][4645] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.895343bb39d03f12c10c91f3a782c6bf012c7ef45a2f0a04d219281f35a4bc7c" host="localhost" Feb 13 15:28:10.728345 containerd[1438]: 2025-02-13 15:28:10.660 [INFO][4645] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.895343bb39d03f12c10c91f3a782c6bf012c7ef45a2f0a04d219281f35a4bc7c" host="localhost" Feb 13 15:28:10.728345 containerd[1438]: 2025-02-13 15:28:10.661 [INFO][4645] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.895343bb39d03f12c10c91f3a782c6bf012c7ef45a2f0a04d219281f35a4bc7c" host="localhost" Feb 13 15:28:10.728345 containerd[1438]: 2025-02-13 15:28:10.662 [INFO][4645] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 15:28:10.728345 containerd[1438]: 2025-02-13 15:28:10.662 [INFO][4645] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="895343bb39d03f12c10c91f3a782c6bf012c7ef45a2f0a04d219281f35a4bc7c" HandleID="k8s-pod-network.895343bb39d03f12c10c91f3a782c6bf012c7ef45a2f0a04d219281f35a4bc7c" Workload="localhost-k8s-calico--kube--controllers--6c68d99f8f--wlrdk-eth0" Feb 13 15:28:10.728867 containerd[1438]: 2025-02-13 15:28:10.677 [INFO][4609] cni-plugin/k8s.go 386: Populated endpoint ContainerID="895343bb39d03f12c10c91f3a782c6bf012c7ef45a2f0a04d219281f35a4bc7c" Namespace="calico-system" Pod="calico-kube-controllers-6c68d99f8f-wlrdk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6c68d99f8f--wlrdk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6c68d99f8f--wlrdk-eth0", GenerateName:"calico-kube-controllers-6c68d99f8f-", Namespace:"calico-system", SelfLink:"", UID:"b17d9265-9df2-40dd-a8b7-46383a0e17ce", ResourceVersion:"688", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 27, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c68d99f8f", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6c68d99f8f-wlrdk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia6753ea763c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:28:10.728867 containerd[1438]: 2025-02-13 15:28:10.677 [INFO][4609] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="895343bb39d03f12c10c91f3a782c6bf012c7ef45a2f0a04d219281f35a4bc7c" Namespace="calico-system" Pod="calico-kube-controllers-6c68d99f8f-wlrdk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6c68d99f8f--wlrdk-eth0" Feb 13 15:28:10.728867 containerd[1438]: 2025-02-13 15:28:10.677 [INFO][4609] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia6753ea763c ContainerID="895343bb39d03f12c10c91f3a782c6bf012c7ef45a2f0a04d219281f35a4bc7c" Namespace="calico-system" Pod="calico-kube-controllers-6c68d99f8f-wlrdk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6c68d99f8f--wlrdk-eth0" Feb 13 15:28:10.728867 containerd[1438]: 2025-02-13 15:28:10.681 [INFO][4609] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="895343bb39d03f12c10c91f3a782c6bf012c7ef45a2f0a04d219281f35a4bc7c" Namespace="calico-system" Pod="calico-kube-controllers-6c68d99f8f-wlrdk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6c68d99f8f--wlrdk-eth0" Feb 13 15:28:10.728867 containerd[1438]: 2025-02-13 15:28:10.685 [INFO][4609] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="895343bb39d03f12c10c91f3a782c6bf012c7ef45a2f0a04d219281f35a4bc7c" Namespace="calico-system" Pod="calico-kube-controllers-6c68d99f8f-wlrdk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6c68d99f8f--wlrdk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6c68d99f8f--wlrdk-eth0", GenerateName:"calico-kube-controllers-6c68d99f8f-", Namespace:"calico-system", SelfLink:"", UID:"b17d9265-9df2-40dd-a8b7-46383a0e17ce", ResourceVersion:"688", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 27, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c68d99f8f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"895343bb39d03f12c10c91f3a782c6bf012c7ef45a2f0a04d219281f35a4bc7c", Pod:"calico-kube-controllers-6c68d99f8f-wlrdk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia6753ea763c", MAC:"06:64:f6:dc:e6:18", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:28:10.728867 containerd[1438]: 2025-02-13 15:28:10.717 [INFO][4609] cni-plugin/k8s.go 500: Wrote 
updated endpoint to datastore ContainerID="895343bb39d03f12c10c91f3a782c6bf012c7ef45a2f0a04d219281f35a4bc7c" Namespace="calico-system" Pod="calico-kube-controllers-6c68d99f8f-wlrdk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6c68d99f8f--wlrdk-eth0" Feb 13 15:28:10.738610 systemd[1]: Started cri-containerd-d7226331e24ef95ff1a769cacc6f71b7697c4a7102e40b827e86e0357624b383.scope - libcontainer container d7226331e24ef95ff1a769cacc6f71b7697c4a7102e40b827e86e0357624b383. Feb 13 15:28:10.767325 systemd[1]: Started cri-containerd-7841690efaff621fba8207e5072d265eb6c9be7978ddf33f7ef4d0cac0dae0cf.scope - libcontainer container 7841690efaff621fba8207e5072d265eb6c9be7978ddf33f7ef4d0cac0dae0cf. Feb 13 15:28:10.821611 systemd-networkd[1373]: calib44ad93cf55: Link UP Feb 13 15:28:10.821854 systemd-networkd[1373]: calib44ad93cf55: Gained carrier Feb 13 15:28:10.824194 sshd[4806]: Connection closed by 10.0.0.1 port 51786 Feb 13 15:28:10.823529 sshd-session[4736]: pam_unix(sshd:session): session closed for user core Feb 13 15:28:10.830394 containerd[1438]: time="2025-02-13T15:28:10.825206434Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:28:10.830394 containerd[1438]: time="2025-02-13T15:28:10.825266848Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:28:10.830394 containerd[1438]: time="2025-02-13T15:28:10.825280011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:28:10.830394 containerd[1438]: time="2025-02-13T15:28:10.825374754Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:28:10.837601 systemd[1]: sshd@7-10.0.0.93:22-10.0.0.1:51786.service: Deactivated successfully. 
Feb 13 15:28:10.839739 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 15:28:10.850002 systemd-logind[1424]: Session 8 logged out. Waiting for processes to exit. Feb 13 15:28:10.856457 containerd[1438]: 2025-02-13 15:28:09.668 [INFO][4543] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:28:10.856457 containerd[1438]: 2025-02-13 15:28:09.790 [INFO][4543] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--55bbcccb65--mmhbv-eth0 calico-apiserver-55bbcccb65- calico-apiserver 2330a284-0835-4d9f-929e-909c050006b6 696 0 2025-02-13 15:27:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:55bbcccb65 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-55bbcccb65-mmhbv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib44ad93cf55 [] []}} ContainerID="d73a3125696c192081e963d012d35642881c283a662d24cae9cc4a497272dab5" Namespace="calico-apiserver" Pod="calico-apiserver-55bbcccb65-mmhbv" WorkloadEndpoint="localhost-k8s-calico--apiserver--55bbcccb65--mmhbv-" Feb 13 15:28:10.856457 containerd[1438]: 2025-02-13 15:28:09.790 [INFO][4543] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d73a3125696c192081e963d012d35642881c283a662d24cae9cc4a497272dab5" Namespace="calico-apiserver" Pod="calico-apiserver-55bbcccb65-mmhbv" WorkloadEndpoint="localhost-k8s-calico--apiserver--55bbcccb65--mmhbv-eth0" Feb 13 15:28:10.856457 containerd[1438]: 2025-02-13 15:28:10.126 [INFO][4633] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d73a3125696c192081e963d012d35642881c283a662d24cae9cc4a497272dab5" HandleID="k8s-pod-network.d73a3125696c192081e963d012d35642881c283a662d24cae9cc4a497272dab5" 
Workload="localhost-k8s-calico--apiserver--55bbcccb65--mmhbv-eth0" Feb 13 15:28:10.856457 containerd[1438]: 2025-02-13 15:28:10.266 [INFO][4633] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d73a3125696c192081e963d012d35642881c283a662d24cae9cc4a497272dab5" HandleID="k8s-pod-network.d73a3125696c192081e963d012d35642881c283a662d24cae9cc4a497272dab5" Workload="localhost-k8s-calico--apiserver--55bbcccb65--mmhbv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000322af0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-55bbcccb65-mmhbv", "timestamp":"2025-02-13 15:28:10.126414408 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:28:10.856457 containerd[1438]: 2025-02-13 15:28:10.267 [INFO][4633] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:28:10.856457 containerd[1438]: 2025-02-13 15:28:10.662 [INFO][4633] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 15:28:10.856457 containerd[1438]: 2025-02-13 15:28:10.664 [INFO][4633] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 15:28:10.856457 containerd[1438]: 2025-02-13 15:28:10.701 [INFO][4633] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d73a3125696c192081e963d012d35642881c283a662d24cae9cc4a497272dab5" host="localhost" Feb 13 15:28:10.856457 containerd[1438]: 2025-02-13 15:28:10.714 [INFO][4633] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 15:28:10.856457 containerd[1438]: 2025-02-13 15:28:10.729 [INFO][4633] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 15:28:10.856457 containerd[1438]: 2025-02-13 15:28:10.734 [INFO][4633] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 15:28:10.856457 containerd[1438]: 2025-02-13 15:28:10.740 [INFO][4633] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 15:28:10.856457 containerd[1438]: 2025-02-13 15:28:10.740 [INFO][4633] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d73a3125696c192081e963d012d35642881c283a662d24cae9cc4a497272dab5" host="localhost" Feb 13 15:28:10.856457 containerd[1438]: 2025-02-13 15:28:10.745 [INFO][4633] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d73a3125696c192081e963d012d35642881c283a662d24cae9cc4a497272dab5 Feb 13 15:28:10.856457 containerd[1438]: 2025-02-13 15:28:10.756 [INFO][4633] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d73a3125696c192081e963d012d35642881c283a662d24cae9cc4a497272dab5" host="localhost" Feb 13 15:28:10.856457 containerd[1438]: 2025-02-13 15:28:10.785 [INFO][4633] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.d73a3125696c192081e963d012d35642881c283a662d24cae9cc4a497272dab5" host="localhost" Feb 13 15:28:10.856457 containerd[1438]: 2025-02-13 15:28:10.785 [INFO][4633] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.d73a3125696c192081e963d012d35642881c283a662d24cae9cc4a497272dab5" host="localhost" Feb 13 15:28:10.856457 containerd[1438]: 2025-02-13 15:28:10.785 [INFO][4633] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 15:28:10.856457 containerd[1438]: 2025-02-13 15:28:10.785 [INFO][4633] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="d73a3125696c192081e963d012d35642881c283a662d24cae9cc4a497272dab5" HandleID="k8s-pod-network.d73a3125696c192081e963d012d35642881c283a662d24cae9cc4a497272dab5" Workload="localhost-k8s-calico--apiserver--55bbcccb65--mmhbv-eth0" Feb 13 15:28:10.857000 containerd[1438]: 2025-02-13 15:28:10.794 [INFO][4543] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d73a3125696c192081e963d012d35642881c283a662d24cae9cc4a497272dab5" Namespace="calico-apiserver" Pod="calico-apiserver-55bbcccb65-mmhbv" WorkloadEndpoint="localhost-k8s-calico--apiserver--55bbcccb65--mmhbv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55bbcccb65--mmhbv-eth0", GenerateName:"calico-apiserver-55bbcccb65-", Namespace:"calico-apiserver", SelfLink:"", UID:"2330a284-0835-4d9f-929e-909c050006b6", ResourceVersion:"696", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 27, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55bbcccb65", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-55bbcccb65-mmhbv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib44ad93cf55", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:28:10.857000 containerd[1438]: 2025-02-13 15:28:10.794 [INFO][4543] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="d73a3125696c192081e963d012d35642881c283a662d24cae9cc4a497272dab5" Namespace="calico-apiserver" Pod="calico-apiserver-55bbcccb65-mmhbv" WorkloadEndpoint="localhost-k8s-calico--apiserver--55bbcccb65--mmhbv-eth0" Feb 13 15:28:10.857000 containerd[1438]: 2025-02-13 15:28:10.794 [INFO][4543] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib44ad93cf55 ContainerID="d73a3125696c192081e963d012d35642881c283a662d24cae9cc4a497272dab5" Namespace="calico-apiserver" Pod="calico-apiserver-55bbcccb65-mmhbv" WorkloadEndpoint="localhost-k8s-calico--apiserver--55bbcccb65--mmhbv-eth0" Feb 13 15:28:10.857000 containerd[1438]: 2025-02-13 15:28:10.821 [INFO][4543] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d73a3125696c192081e963d012d35642881c283a662d24cae9cc4a497272dab5" Namespace="calico-apiserver" Pod="calico-apiserver-55bbcccb65-mmhbv" WorkloadEndpoint="localhost-k8s-calico--apiserver--55bbcccb65--mmhbv-eth0" Feb 13 15:28:10.857000 containerd[1438]: 2025-02-13 15:28:10.823 [INFO][4543] cni-plugin/k8s.go 414: Added Mac, interface name, and active 
container ID to endpoint ContainerID="d73a3125696c192081e963d012d35642881c283a662d24cae9cc4a497272dab5" Namespace="calico-apiserver" Pod="calico-apiserver-55bbcccb65-mmhbv" WorkloadEndpoint="localhost-k8s-calico--apiserver--55bbcccb65--mmhbv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55bbcccb65--mmhbv-eth0", GenerateName:"calico-apiserver-55bbcccb65-", Namespace:"calico-apiserver", SelfLink:"", UID:"2330a284-0835-4d9f-929e-909c050006b6", ResourceVersion:"696", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 27, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55bbcccb65", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d73a3125696c192081e963d012d35642881c283a662d24cae9cc4a497272dab5", Pod:"calico-apiserver-55bbcccb65-mmhbv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib44ad93cf55", MAC:"de:a2:ed:1c:82:3d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:28:10.857000 containerd[1438]: 2025-02-13 15:28:10.846 [INFO][4543] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="d73a3125696c192081e963d012d35642881c283a662d24cae9cc4a497272dab5" Namespace="calico-apiserver" Pod="calico-apiserver-55bbcccb65-mmhbv" WorkloadEndpoint="localhost-k8s-calico--apiserver--55bbcccb65--mmhbv-eth0" Feb 13 15:28:10.861106 containerd[1438]: time="2025-02-13T15:28:10.859617938Z" level=info msg="StartContainer for \"d7226331e24ef95ff1a769cacc6f71b7697c4a7102e40b827e86e0357624b383\" returns successfully" Feb 13 15:28:10.864910 systemd[1]: Started cri-containerd-895343bb39d03f12c10c91f3a782c6bf012c7ef45a2f0a04d219281f35a4bc7c.scope - libcontainer container 895343bb39d03f12c10c91f3a782c6bf012c7ef45a2f0a04d219281f35a4bc7c. Feb 13 15:28:10.867810 systemd-logind[1424]: Removed session 8. Feb 13 15:28:10.890297 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:28:10.907321 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:28:10.916565 containerd[1438]: time="2025-02-13T15:28:10.916465192Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:28:10.916565 containerd[1438]: time="2025-02-13T15:28:10.916534889Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:28:10.916565 containerd[1438]: time="2025-02-13T15:28:10.916556454Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:28:10.916821 containerd[1438]: time="2025-02-13T15:28:10.916653797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:28:10.943442 systemd-networkd[1373]: calic83e5e72070: Link UP Feb 13 15:28:10.943801 systemd-networkd[1373]: calic83e5e72070: Gained carrier Feb 13 15:28:10.945902 containerd[1438]: time="2025-02-13T15:28:10.944860793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55bbcccb65-78qv4,Uid:db472e10-e2a5-49de-9955-0d1cf7adcfd6,Namespace:calico-apiserver,Attempt:5,} returns sandbox id \"7841690efaff621fba8207e5072d265eb6c9be7978ddf33f7ef4d0cac0dae0cf\"" Feb 13 15:28:10.954299 containerd[1438]: time="2025-02-13T15:28:10.953397493Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 15:28:10.973115 systemd[1]: Started cri-containerd-d73a3125696c192081e963d012d35642881c283a662d24cae9cc4a497272dab5.scope - libcontainer container d73a3125696c192081e963d012d35642881c283a662d24cae9cc4a497272dab5. Feb 13 15:28:10.979765 containerd[1438]: 2025-02-13 15:28:09.720 [INFO][4569] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:28:10.979765 containerd[1438]: 2025-02-13 15:28:09.789 [INFO][4569] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--hmqx4-eth0 csi-node-driver- calico-system d3daff69-f8cf-4771-8db4-eb9251b67560 602 0 2025-02-13 15:27:55 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-hmqx4 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calic83e5e72070 [] []}} ContainerID="5bb5e1e05bfd11a3bc5310e0ea50612d62ea00979d4920d14c0ff1ea636f3d64" Namespace="calico-system" Pod="csi-node-driver-hmqx4" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--hmqx4-" Feb 13 15:28:10.979765 containerd[1438]: 2025-02-13 15:28:09.791 [INFO][4569] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5bb5e1e05bfd11a3bc5310e0ea50612d62ea00979d4920d14c0ff1ea636f3d64" Namespace="calico-system" Pod="csi-node-driver-hmqx4" WorkloadEndpoint="localhost-k8s-csi--node--driver--hmqx4-eth0" Feb 13 15:28:10.979765 containerd[1438]: 2025-02-13 15:28:10.115 [INFO][4634] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5bb5e1e05bfd11a3bc5310e0ea50612d62ea00979d4920d14c0ff1ea636f3d64" HandleID="k8s-pod-network.5bb5e1e05bfd11a3bc5310e0ea50612d62ea00979d4920d14c0ff1ea636f3d64" Workload="localhost-k8s-csi--node--driver--hmqx4-eth0" Feb 13 15:28:10.979765 containerd[1438]: 2025-02-13 15:28:10.253 [INFO][4634] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5bb5e1e05bfd11a3bc5310e0ea50612d62ea00979d4920d14c0ff1ea636f3d64" HandleID="k8s-pod-network.5bb5e1e05bfd11a3bc5310e0ea50612d62ea00979d4920d14c0ff1ea636f3d64" Workload="localhost-k8s-csi--node--driver--hmqx4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400037fb20), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-hmqx4", "timestamp":"2025-02-13 15:28:10.11324169 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:28:10.979765 containerd[1438]: 2025-02-13 15:28:10.267 [INFO][4634] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:28:10.979765 containerd[1438]: 2025-02-13 15:28:10.786 [INFO][4634] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 15:28:10.979765 containerd[1438]: 2025-02-13 15:28:10.786 [INFO][4634] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 15:28:10.979765 containerd[1438]: 2025-02-13 15:28:10.809 [INFO][4634] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5bb5e1e05bfd11a3bc5310e0ea50612d62ea00979d4920d14c0ff1ea636f3d64" host="localhost" Feb 13 15:28:10.979765 containerd[1438]: 2025-02-13 15:28:10.847 [INFO][4634] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 15:28:10.979765 containerd[1438]: 2025-02-13 15:28:10.864 [INFO][4634] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 15:28:10.979765 containerd[1438]: 2025-02-13 15:28:10.885 [INFO][4634] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 15:28:10.979765 containerd[1438]: 2025-02-13 15:28:10.889 [INFO][4634] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 15:28:10.979765 containerd[1438]: 2025-02-13 15:28:10.891 [INFO][4634] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5bb5e1e05bfd11a3bc5310e0ea50612d62ea00979d4920d14c0ff1ea636f3d64" host="localhost" Feb 13 15:28:10.979765 containerd[1438]: 2025-02-13 15:28:10.894 [INFO][4634] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5bb5e1e05bfd11a3bc5310e0ea50612d62ea00979d4920d14c0ff1ea636f3d64 Feb 13 15:28:10.979765 containerd[1438]: 2025-02-13 15:28:10.908 [INFO][4634] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5bb5e1e05bfd11a3bc5310e0ea50612d62ea00979d4920d14c0ff1ea636f3d64" host="localhost" Feb 13 15:28:10.979765 containerd[1438]: 2025-02-13 15:28:10.924 [INFO][4634] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.5bb5e1e05bfd11a3bc5310e0ea50612d62ea00979d4920d14c0ff1ea636f3d64" host="localhost" Feb 13 15:28:10.979765 containerd[1438]: 2025-02-13 15:28:10.925 [INFO][4634] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.5bb5e1e05bfd11a3bc5310e0ea50612d62ea00979d4920d14c0ff1ea636f3d64" host="localhost" Feb 13 15:28:10.979765 containerd[1438]: 2025-02-13 15:28:10.926 [INFO][4634] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 15:28:10.979765 containerd[1438]: 2025-02-13 15:28:10.926 [INFO][4634] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="5bb5e1e05bfd11a3bc5310e0ea50612d62ea00979d4920d14c0ff1ea636f3d64" HandleID="k8s-pod-network.5bb5e1e05bfd11a3bc5310e0ea50612d62ea00979d4920d14c0ff1ea636f3d64" Workload="localhost-k8s-csi--node--driver--hmqx4-eth0" Feb 13 15:28:10.980493 containerd[1438]: 2025-02-13 15:28:10.940 [INFO][4569] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5bb5e1e05bfd11a3bc5310e0ea50612d62ea00979d4920d14c0ff1ea636f3d64" Namespace="calico-system" Pod="csi-node-driver-hmqx4" WorkloadEndpoint="localhost-k8s-csi--node--driver--hmqx4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--hmqx4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d3daff69-f8cf-4771-8db4-eb9251b67560", ResourceVersion:"602", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 27, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-hmqx4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic83e5e72070", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:28:10.980493 containerd[1438]: 2025-02-13 15:28:10.941 [INFO][4569] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="5bb5e1e05bfd11a3bc5310e0ea50612d62ea00979d4920d14c0ff1ea636f3d64" Namespace="calico-system" Pod="csi-node-driver-hmqx4" WorkloadEndpoint="localhost-k8s-csi--node--driver--hmqx4-eth0" Feb 13 15:28:10.980493 containerd[1438]: 2025-02-13 15:28:10.941 [INFO][4569] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic83e5e72070 ContainerID="5bb5e1e05bfd11a3bc5310e0ea50612d62ea00979d4920d14c0ff1ea636f3d64" Namespace="calico-system" Pod="csi-node-driver-hmqx4" WorkloadEndpoint="localhost-k8s-csi--node--driver--hmqx4-eth0" Feb 13 15:28:10.980493 containerd[1438]: 2025-02-13 15:28:10.944 [INFO][4569] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5bb5e1e05bfd11a3bc5310e0ea50612d62ea00979d4920d14c0ff1ea636f3d64" Namespace="calico-system" Pod="csi-node-driver-hmqx4" WorkloadEndpoint="localhost-k8s-csi--node--driver--hmqx4-eth0" Feb 13 15:28:10.980493 containerd[1438]: 2025-02-13 15:28:10.950 [INFO][4569] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5bb5e1e05bfd11a3bc5310e0ea50612d62ea00979d4920d14c0ff1ea636f3d64" Namespace="calico-system" 
Pod="csi-node-driver-hmqx4" WorkloadEndpoint="localhost-k8s-csi--node--driver--hmqx4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--hmqx4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d3daff69-f8cf-4771-8db4-eb9251b67560", ResourceVersion:"602", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 27, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5bb5e1e05bfd11a3bc5310e0ea50612d62ea00979d4920d14c0ff1ea636f3d64", Pod:"csi-node-driver-hmqx4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic83e5e72070", MAC:"72:be:68:df:3c:c1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:28:10.980493 containerd[1438]: 2025-02-13 15:28:10.971 [INFO][4569] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5bb5e1e05bfd11a3bc5310e0ea50612d62ea00979d4920d14c0ff1ea636f3d64" Namespace="calico-system" Pod="csi-node-driver-hmqx4" WorkloadEndpoint="localhost-k8s-csi--node--driver--hmqx4-eth0" Feb 13 15:28:10.980493 containerd[1438]: 
time="2025-02-13T15:28:10.979820547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c68d99f8f-wlrdk,Uid:b17d9265-9df2-40dd-a8b7-46383a0e17ce,Namespace:calico-system,Attempt:5,} returns sandbox id \"895343bb39d03f12c10c91f3a782c6bf012c7ef45a2f0a04d219281f35a4bc7c\"" Feb 13 15:28:11.012893 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:28:11.022615 containerd[1438]: time="2025-02-13T15:28:11.022144587Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:28:11.022615 containerd[1438]: time="2025-02-13T15:28:11.022218724Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:28:11.022615 containerd[1438]: time="2025-02-13T15:28:11.022233847Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:28:11.022615 containerd[1438]: time="2025-02-13T15:28:11.022398805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:28:11.059789 containerd[1438]: time="2025-02-13T15:28:11.059735128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55bbcccb65-mmhbv,Uid:2330a284-0835-4d9f-929e-909c050006b6,Namespace:calico-apiserver,Attempt:5,} returns sandbox id \"d73a3125696c192081e963d012d35642881c283a662d24cae9cc4a497272dab5\"" Feb 13 15:28:11.069370 systemd[1]: Started cri-containerd-5bb5e1e05bfd11a3bc5310e0ea50612d62ea00979d4920d14c0ff1ea636f3d64.scope - libcontainer container 5bb5e1e05bfd11a3bc5310e0ea50612d62ea00979d4920d14c0ff1ea636f3d64. 
Feb 13 15:28:11.111149 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:28:11.138021 containerd[1438]: time="2025-02-13T15:28:11.137799509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hmqx4,Uid:d3daff69-f8cf-4771-8db4-eb9251b67560,Namespace:calico-system,Attempt:5,} returns sandbox id \"5bb5e1e05bfd11a3bc5310e0ea50612d62ea00979d4920d14c0ff1ea636f3d64\"" Feb 13 15:28:11.652633 kubelet[2521]: E0213 15:28:11.652579 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:11.665227 kubelet[2521]: I0213 15:28:11.665173 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-xrphc" podStartSLOduration=23.665156613 podStartE2EDuration="23.665156613s" podCreationTimestamp="2025-02-13 15:27:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:28:11.664290416 +0000 UTC m=+29.387788495" watchObservedRunningTime="2025-02-13 15:28:11.665156613 +0000 UTC m=+29.388654652" Feb 13 15:28:11.672357 kubelet[2521]: E0213 15:28:11.672328 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:11.731200 systemd-networkd[1373]: calia6753ea763c: Gained IPv6LL Feb 13 15:28:11.731501 systemd-networkd[1373]: cali819e3fe5589: Gained IPv6LL Feb 13 15:28:12.115217 systemd-networkd[1373]: calib44ad93cf55: Gained IPv6LL Feb 13 15:28:12.244142 systemd-networkd[1373]: cali85046783ee6: Gained IPv6LL Feb 13 15:28:12.435301 systemd-networkd[1373]: calic83e5e72070: Gained IPv6LL Feb 13 15:28:12.500214 systemd-networkd[1373]: cali48fe9c7b737: Gained IPv6LL Feb 13 15:28:12.672221 
kubelet[2521]: E0213 15:28:12.672060 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:12.673390 kubelet[2521]: E0213 15:28:12.672360 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:12.870481 containerd[1438]: time="2025-02-13T15:28:12.870427834Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:12.871037 containerd[1438]: time="2025-02-13T15:28:12.870988397Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Feb 13 15:28:12.871732 containerd[1438]: time="2025-02-13T15:28:12.871696033Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:12.873711 containerd[1438]: time="2025-02-13T15:28:12.873672629Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:12.874616 containerd[1438]: time="2025-02-13T15:28:12.874486568Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 1.921047585s" Feb 13 15:28:12.874664 containerd[1438]: time="2025-02-13T15:28:12.874615797Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Feb 13 15:28:12.876096 containerd[1438]: time="2025-02-13T15:28:12.875867352Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Feb 13 15:28:12.878446 containerd[1438]: time="2025-02-13T15:28:12.878422596Z" level=info msg="CreateContainer within sandbox \"7841690efaff621fba8207e5072d265eb6c9be7978ddf33f7ef4d0cac0dae0cf\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 15:28:12.892313 containerd[1438]: time="2025-02-13T15:28:12.892262246Z" level=info msg="CreateContainer within sandbox \"7841690efaff621fba8207e5072d265eb6c9be7978ddf33f7ef4d0cac0dae0cf\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f9e7b87a5b8803505055e4bd03d13090744a5bb0445defca421e877aa60bdf61\"" Feb 13 15:28:12.892937 containerd[1438]: time="2025-02-13T15:28:12.892908589Z" level=info msg="StartContainer for \"f9e7b87a5b8803505055e4bd03d13090744a5bb0445defca421e877aa60bdf61\"" Feb 13 15:28:12.932255 systemd[1]: Started cri-containerd-f9e7b87a5b8803505055e4bd03d13090744a5bb0445defca421e877aa60bdf61.scope - libcontainer container f9e7b87a5b8803505055e4bd03d13090744a5bb0445defca421e877aa60bdf61. 
Feb 13 15:28:12.969927 containerd[1438]: time="2025-02-13T15:28:12.969792414Z" level=info msg="StartContainer for \"f9e7b87a5b8803505055e4bd03d13090744a5bb0445defca421e877aa60bdf61\" returns successfully" Feb 13 15:28:13.687194 kubelet[2521]: E0213 15:28:13.680600 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:13.689000 kubelet[2521]: E0213 15:28:13.687829 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:13.695151 kubelet[2521]: I0213 15:28:13.695064 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-55bbcccb65-78qv4" podStartSLOduration=16.772539109 podStartE2EDuration="18.695048261s" podCreationTimestamp="2025-02-13 15:27:55 +0000 UTC" firstStartedPulling="2025-02-13 15:28:10.953142073 +0000 UTC m=+28.676640112" lastFinishedPulling="2025-02-13 15:28:12.875651225 +0000 UTC m=+30.599149264" observedRunningTime="2025-02-13 15:28:13.694677061 +0000 UTC m=+31.418175141" watchObservedRunningTime="2025-02-13 15:28:13.695048261 +0000 UTC m=+31.418546300" Feb 13 15:28:14.415003 containerd[1438]: time="2025-02-13T15:28:14.414940876Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:14.416998 containerd[1438]: time="2025-02-13T15:28:14.416953771Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828" Feb 13 15:28:14.417669 containerd[1438]: time="2025-02-13T15:28:14.417639352Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 
13 15:28:14.422843 containerd[1438]: time="2025-02-13T15:28:14.421347076Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:14.422843 containerd[1438]: time="2025-02-13T15:28:14.422429660Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 1.54652914s" Feb 13 15:28:14.422843 containerd[1438]: time="2025-02-13T15:28:14.422458746Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Feb 13 15:28:14.424472 containerd[1438]: time="2025-02-13T15:28:14.424425191Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 15:28:14.433399 containerd[1438]: time="2025-02-13T15:28:14.433361753Z" level=info msg="CreateContainer within sandbox \"895343bb39d03f12c10c91f3a782c6bf012c7ef45a2f0a04d219281f35a4bc7c\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Feb 13 15:28:14.456138 containerd[1438]: time="2025-02-13T15:28:14.456053230Z" level=info msg="CreateContainer within sandbox \"895343bb39d03f12c10c91f3a782c6bf012c7ef45a2f0a04d219281f35a4bc7c\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"188d258539d302efd58ac1dc3c456416864f68f3b825bb15d088c88e687fd083\"" Feb 13 15:28:14.456790 containerd[1438]: time="2025-02-13T15:28:14.456667476Z" level=info msg="StartContainer for \"188d258539d302efd58ac1dc3c456416864f68f3b825bb15d088c88e687fd083\"" Feb 13 
15:28:14.496280 systemd[1]: Started cri-containerd-188d258539d302efd58ac1dc3c456416864f68f3b825bb15d088c88e687fd083.scope - libcontainer container 188d258539d302efd58ac1dc3c456416864f68f3b825bb15d088c88e687fd083. Feb 13 15:28:14.531455 containerd[1438]: time="2025-02-13T15:28:14.531237166Z" level=info msg="StartContainer for \"188d258539d302efd58ac1dc3c456416864f68f3b825bb15d088c88e687fd083\" returns successfully" Feb 13 15:28:14.683951 kubelet[2521]: I0213 15:28:14.683724 2521 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 15:28:14.702421 kubelet[2521]: I0213 15:28:14.702192 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6c68d99f8f-wlrdk" podStartSLOduration=16.260684549 podStartE2EDuration="19.702172078s" podCreationTimestamp="2025-02-13 15:27:55 +0000 UTC" firstStartedPulling="2025-02-13 15:28:10.982189908 +0000 UTC m=+28.705687907" lastFinishedPulling="2025-02-13 15:28:14.423677437 +0000 UTC m=+32.147175436" observedRunningTime="2025-02-13 15:28:14.701668614 +0000 UTC m=+32.425166653" watchObservedRunningTime="2025-02-13 15:28:14.702172078 +0000 UTC m=+32.425670077" Feb 13 15:28:14.872697 containerd[1438]: time="2025-02-13T15:28:14.872638133Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:14.878149 containerd[1438]: time="2025-02-13T15:28:14.878087536Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Feb 13 15:28:14.880332 containerd[1438]: time="2025-02-13T15:28:14.880283509Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest 
\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 455.67508ms" Feb 13 15:28:14.880332 containerd[1438]: time="2025-02-13T15:28:14.880321517Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Feb 13 15:28:14.881219 containerd[1438]: time="2025-02-13T15:28:14.881179653Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 15:28:14.883827 containerd[1438]: time="2025-02-13T15:28:14.883788751Z" level=info msg="CreateContainer within sandbox \"d73a3125696c192081e963d012d35642881c283a662d24cae9cc4a497272dab5\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 15:28:14.901209 containerd[1438]: time="2025-02-13T15:28:14.901148169Z" level=info msg="CreateContainer within sandbox \"d73a3125696c192081e963d012d35642881c283a662d24cae9cc4a497272dab5\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1960ad03bd98ed1f78cf49cc27ae232fc67ae0ac9b3b0dd503d48c94d4920dbf\"" Feb 13 15:28:14.901844 containerd[1438]: time="2025-02-13T15:28:14.901801784Z" level=info msg="StartContainer for \"1960ad03bd98ed1f78cf49cc27ae232fc67ae0ac9b3b0dd503d48c94d4920dbf\"" Feb 13 15:28:14.931373 systemd[1]: Started cri-containerd-1960ad03bd98ed1f78cf49cc27ae232fc67ae0ac9b3b0dd503d48c94d4920dbf.scope - libcontainer container 1960ad03bd98ed1f78cf49cc27ae232fc67ae0ac9b3b0dd503d48c94d4920dbf. 
Feb 13 15:28:14.969891 containerd[1438]: time="2025-02-13T15:28:14.969771873Z" level=info msg="StartContainer for \"1960ad03bd98ed1f78cf49cc27ae232fc67ae0ac9b3b0dd503d48c94d4920dbf\" returns successfully" Feb 13 15:28:15.699553 kubelet[2521]: I0213 15:28:15.699488 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-55bbcccb65-mmhbv" podStartSLOduration=16.880135336 podStartE2EDuration="20.699470783s" podCreationTimestamp="2025-02-13 15:27:55 +0000 UTC" firstStartedPulling="2025-02-13 15:28:11.061665049 +0000 UTC m=+28.785163088" lastFinishedPulling="2025-02-13 15:28:14.881000456 +0000 UTC m=+32.604498535" observedRunningTime="2025-02-13 15:28:15.699199289 +0000 UTC m=+33.422697328" watchObservedRunningTime="2025-02-13 15:28:15.699470783 +0000 UTC m=+33.422968822" Feb 13 15:28:15.840680 systemd[1]: Started sshd@8-10.0.0.93:22-10.0.0.1:37856.service - OpenSSH per-connection server daemon (10.0.0.1:37856). Feb 13 15:28:15.918108 sshd[5453]: Accepted publickey for core from 10.0.0.1 port 37856 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:28:15.921670 sshd-session[5453]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:28:15.932785 systemd-logind[1424]: New session 9 of user core. Feb 13 15:28:15.944955 systemd[1]: Started session-9.scope - Session 9 of User core. 
Feb 13 15:28:16.330897 containerd[1438]: time="2025-02-13T15:28:16.330824215Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:16.332466 containerd[1438]: time="2025-02-13T15:28:16.331694783Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Feb 13 15:28:16.333488 containerd[1438]: time="2025-02-13T15:28:16.333433520Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:16.340199 containerd[1438]: time="2025-02-13T15:28:16.338617283Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:16.340656 containerd[1438]: time="2025-02-13T15:28:16.340614510Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 1.459393488s" Feb 13 15:28:16.340723 containerd[1438]: time="2025-02-13T15:28:16.340656358Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Feb 13 15:28:16.341519 sshd[5455]: Connection closed by 10.0.0.1 port 37856 Feb 13 15:28:16.341850 sshd-session[5453]: pam_unix(sshd:session): session closed for user core Feb 13 15:28:16.344229 containerd[1438]: time="2025-02-13T15:28:16.344185881Z" level=info msg="CreateContainer within sandbox \"5bb5e1e05bfd11a3bc5310e0ea50612d62ea00979d4920d14c0ff1ea636f3d64\" for 
container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 15:28:16.346992 systemd[1]: sshd@8-10.0.0.93:22-10.0.0.1:37856.service: Deactivated successfully. Feb 13 15:28:16.350055 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 15:28:16.353563 systemd-logind[1424]: Session 9 logged out. Waiting for processes to exit. Feb 13 15:28:16.355607 systemd-logind[1424]: Removed session 9. Feb 13 15:28:16.371490 containerd[1438]: time="2025-02-13T15:28:16.371426113Z" level=info msg="CreateContainer within sandbox \"5bb5e1e05bfd11a3bc5310e0ea50612d62ea00979d4920d14c0ff1ea636f3d64\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"eb90e341f1b4b36e44a8155908b16a072eb4e3ec7c556f9a72e2f00c26b64ba0\"" Feb 13 15:28:16.372169 containerd[1438]: time="2025-02-13T15:28:16.372014587Z" level=info msg="StartContainer for \"eb90e341f1b4b36e44a8155908b16a072eb4e3ec7c556f9a72e2f00c26b64ba0\"" Feb 13 15:28:16.396746 systemd[1]: run-containerd-runc-k8s.io-eb90e341f1b4b36e44a8155908b16a072eb4e3ec7c556f9a72e2f00c26b64ba0-runc.ztMUOc.mount: Deactivated successfully. Feb 13 15:28:16.411357 systemd[1]: Started cri-containerd-eb90e341f1b4b36e44a8155908b16a072eb4e3ec7c556f9a72e2f00c26b64ba0.scope - libcontainer container eb90e341f1b4b36e44a8155908b16a072eb4e3ec7c556f9a72e2f00c26b64ba0. 
Feb 13 15:28:16.445603 containerd[1438]: time="2025-02-13T15:28:16.445547379Z" level=info msg="StartContainer for \"eb90e341f1b4b36e44a8155908b16a072eb4e3ec7c556f9a72e2f00c26b64ba0\" returns successfully"
Feb 13 15:28:16.447500 containerd[1438]: time="2025-02-13T15:28:16.447427783Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\""
Feb 13 15:28:16.699759 kubelet[2521]: I0213 15:28:16.699714 2521 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 13 15:28:17.717201 containerd[1438]: time="2025-02-13T15:28:17.717134529Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:28:17.721132 containerd[1438]: time="2025-02-13T15:28:17.721034142Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368"
Feb 13 15:28:17.722638 containerd[1438]: time="2025-02-13T15:28:17.722552467Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:28:17.724965 containerd[1438]: time="2025-02-13T15:28:17.724918591Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:28:17.725966 containerd[1438]: time="2025-02-13T15:28:17.725675534Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.278197261s"
Feb 13 15:28:17.725966 containerd[1438]: time="2025-02-13T15:28:17.725713741Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\""
Feb 13 15:28:17.728543 containerd[1438]: time="2025-02-13T15:28:17.728489982Z" level=info msg="CreateContainer within sandbox \"5bb5e1e05bfd11a3bc5310e0ea50612d62ea00979d4920d14c0ff1ea636f3d64\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Feb 13 15:28:17.767323 containerd[1438]: time="2025-02-13T15:28:17.767263346Z" level=info msg="CreateContainer within sandbox \"5bb5e1e05bfd11a3bc5310e0ea50612d62ea00979d4920d14c0ff1ea636f3d64\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"4f915995291e9cc908410dab40eceef0a7d65050ebb1f20c392de05665342fb8\""
Feb 13 15:28:17.767922 containerd[1438]: time="2025-02-13T15:28:17.767897345Z" level=info msg="StartContainer for \"4f915995291e9cc908410dab40eceef0a7d65050ebb1f20c392de05665342fb8\""
Feb 13 15:28:17.806322 systemd[1]: Started cri-containerd-4f915995291e9cc908410dab40eceef0a7d65050ebb1f20c392de05665342fb8.scope - libcontainer container 4f915995291e9cc908410dab40eceef0a7d65050ebb1f20c392de05665342fb8.
Feb 13 15:28:17.861120 containerd[1438]: time="2025-02-13T15:28:17.860266376Z" level=info msg="StartContainer for \"4f915995291e9cc908410dab40eceef0a7d65050ebb1f20c392de05665342fb8\" returns successfully"
Feb 13 15:28:18.447922 kubelet[2521]: I0213 15:28:18.447847 2521 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Feb 13 15:28:18.448678 kubelet[2521]: I0213 15:28:18.448643 2521 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Feb 13 15:28:18.759151 kubelet[2521]: I0213 15:28:18.758963 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-hmqx4" podStartSLOduration=17.171102862 podStartE2EDuration="23.75894395s" podCreationTimestamp="2025-02-13 15:27:55 +0000 UTC" firstStartedPulling="2025-02-13 15:28:11.139040632 +0000 UTC m=+28.862538671" lastFinishedPulling="2025-02-13 15:28:17.72688172 +0000 UTC m=+35.450379759" observedRunningTime="2025-02-13 15:28:18.757780457 +0000 UTC m=+36.481278496" watchObservedRunningTime="2025-02-13 15:28:18.75894395 +0000 UTC m=+36.482441989"
Feb 13 15:28:21.355638 systemd[1]: Started sshd@9-10.0.0.93:22-10.0.0.1:37864.service - OpenSSH per-connection server daemon (10.0.0.1:37864).
Feb 13 15:28:21.439176 sshd[5682]: Accepted publickey for core from 10.0.0.1 port 37864 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8
Feb 13 15:28:21.440989 sshd-session[5682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:28:21.448172 systemd-logind[1424]: New session 10 of user core.
Feb 13 15:28:21.465409 systemd[1]: Started session-10.scope - Session 10 of User core.
Feb 13 15:28:21.706353 sshd[5684]: Connection closed by 10.0.0.1 port 37864
Feb 13 15:28:21.706301 sshd-session[5682]: pam_unix(sshd:session): session closed for user core
Feb 13 15:28:21.721867 systemd[1]: sshd@9-10.0.0.93:22-10.0.0.1:37864.service: Deactivated successfully.
Feb 13 15:28:21.724291 systemd[1]: session-10.scope: Deactivated successfully.
Feb 13 15:28:21.731348 systemd-logind[1424]: Session 10 logged out. Waiting for processes to exit.
Feb 13 15:28:21.739491 systemd[1]: Started sshd@10-10.0.0.93:22-10.0.0.1:37874.service - OpenSSH per-connection server daemon (10.0.0.1:37874).
Feb 13 15:28:21.741785 systemd-logind[1424]: Removed session 10.
Feb 13 15:28:21.782721 sshd[5698]: Accepted publickey for core from 10.0.0.1 port 37874 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8
Feb 13 15:28:21.784377 sshd-session[5698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:28:21.793782 systemd-logind[1424]: New session 11 of user core.
Feb 13 15:28:21.802417 systemd[1]: Started session-11.scope - Session 11 of User core.
Feb 13 15:28:21.947015 kubelet[2521]: I0213 15:28:21.946963 2521 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 13 15:28:22.072798 sshd[5708]: Connection closed by 10.0.0.1 port 37874
Feb 13 15:28:22.074782 sshd-session[5698]: pam_unix(sshd:session): session closed for user core
Feb 13 15:28:22.099484 systemd[1]: Started sshd@11-10.0.0.93:22-10.0.0.1:37882.service - OpenSSH per-connection server daemon (10.0.0.1:37882).
Feb 13 15:28:22.100237 systemd[1]: sshd@10-10.0.0.93:22-10.0.0.1:37874.service: Deactivated successfully.
Feb 13 15:28:22.106121 systemd[1]: session-11.scope: Deactivated successfully.
Feb 13 15:28:22.112513 systemd-logind[1424]: Session 11 logged out. Waiting for processes to exit.
Feb 13 15:28:22.121062 systemd-logind[1424]: Removed session 11.
Feb 13 15:28:22.166443 sshd[5732]: Accepted publickey for core from 10.0.0.1 port 37882 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8
Feb 13 15:28:22.169284 sshd-session[5732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:28:22.178247 systemd-logind[1424]: New session 12 of user core.
Feb 13 15:28:22.186295 systemd[1]: Started session-12.scope - Session 12 of User core.
Feb 13 15:28:22.402233 sshd[5738]: Connection closed by 10.0.0.1 port 37882
Feb 13 15:28:22.401645 sshd-session[5732]: pam_unix(sshd:session): session closed for user core
Feb 13 15:28:22.406337 systemd[1]: sshd@11-10.0.0.93:22-10.0.0.1:37882.service: Deactivated successfully.
Feb 13 15:28:22.410446 systemd[1]: session-12.scope: Deactivated successfully.
Feb 13 15:28:22.413117 systemd-logind[1424]: Session 12 logged out. Waiting for processes to exit.
Feb 13 15:28:22.414888 systemd-logind[1424]: Removed session 12.
Feb 13 15:28:23.002441 kubelet[2521]: I0213 15:28:23.001690 2521 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 13 15:28:23.002441 kubelet[2521]: E0213 15:28:23.002115 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:28:23.605099 kernel: bpftool[5792]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Feb 13 15:28:23.730281 kubelet[2521]: E0213 15:28:23.730204 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:28:23.774645 systemd-networkd[1373]: vxlan.calico: Link UP
Feb 13 15:28:23.774654 systemd-networkd[1373]: vxlan.calico: Gained carrier
Feb 13 15:28:25.043366 systemd-networkd[1373]: vxlan.calico: Gained IPv6LL
Feb 13 15:28:25.491089 kubelet[2521]: I0213 15:28:25.491047 2521 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 13 15:28:25.491508 kubelet[2521]: E0213 15:28:25.491477 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:28:27.429579 systemd[1]: Started sshd@12-10.0.0.93:22-10.0.0.1:47914.service - OpenSSH per-connection server daemon (10.0.0.1:47914).
Feb 13 15:28:27.499304 sshd[5958]: Accepted publickey for core from 10.0.0.1 port 47914 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8
Feb 13 15:28:27.499992 sshd-session[5958]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:28:27.505411 systemd-logind[1424]: New session 13 of user core.
Feb 13 15:28:27.510494 systemd[1]: Started session-13.scope - Session 13 of User core.
Feb 13 15:28:27.790167 sshd[5960]: Connection closed by 10.0.0.1 port 47914
Feb 13 15:28:27.790218 sshd-session[5958]: pam_unix(sshd:session): session closed for user core
Feb 13 15:28:27.795374 systemd[1]: sshd@12-10.0.0.93:22-10.0.0.1:47914.service: Deactivated successfully.
Feb 13 15:28:27.797785 systemd[1]: session-13.scope: Deactivated successfully.
Feb 13 15:28:27.798698 systemd-logind[1424]: Session 13 logged out. Waiting for processes to exit.
Feb 13 15:28:27.799654 systemd-logind[1424]: Removed session 13.
Feb 13 15:28:30.155155 kubelet[2521]: I0213 15:28:30.155106 2521 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 13 15:28:32.804818 systemd[1]: Started sshd@13-10.0.0.93:22-10.0.0.1:56624.service - OpenSSH per-connection server daemon (10.0.0.1:56624).
Feb 13 15:28:32.845483 sshd[5983]: Accepted publickey for core from 10.0.0.1 port 56624 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8
Feb 13 15:28:32.846929 sshd-session[5983]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:28:32.850557 systemd-logind[1424]: New session 14 of user core.
Feb 13 15:28:32.861268 systemd[1]: Started session-14.scope - Session 14 of User core.
Feb 13 15:28:33.001685 sshd[5985]: Connection closed by 10.0.0.1 port 56624
Feb 13 15:28:33.002263 sshd-session[5983]: pam_unix(sshd:session): session closed for user core
Feb 13 15:28:33.014884 systemd[1]: sshd@13-10.0.0.93:22-10.0.0.1:56624.service: Deactivated successfully.
Feb 13 15:28:33.016468 systemd[1]: session-14.scope: Deactivated successfully.
Feb 13 15:28:33.018419 systemd-logind[1424]: Session 14 logged out. Waiting for processes to exit.
Feb 13 15:28:33.019880 systemd-logind[1424]: Removed session 14.
Feb 13 15:28:33.027364 systemd[1]: Started sshd@14-10.0.0.93:22-10.0.0.1:56632.service - OpenSSH per-connection server daemon (10.0.0.1:56632).
Feb 13 15:28:33.064816 sshd[5998]: Accepted publickey for core from 10.0.0.1 port 56632 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8
Feb 13 15:28:33.066465 sshd-session[5998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:28:33.070246 systemd-logind[1424]: New session 15 of user core.
Feb 13 15:28:33.092302 systemd[1]: Started session-15.scope - Session 15 of User core.
Feb 13 15:28:33.297249 sshd[6000]: Connection closed by 10.0.0.1 port 56632
Feb 13 15:28:33.297857 sshd-session[5998]: pam_unix(sshd:session): session closed for user core
Feb 13 15:28:33.306935 systemd[1]: sshd@14-10.0.0.93:22-10.0.0.1:56632.service: Deactivated successfully.
Feb 13 15:28:33.309671 systemd[1]: session-15.scope: Deactivated successfully.
Feb 13 15:28:33.311666 systemd-logind[1424]: Session 15 logged out. Waiting for processes to exit.
Feb 13 15:28:33.322372 systemd[1]: Started sshd@15-10.0.0.93:22-10.0.0.1:56648.service - OpenSSH per-connection server daemon (10.0.0.1:56648).
Feb 13 15:28:33.323878 systemd-logind[1424]: Removed session 15.
Feb 13 15:28:33.368154 sshd[6010]: Accepted publickey for core from 10.0.0.1 port 56648 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8
Feb 13 15:28:33.369695 sshd-session[6010]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:28:33.375538 systemd-logind[1424]: New session 16 of user core.
Feb 13 15:28:33.386309 systemd[1]: Started session-16.scope - Session 16 of User core.
Feb 13 15:28:35.039469 sshd[6012]: Connection closed by 10.0.0.1 port 56648
Feb 13 15:28:35.038706 sshd-session[6010]: pam_unix(sshd:session): session closed for user core
Feb 13 15:28:35.055447 systemd[1]: sshd@15-10.0.0.93:22-10.0.0.1:56648.service: Deactivated successfully.
Feb 13 15:28:35.060669 systemd[1]: session-16.scope: Deactivated successfully.
Feb 13 15:28:35.064049 systemd-logind[1424]: Session 16 logged out. Waiting for processes to exit.
Feb 13 15:28:35.075851 systemd[1]: Started sshd@16-10.0.0.93:22-10.0.0.1:56650.service - OpenSSH per-connection server daemon (10.0.0.1:56650).
Feb 13 15:28:35.078258 systemd-logind[1424]: Removed session 16.
Feb 13 15:28:35.126418 sshd[6038]: Accepted publickey for core from 10.0.0.1 port 56650 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8
Feb 13 15:28:35.127998 sshd-session[6038]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:28:35.132050 systemd-logind[1424]: New session 17 of user core.
Feb 13 15:28:35.138265 systemd[1]: Started session-17.scope - Session 17 of User core.
Feb 13 15:28:35.536095 sshd[6040]: Connection closed by 10.0.0.1 port 56650
Feb 13 15:28:35.536631 sshd-session[6038]: pam_unix(sshd:session): session closed for user core
Feb 13 15:28:35.545887 systemd[1]: sshd@16-10.0.0.93:22-10.0.0.1:56650.service: Deactivated successfully.
Feb 13 15:28:35.552502 systemd[1]: session-17.scope: Deactivated successfully.
Feb 13 15:28:35.559581 systemd-logind[1424]: Session 17 logged out. Waiting for processes to exit.
Feb 13 15:28:35.569480 systemd[1]: Started sshd@17-10.0.0.93:22-10.0.0.1:56658.service - OpenSSH per-connection server daemon (10.0.0.1:56658).
Feb 13 15:28:35.575575 systemd-logind[1424]: Removed session 17.
Feb 13 15:28:35.618028 sshd[6051]: Accepted publickey for core from 10.0.0.1 port 56658 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8
Feb 13 15:28:35.619403 sshd-session[6051]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:28:35.623226 systemd-logind[1424]: New session 18 of user core.
Feb 13 15:28:35.631255 systemd[1]: Started session-18.scope - Session 18 of User core.
Feb 13 15:28:35.766093 sshd[6053]: Connection closed by 10.0.0.1 port 56658
Feb 13 15:28:35.766478 sshd-session[6051]: pam_unix(sshd:session): session closed for user core
Feb 13 15:28:35.769856 systemd[1]: sshd@17-10.0.0.93:22-10.0.0.1:56658.service: Deactivated successfully.
Feb 13 15:28:35.772029 systemd[1]: session-18.scope: Deactivated successfully.
Feb 13 15:28:35.772767 systemd-logind[1424]: Session 18 logged out. Waiting for processes to exit.
Feb 13 15:28:35.773762 systemd-logind[1424]: Removed session 18.
Feb 13 15:28:39.293631 systemd[1]: run-containerd-runc-k8s.io-188d258539d302efd58ac1dc3c456416864f68f3b825bb15d088c88e687fd083-runc.s6VDlF.mount: Deactivated successfully.
Feb 13 15:28:40.777824 systemd[1]: Started sshd@18-10.0.0.93:22-10.0.0.1:56664.service - OpenSSH per-connection server daemon (10.0.0.1:56664).
Feb 13 15:28:40.820384 sshd[6089]: Accepted publickey for core from 10.0.0.1 port 56664 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8
Feb 13 15:28:40.821647 sshd-session[6089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:28:40.825248 systemd-logind[1424]: New session 19 of user core.
Feb 13 15:28:40.833206 systemd[1]: Started session-19.scope - Session 19 of User core.
Feb 13 15:28:40.952385 sshd[6091]: Connection closed by 10.0.0.1 port 56664
Feb 13 15:28:40.953118 sshd-session[6089]: pam_unix(sshd:session): session closed for user core
Feb 13 15:28:40.956942 systemd[1]: sshd@18-10.0.0.93:22-10.0.0.1:56664.service: Deactivated successfully.
Feb 13 15:28:40.958647 systemd[1]: session-19.scope: Deactivated successfully.
Feb 13 15:28:40.960294 systemd-logind[1424]: Session 19 logged out. Waiting for processes to exit.
Feb 13 15:28:40.961341 systemd-logind[1424]: Removed session 19.
Feb 13 15:28:42.343062 containerd[1438]: time="2025-02-13T15:28:42.342581105Z" level=info msg="StopPodSandbox for \"0bbc8283fb1680fd35c9202a2ab37d2c9e7100b2f1110b5295aeee2d24e7261e\""
Feb 13 15:28:42.343062 containerd[1438]: time="2025-02-13T15:28:42.342696399Z" level=info msg="TearDown network for sandbox \"0bbc8283fb1680fd35c9202a2ab37d2c9e7100b2f1110b5295aeee2d24e7261e\" successfully"
Feb 13 15:28:42.343062 containerd[1438]: time="2025-02-13T15:28:42.342708840Z" level=info msg="StopPodSandbox for \"0bbc8283fb1680fd35c9202a2ab37d2c9e7100b2f1110b5295aeee2d24e7261e\" returns successfully"
Feb 13 15:28:42.352960 containerd[1438]: time="2025-02-13T15:28:42.352905818Z" level=info msg="RemovePodSandbox for \"0bbc8283fb1680fd35c9202a2ab37d2c9e7100b2f1110b5295aeee2d24e7261e\""
Feb 13 15:28:42.352960 containerd[1438]: time="2025-02-13T15:28:42.352956424Z" level=info msg="Forcibly stopping sandbox \"0bbc8283fb1680fd35c9202a2ab37d2c9e7100b2f1110b5295aeee2d24e7261e\""
Feb 13 15:28:42.353086 containerd[1438]: time="2025-02-13T15:28:42.353024272Z" level=info msg="TearDown network for sandbox \"0bbc8283fb1680fd35c9202a2ab37d2c9e7100b2f1110b5295aeee2d24e7261e\" successfully"
Feb 13 15:28:42.356110 containerd[1438]: time="2025-02-13T15:28:42.356059154Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0bbc8283fb1680fd35c9202a2ab37d2c9e7100b2f1110b5295aeee2d24e7261e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:28:42.356164 containerd[1438]: time="2025-02-13T15:28:42.356137123Z" level=info msg="RemovePodSandbox \"0bbc8283fb1680fd35c9202a2ab37d2c9e7100b2f1110b5295aeee2d24e7261e\" returns successfully"
Feb 13 15:28:42.356526 containerd[1438]: time="2025-02-13T15:28:42.356500007Z" level=info msg="StopPodSandbox for \"051fcd08b73edccb4ccb159d0ff5c057bc8f7b121c26a4566843e5c34094c720\""
Feb 13 15:28:42.356602 containerd[1438]: time="2025-02-13T15:28:42.356586257Z" level=info msg="TearDown network for sandbox \"051fcd08b73edccb4ccb159d0ff5c057bc8f7b121c26a4566843e5c34094c720\" successfully"
Feb 13 15:28:42.356637 containerd[1438]: time="2025-02-13T15:28:42.356600899Z" level=info msg="StopPodSandbox for \"051fcd08b73edccb4ccb159d0ff5c057bc8f7b121c26a4566843e5c34094c720\" returns successfully"
Feb 13 15:28:42.356924 containerd[1438]: time="2025-02-13T15:28:42.356903015Z" level=info msg="RemovePodSandbox for \"051fcd08b73edccb4ccb159d0ff5c057bc8f7b121c26a4566843e5c34094c720\""
Feb 13 15:28:42.356959 containerd[1438]: time="2025-02-13T15:28:42.356930218Z" level=info msg="Forcibly stopping sandbox \"051fcd08b73edccb4ccb159d0ff5c057bc8f7b121c26a4566843e5c34094c720\""
Feb 13 15:28:42.357008 containerd[1438]: time="2025-02-13T15:28:42.356993226Z" level=info msg="TearDown network for sandbox \"051fcd08b73edccb4ccb159d0ff5c057bc8f7b121c26a4566843e5c34094c720\" successfully"
Feb 13 15:28:42.374602 containerd[1438]: time="2025-02-13T15:28:42.374573524Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"051fcd08b73edccb4ccb159d0ff5c057bc8f7b121c26a4566843e5c34094c720\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:28:42.374683 containerd[1438]: time="2025-02-13T15:28:42.374620450Z" level=info msg="RemovePodSandbox \"051fcd08b73edccb4ccb159d0ff5c057bc8f7b121c26a4566843e5c34094c720\" returns successfully"
Feb 13 15:28:42.375119 containerd[1438]: time="2025-02-13T15:28:42.374959490Z" level=info msg="StopPodSandbox for \"7ebcedfe59cbd698c1faf3d81c5cddc2808250aba7f1b5b95ba461e61be017a4\""
Feb 13 15:28:42.375119 containerd[1438]: time="2025-02-13T15:28:42.375039580Z" level=info msg="TearDown network for sandbox \"7ebcedfe59cbd698c1faf3d81c5cddc2808250aba7f1b5b95ba461e61be017a4\" successfully"
Feb 13 15:28:42.375119 containerd[1438]: time="2025-02-13T15:28:42.375049021Z" level=info msg="StopPodSandbox for \"7ebcedfe59cbd698c1faf3d81c5cddc2808250aba7f1b5b95ba461e61be017a4\" returns successfully"
Feb 13 15:28:42.375605 containerd[1438]: time="2025-02-13T15:28:42.375580444Z" level=info msg="RemovePodSandbox for \"7ebcedfe59cbd698c1faf3d81c5cddc2808250aba7f1b5b95ba461e61be017a4\""
Feb 13 15:28:42.375653 containerd[1438]: time="2025-02-13T15:28:42.375609328Z" level=info msg="Forcibly stopping sandbox \"7ebcedfe59cbd698c1faf3d81c5cddc2808250aba7f1b5b95ba461e61be017a4\""
Feb 13 15:28:42.375698 containerd[1438]: time="2025-02-13T15:28:42.375682416Z" level=info msg="TearDown network for sandbox \"7ebcedfe59cbd698c1faf3d81c5cddc2808250aba7f1b5b95ba461e61be017a4\" successfully"
Feb 13 15:28:42.377895 containerd[1438]: time="2025-02-13T15:28:42.377867477Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7ebcedfe59cbd698c1faf3d81c5cddc2808250aba7f1b5b95ba461e61be017a4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:28:42.377953 containerd[1438]: time="2025-02-13T15:28:42.377918003Z" level=info msg="RemovePodSandbox \"7ebcedfe59cbd698c1faf3d81c5cddc2808250aba7f1b5b95ba461e61be017a4\" returns successfully"
Feb 13 15:28:42.378499 containerd[1438]: time="2025-02-13T15:28:42.378221840Z" level=info msg="StopPodSandbox for \"9ea262909c8919e05cb7fe199a283a585036b3c7d6a8a64160d8cb41a91b3d3d\""
Feb 13 15:28:42.378499 containerd[1438]: time="2025-02-13T15:28:42.378302009Z" level=info msg="TearDown network for sandbox \"9ea262909c8919e05cb7fe199a283a585036b3c7d6a8a64160d8cb41a91b3d3d\" successfully"
Feb 13 15:28:42.378499 containerd[1438]: time="2025-02-13T15:28:42.378311850Z" level=info msg="StopPodSandbox for \"9ea262909c8919e05cb7fe199a283a585036b3c7d6a8a64160d8cb41a91b3d3d\" returns successfully"
Feb 13 15:28:42.378649 containerd[1438]: time="2025-02-13T15:28:42.378620087Z" level=info msg="RemovePodSandbox for \"9ea262909c8919e05cb7fe199a283a585036b3c7d6a8a64160d8cb41a91b3d3d\""
Feb 13 15:28:42.378679 containerd[1438]: time="2025-02-13T15:28:42.378650891Z" level=info msg="Forcibly stopping sandbox \"9ea262909c8919e05cb7fe199a283a585036b3c7d6a8a64160d8cb41a91b3d3d\""
Feb 13 15:28:42.378730 containerd[1438]: time="2025-02-13T15:28:42.378711978Z" level=info msg="TearDown network for sandbox \"9ea262909c8919e05cb7fe199a283a585036b3c7d6a8a64160d8cb41a91b3d3d\" successfully"
Feb 13 15:28:42.381055 containerd[1438]: time="2025-02-13T15:28:42.381025774Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9ea262909c8919e05cb7fe199a283a585036b3c7d6a8a64160d8cb41a91b3d3d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:28:42.381138 containerd[1438]: time="2025-02-13T15:28:42.381087022Z" level=info msg="RemovePodSandbox \"9ea262909c8919e05cb7fe199a283a585036b3c7d6a8a64160d8cb41a91b3d3d\" returns successfully"
Feb 13 15:28:42.381455 containerd[1438]: time="2025-02-13T15:28:42.381358614Z" level=info msg="StopPodSandbox for \"86147124bed1f9993203b2167e20e2e93bb5e10c9702e17942935a551ce02d6c\""
Feb 13 15:28:42.381497 containerd[1438]: time="2025-02-13T15:28:42.381471187Z" level=info msg="TearDown network for sandbox \"86147124bed1f9993203b2167e20e2e93bb5e10c9702e17942935a551ce02d6c\" successfully"
Feb 13 15:28:42.381497 containerd[1438]: time="2025-02-13T15:28:42.381484829Z" level=info msg="StopPodSandbox for \"86147124bed1f9993203b2167e20e2e93bb5e10c9702e17942935a551ce02d6c\" returns successfully"
Feb 13 15:28:42.381793 containerd[1438]: time="2025-02-13T15:28:42.381765103Z" level=info msg="RemovePodSandbox for \"86147124bed1f9993203b2167e20e2e93bb5e10c9702e17942935a551ce02d6c\""
Feb 13 15:28:42.382918 containerd[1438]: time="2025-02-13T15:28:42.381858474Z" level=info msg="Forcibly stopping sandbox \"86147124bed1f9993203b2167e20e2e93bb5e10c9702e17942935a551ce02d6c\""
Feb 13 15:28:42.382918 containerd[1438]: time="2025-02-13T15:28:42.381923081Z" level=info msg="TearDown network for sandbox \"86147124bed1f9993203b2167e20e2e93bb5e10c9702e17942935a551ce02d6c\" successfully"
Feb 13 15:28:42.384214 containerd[1438]: time="2025-02-13T15:28:42.384088780Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"86147124bed1f9993203b2167e20e2e93bb5e10c9702e17942935a551ce02d6c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:28:42.384214 containerd[1438]: time="2025-02-13T15:28:42.384138106Z" level=info msg="RemovePodSandbox \"86147124bed1f9993203b2167e20e2e93bb5e10c9702e17942935a551ce02d6c\" returns successfully"
Feb 13 15:28:42.384484 containerd[1438]: time="2025-02-13T15:28:42.384460824Z" level=info msg="StopPodSandbox for \"ff8fd863173aed4a1fc618a73607b9ac08318473fa88fefca838b1ac4c6441db\""
Feb 13 15:28:42.384553 containerd[1438]: time="2025-02-13T15:28:42.384544274Z" level=info msg="TearDown network for sandbox \"ff8fd863173aed4a1fc618a73607b9ac08318473fa88fefca838b1ac4c6441db\" successfully"
Feb 13 15:28:42.384576 containerd[1438]: time="2025-02-13T15:28:42.384554316Z" level=info msg="StopPodSandbox for \"ff8fd863173aed4a1fc618a73607b9ac08318473fa88fefca838b1ac4c6441db\" returns successfully"
Feb 13 15:28:42.384836 containerd[1438]: time="2025-02-13T15:28:42.384809666Z" level=info msg="RemovePodSandbox for \"ff8fd863173aed4a1fc618a73607b9ac08318473fa88fefca838b1ac4c6441db\""
Feb 13 15:28:42.384886 containerd[1438]: time="2025-02-13T15:28:42.384837389Z" level=info msg="Forcibly stopping sandbox \"ff8fd863173aed4a1fc618a73607b9ac08318473fa88fefca838b1ac4c6441db\""
Feb 13 15:28:42.384911 containerd[1438]: time="2025-02-13T15:28:42.384895756Z" level=info msg="TearDown network for sandbox \"ff8fd863173aed4a1fc618a73607b9ac08318473fa88fefca838b1ac4c6441db\" successfully"
Feb 13 15:28:42.387142 containerd[1438]: time="2025-02-13T15:28:42.387104460Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ff8fd863173aed4a1fc618a73607b9ac08318473fa88fefca838b1ac4c6441db\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:28:42.387188 containerd[1438]: time="2025-02-13T15:28:42.387160587Z" level=info msg="RemovePodSandbox \"ff8fd863173aed4a1fc618a73607b9ac08318473fa88fefca838b1ac4c6441db\" returns successfully"
Feb 13 15:28:42.387805 containerd[1438]: time="2025-02-13T15:28:42.387764819Z" level=info msg="StopPodSandbox for \"93372508d2f96d2502f51ec2872c102fd38e529d8ae7a472f2ade1091b6b7b5e\""
Feb 13 15:28:42.387884 containerd[1438]: time="2025-02-13T15:28:42.387867351Z" level=info msg="TearDown network for sandbox \"93372508d2f96d2502f51ec2872c102fd38e529d8ae7a472f2ade1091b6b7b5e\" successfully"
Feb 13 15:28:42.387968 containerd[1438]: time="2025-02-13T15:28:42.387882473Z" level=info msg="StopPodSandbox for \"93372508d2f96d2502f51ec2872c102fd38e529d8ae7a472f2ade1091b6b7b5e\" returns successfully"
Feb 13 15:28:42.388284 containerd[1438]: time="2025-02-13T15:28:42.388251117Z" level=info msg="RemovePodSandbox for \"93372508d2f96d2502f51ec2872c102fd38e529d8ae7a472f2ade1091b6b7b5e\""
Feb 13 15:28:42.388284 containerd[1438]: time="2025-02-13T15:28:42.388278200Z" level=info msg="Forcibly stopping sandbox \"93372508d2f96d2502f51ec2872c102fd38e529d8ae7a472f2ade1091b6b7b5e\""
Feb 13 15:28:42.388342 containerd[1438]: time="2025-02-13T15:28:42.388335847Z" level=info msg="TearDown network for sandbox \"93372508d2f96d2502f51ec2872c102fd38e529d8ae7a472f2ade1091b6b7b5e\" successfully"
Feb 13 15:28:42.390426 containerd[1438]: time="2025-02-13T15:28:42.390397653Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"93372508d2f96d2502f51ec2872c102fd38e529d8ae7a472f2ade1091b6b7b5e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:28:42.390472 containerd[1438]: time="2025-02-13T15:28:42.390454380Z" level=info msg="RemovePodSandbox \"93372508d2f96d2502f51ec2872c102fd38e529d8ae7a472f2ade1091b6b7b5e\" returns successfully"
Feb 13 15:28:42.390764 containerd[1438]: time="2025-02-13T15:28:42.390725132Z" level=info msg="StopPodSandbox for \"c874b9a1f46bbd68aaf2a7ea1fe0a5aa6e80af9b640025e9e2baccd227229769\""
Feb 13 15:28:42.390826 containerd[1438]: time="2025-02-13T15:28:42.390806582Z" level=info msg="TearDown network for sandbox \"c874b9a1f46bbd68aaf2a7ea1fe0a5aa6e80af9b640025e9e2baccd227229769\" successfully"
Feb 13 15:28:42.390826 containerd[1438]: time="2025-02-13T15:28:42.390820423Z" level=info msg="StopPodSandbox for \"c874b9a1f46bbd68aaf2a7ea1fe0a5aa6e80af9b640025e9e2baccd227229769\" returns successfully"
Feb 13 15:28:42.391135 containerd[1438]: time="2025-02-13T15:28:42.391113138Z" level=info msg="RemovePodSandbox for \"c874b9a1f46bbd68aaf2a7ea1fe0a5aa6e80af9b640025e9e2baccd227229769\""
Feb 13 15:28:42.391194 containerd[1438]: time="2025-02-13T15:28:42.391138181Z" level=info msg="Forcibly stopping sandbox \"c874b9a1f46bbd68aaf2a7ea1fe0a5aa6e80af9b640025e9e2baccd227229769\""
Feb 13 15:28:42.391219 containerd[1438]: time="2025-02-13T15:28:42.391209270Z" level=info msg="TearDown network for sandbox \"c874b9a1f46bbd68aaf2a7ea1fe0a5aa6e80af9b640025e9e2baccd227229769\" successfully"
Feb 13 15:28:42.393591 containerd[1438]: time="2025-02-13T15:28:42.393550949Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c874b9a1f46bbd68aaf2a7ea1fe0a5aa6e80af9b640025e9e2baccd227229769\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:28:42.393644 containerd[1438]: time="2025-02-13T15:28:42.393604676Z" level=info msg="RemovePodSandbox \"c874b9a1f46bbd68aaf2a7ea1fe0a5aa6e80af9b640025e9e2baccd227229769\" returns successfully"
Feb 13 15:28:42.393929 containerd[1438]: time="2025-02-13T15:28:42.393887270Z" level=info msg="StopPodSandbox for \"35e63fa6e367659d0929bdff839d4d1b6c5f686fc9f977877b19d16edee3e4dd\""
Feb 13 15:28:42.393983 containerd[1438]: time="2025-02-13T15:28:42.393968679Z" level=info msg="TearDown network for sandbox \"35e63fa6e367659d0929bdff839d4d1b6c5f686fc9f977877b19d16edee3e4dd\" successfully"
Feb 13 15:28:42.394034 containerd[1438]: time="2025-02-13T15:28:42.393981721Z" level=info msg="StopPodSandbox for \"35e63fa6e367659d0929bdff839d4d1b6c5f686fc9f977877b19d16edee3e4dd\" returns successfully"
Feb 13 15:28:42.395091 containerd[1438]: time="2025-02-13T15:28:42.394332683Z" level=info msg="RemovePodSandbox for \"35e63fa6e367659d0929bdff839d4d1b6c5f686fc9f977877b19d16edee3e4dd\""
Feb 13 15:28:42.395091 containerd[1438]: time="2025-02-13T15:28:42.394359526Z" level=info msg="Forcibly stopping sandbox \"35e63fa6e367659d0929bdff839d4d1b6c5f686fc9f977877b19d16edee3e4dd\""
Feb 13 15:28:42.395091 containerd[1438]: time="2025-02-13T15:28:42.394419813Z" level=info msg="TearDown network for sandbox \"35e63fa6e367659d0929bdff839d4d1b6c5f686fc9f977877b19d16edee3e4dd\" successfully"
Feb 13 15:28:42.396770 containerd[1438]: time="2025-02-13T15:28:42.396740010Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"35e63fa6e367659d0929bdff839d4d1b6c5f686fc9f977877b19d16edee3e4dd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:28:42.396812 containerd[1438]: time="2025-02-13T15:28:42.396786456Z" level=info msg="RemovePodSandbox \"35e63fa6e367659d0929bdff839d4d1b6c5f686fc9f977877b19d16edee3e4dd\" returns successfully"
Feb 13 15:28:42.397106 containerd[1438]: time="2025-02-13T15:28:42.397059768Z" level=info msg="StopPodSandbox for \"41b2565a38bfef8f3a666d78b84f6bb5e7b0dd9a901cae77a54f239a6a5c47de\""
Feb 13 15:28:42.397195 containerd[1438]: time="2025-02-13T15:28:42.397177862Z" level=info msg="TearDown network for sandbox \"41b2565a38bfef8f3a666d78b84f6bb5e7b0dd9a901cae77a54f239a6a5c47de\" successfully"
Feb 13 15:28:42.397236 containerd[1438]: time="2025-02-13T15:28:42.397195144Z" level=info msg="StopPodSandbox for \"41b2565a38bfef8f3a666d78b84f6bb5e7b0dd9a901cae77a54f239a6a5c47de\" returns successfully"
Feb 13 15:28:42.397456 containerd[1438]: time="2025-02-13T15:28:42.397434413Z" level=info msg="RemovePodSandbox for \"41b2565a38bfef8f3a666d78b84f6bb5e7b0dd9a901cae77a54f239a6a5c47de\""
Feb 13 15:28:42.397482 containerd[1438]: time="2025-02-13T15:28:42.397472778Z" level=info msg="Forcibly stopping sandbox \"41b2565a38bfef8f3a666d78b84f6bb5e7b0dd9a901cae77a54f239a6a5c47de\""
Feb 13 15:28:42.397548 containerd[1438]: time="2025-02-13T15:28:42.397535585Z" level=info msg="TearDown network for sandbox \"41b2565a38bfef8f3a666d78b84f6bb5e7b0dd9a901cae77a54f239a6a5c47de\" successfully"
Feb 13 15:28:42.399792 containerd[1438]: time="2025-02-13T15:28:42.399733968Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"41b2565a38bfef8f3a666d78b84f6bb5e7b0dd9a901cae77a54f239a6a5c47de\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:28:42.399792 containerd[1438]: time="2025-02-13T15:28:42.399781253Z" level=info msg="RemovePodSandbox \"41b2565a38bfef8f3a666d78b84f6bb5e7b0dd9a901cae77a54f239a6a5c47de\" returns successfully" Feb 13 15:28:42.400064 containerd[1438]: time="2025-02-13T15:28:42.400028603Z" level=info msg="StopPodSandbox for \"1e575dbffd3d4c245a8c3a4e13c078d1720170ccc9f1a8dfbda6639dd922a63d\"" Feb 13 15:28:42.400193 containerd[1438]: time="2025-02-13T15:28:42.400121894Z" level=info msg="TearDown network for sandbox \"1e575dbffd3d4c245a8c3a4e13c078d1720170ccc9f1a8dfbda6639dd922a63d\" successfully" Feb 13 15:28:42.400193 containerd[1438]: time="2025-02-13T15:28:42.400136176Z" level=info msg="StopPodSandbox for \"1e575dbffd3d4c245a8c3a4e13c078d1720170ccc9f1a8dfbda6639dd922a63d\" returns successfully" Feb 13 15:28:42.400408 containerd[1438]: time="2025-02-13T15:28:42.400336079Z" level=info msg="RemovePodSandbox for \"1e575dbffd3d4c245a8c3a4e13c078d1720170ccc9f1a8dfbda6639dd922a63d\"" Feb 13 15:28:42.400408 containerd[1438]: time="2025-02-13T15:28:42.400388606Z" level=info msg="Forcibly stopping sandbox \"1e575dbffd3d4c245a8c3a4e13c078d1720170ccc9f1a8dfbda6639dd922a63d\"" Feb 13 15:28:42.400473 containerd[1438]: time="2025-02-13T15:28:42.400458094Z" level=info msg="TearDown network for sandbox \"1e575dbffd3d4c245a8c3a4e13c078d1720170ccc9f1a8dfbda6639dd922a63d\" successfully" Feb 13 15:28:42.402568 containerd[1438]: time="2025-02-13T15:28:42.402532182Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1e575dbffd3d4c245a8c3a4e13c078d1720170ccc9f1a8dfbda6639dd922a63d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:28:42.402640 containerd[1438]: time="2025-02-13T15:28:42.402583268Z" level=info msg="RemovePodSandbox \"1e575dbffd3d4c245a8c3a4e13c078d1720170ccc9f1a8dfbda6639dd922a63d\" returns successfully" Feb 13 15:28:42.403083 containerd[1438]: time="2025-02-13T15:28:42.402961553Z" level=info msg="StopPodSandbox for \"a6226e92350381ab768784fb94d579846ab05715195002b23c3ad52b9bb59176\"" Feb 13 15:28:42.403440 containerd[1438]: time="2025-02-13T15:28:42.403328797Z" level=info msg="TearDown network for sandbox \"a6226e92350381ab768784fb94d579846ab05715195002b23c3ad52b9bb59176\" successfully" Feb 13 15:28:42.403440 containerd[1438]: time="2025-02-13T15:28:42.403350519Z" level=info msg="StopPodSandbox for \"a6226e92350381ab768784fb94d579846ab05715195002b23c3ad52b9bb59176\" returns successfully" Feb 13 15:28:42.404100 containerd[1438]: time="2025-02-13T15:28:42.404062004Z" level=info msg="RemovePodSandbox for \"a6226e92350381ab768784fb94d579846ab05715195002b23c3ad52b9bb59176\"" Feb 13 15:28:42.404149 containerd[1438]: time="2025-02-13T15:28:42.404106689Z" level=info msg="Forcibly stopping sandbox \"a6226e92350381ab768784fb94d579846ab05715195002b23c3ad52b9bb59176\"" Feb 13 15:28:42.404212 containerd[1438]: time="2025-02-13T15:28:42.404180938Z" level=info msg="TearDown network for sandbox \"a6226e92350381ab768784fb94d579846ab05715195002b23c3ad52b9bb59176\" successfully" Feb 13 15:28:42.414814 containerd[1438]: time="2025-02-13T15:28:42.414764562Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a6226e92350381ab768784fb94d579846ab05715195002b23c3ad52b9bb59176\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:28:42.414902 containerd[1438]: time="2025-02-13T15:28:42.414832930Z" level=info msg="RemovePodSandbox \"a6226e92350381ab768784fb94d579846ab05715195002b23c3ad52b9bb59176\" returns successfully" Feb 13 15:28:42.415280 containerd[1438]: time="2025-02-13T15:28:42.415233818Z" level=info msg="StopPodSandbox for \"5bcbbbaf93ca2f8c82394fbd2b1b82ed092a9669f081bf4b7be356296412d245\"" Feb 13 15:28:42.415364 containerd[1438]: time="2025-02-13T15:28:42.415337750Z" level=info msg="TearDown network for sandbox \"5bcbbbaf93ca2f8c82394fbd2b1b82ed092a9669f081bf4b7be356296412d245\" successfully" Feb 13 15:28:42.415364 containerd[1438]: time="2025-02-13T15:28:42.415354472Z" level=info msg="StopPodSandbox for \"5bcbbbaf93ca2f8c82394fbd2b1b82ed092a9669f081bf4b7be356296412d245\" returns successfully" Feb 13 15:28:42.415652 containerd[1438]: time="2025-02-13T15:28:42.415609783Z" level=info msg="RemovePodSandbox for \"5bcbbbaf93ca2f8c82394fbd2b1b82ed092a9669f081bf4b7be356296412d245\"" Feb 13 15:28:42.415707 containerd[1438]: time="2025-02-13T15:28:42.415642506Z" level=info msg="Forcibly stopping sandbox \"5bcbbbaf93ca2f8c82394fbd2b1b82ed092a9669f081bf4b7be356296412d245\"" Feb 13 15:28:42.415730 containerd[1438]: time="2025-02-13T15:28:42.415716195Z" level=info msg="TearDown network for sandbox \"5bcbbbaf93ca2f8c82394fbd2b1b82ed092a9669f081bf4b7be356296412d245\" successfully" Feb 13 15:28:42.418206 containerd[1438]: time="2025-02-13T15:28:42.418169248Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5bcbbbaf93ca2f8c82394fbd2b1b82ed092a9669f081bf4b7be356296412d245\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:28:42.418275 containerd[1438]: time="2025-02-13T15:28:42.418225055Z" level=info msg="RemovePodSandbox \"5bcbbbaf93ca2f8c82394fbd2b1b82ed092a9669f081bf4b7be356296412d245\" returns successfully" Feb 13 15:28:42.418777 containerd[1438]: time="2025-02-13T15:28:42.418488006Z" level=info msg="StopPodSandbox for \"459eef6c4c6e99a8aa588ac0d3d9efe223669135a0edbec7c0d02674b0ed92fa\"" Feb 13 15:28:42.418777 containerd[1438]: time="2025-02-13T15:28:42.418572576Z" level=info msg="TearDown network for sandbox \"459eef6c4c6e99a8aa588ac0d3d9efe223669135a0edbec7c0d02674b0ed92fa\" successfully" Feb 13 15:28:42.418777 containerd[1438]: time="2025-02-13T15:28:42.418582257Z" level=info msg="StopPodSandbox for \"459eef6c4c6e99a8aa588ac0d3d9efe223669135a0edbec7c0d02674b0ed92fa\" returns successfully" Feb 13 15:28:42.418903 containerd[1438]: time="2025-02-13T15:28:42.418842488Z" level=info msg="RemovePodSandbox for \"459eef6c4c6e99a8aa588ac0d3d9efe223669135a0edbec7c0d02674b0ed92fa\"" Feb 13 15:28:42.418903 containerd[1438]: time="2025-02-13T15:28:42.418865011Z" level=info msg="Forcibly stopping sandbox \"459eef6c4c6e99a8aa588ac0d3d9efe223669135a0edbec7c0d02674b0ed92fa\"" Feb 13 15:28:42.418943 containerd[1438]: time="2025-02-13T15:28:42.418923938Z" level=info msg="TearDown network for sandbox \"459eef6c4c6e99a8aa588ac0d3d9efe223669135a0edbec7c0d02674b0ed92fa\" successfully" Feb 13 15:28:42.421594 containerd[1438]: time="2025-02-13T15:28:42.421561253Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"459eef6c4c6e99a8aa588ac0d3d9efe223669135a0edbec7c0d02674b0ed92fa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:28:42.421594 containerd[1438]: time="2025-02-13T15:28:42.421612819Z" level=info msg="RemovePodSandbox \"459eef6c4c6e99a8aa588ac0d3d9efe223669135a0edbec7c0d02674b0ed92fa\" returns successfully" Feb 13 15:28:42.421922 containerd[1438]: time="2025-02-13T15:28:42.421891932Z" level=info msg="StopPodSandbox for \"8f7bee58858555566b8477fe6bb885380e3ec24102d973bc32a89681765c1508\"" Feb 13 15:28:42.422016 containerd[1438]: time="2025-02-13T15:28:42.421976343Z" level=info msg="TearDown network for sandbox \"8f7bee58858555566b8477fe6bb885380e3ec24102d973bc32a89681765c1508\" successfully" Feb 13 15:28:42.422016 containerd[1438]: time="2025-02-13T15:28:42.421990904Z" level=info msg="StopPodSandbox for \"8f7bee58858555566b8477fe6bb885380e3ec24102d973bc32a89681765c1508\" returns successfully" Feb 13 15:28:42.422471 containerd[1438]: time="2025-02-13T15:28:42.422446279Z" level=info msg="RemovePodSandbox for \"8f7bee58858555566b8477fe6bb885380e3ec24102d973bc32a89681765c1508\"" Feb 13 15:28:42.422471 containerd[1438]: time="2025-02-13T15:28:42.422473802Z" level=info msg="Forcibly stopping sandbox \"8f7bee58858555566b8477fe6bb885380e3ec24102d973bc32a89681765c1508\"" Feb 13 15:28:42.422608 containerd[1438]: time="2025-02-13T15:28:42.422532449Z" level=info msg="TearDown network for sandbox \"8f7bee58858555566b8477fe6bb885380e3ec24102d973bc32a89681765c1508\" successfully" Feb 13 15:28:42.425315 containerd[1438]: time="2025-02-13T15:28:42.425271296Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8f7bee58858555566b8477fe6bb885380e3ec24102d973bc32a89681765c1508\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:28:42.425379 containerd[1438]: time="2025-02-13T15:28:42.425329183Z" level=info msg="RemovePodSandbox \"8f7bee58858555566b8477fe6bb885380e3ec24102d973bc32a89681765c1508\" returns successfully" Feb 13 15:28:42.425643 containerd[1438]: time="2025-02-13T15:28:42.425620578Z" level=info msg="StopPodSandbox for \"2253f70f58a228177dae68e6db7242302fb163a8e7df684f97a846e3d6faf81d\"" Feb 13 15:28:42.425718 containerd[1438]: time="2025-02-13T15:28:42.425701907Z" level=info msg="TearDown network for sandbox \"2253f70f58a228177dae68e6db7242302fb163a8e7df684f97a846e3d6faf81d\" successfully" Feb 13 15:28:42.425718 containerd[1438]: time="2025-02-13T15:28:42.425715909Z" level=info msg="StopPodSandbox for \"2253f70f58a228177dae68e6db7242302fb163a8e7df684f97a846e3d6faf81d\" returns successfully" Feb 13 15:28:42.426092 containerd[1438]: time="2025-02-13T15:28:42.425919573Z" level=info msg="RemovePodSandbox for \"2253f70f58a228177dae68e6db7242302fb163a8e7df684f97a846e3d6faf81d\"" Feb 13 15:28:42.426092 containerd[1438]: time="2025-02-13T15:28:42.425949577Z" level=info msg="Forcibly stopping sandbox \"2253f70f58a228177dae68e6db7242302fb163a8e7df684f97a846e3d6faf81d\"" Feb 13 15:28:42.426092 containerd[1438]: time="2025-02-13T15:28:42.426010024Z" level=info msg="TearDown network for sandbox \"2253f70f58a228177dae68e6db7242302fb163a8e7df684f97a846e3d6faf81d\" successfully" Feb 13 15:28:42.428458 containerd[1438]: time="2025-02-13T15:28:42.428343663Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2253f70f58a228177dae68e6db7242302fb163a8e7df684f97a846e3d6faf81d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:28:42.428458 containerd[1438]: time="2025-02-13T15:28:42.428391148Z" level=info msg="RemovePodSandbox \"2253f70f58a228177dae68e6db7242302fb163a8e7df684f97a846e3d6faf81d\" returns successfully" Feb 13 15:28:42.428801 containerd[1438]: time="2025-02-13T15:28:42.428644699Z" level=info msg="StopPodSandbox for \"936cf36bd18b5d423427d367f40192355a28d275d154bbc72428cc0a0db2ce21\"" Feb 13 15:28:42.431973 containerd[1438]: time="2025-02-13T15:28:42.431724226Z" level=info msg="TearDown network for sandbox \"936cf36bd18b5d423427d367f40192355a28d275d154bbc72428cc0a0db2ce21\" successfully" Feb 13 15:28:42.431973 containerd[1438]: time="2025-02-13T15:28:42.431754950Z" level=info msg="StopPodSandbox for \"936cf36bd18b5d423427d367f40192355a28d275d154bbc72428cc0a0db2ce21\" returns successfully" Feb 13 15:28:42.432089 containerd[1438]: time="2025-02-13T15:28:42.432051745Z" level=info msg="RemovePodSandbox for \"936cf36bd18b5d423427d367f40192355a28d275d154bbc72428cc0a0db2ce21\"" Feb 13 15:28:42.432115 containerd[1438]: time="2025-02-13T15:28:42.432090390Z" level=info msg="Forcibly stopping sandbox \"936cf36bd18b5d423427d367f40192355a28d275d154bbc72428cc0a0db2ce21\"" Feb 13 15:28:42.432268 containerd[1438]: time="2025-02-13T15:28:42.432154998Z" level=info msg="TearDown network for sandbox \"936cf36bd18b5d423427d367f40192355a28d275d154bbc72428cc0a0db2ce21\" successfully" Feb 13 15:28:42.435766 containerd[1438]: time="2025-02-13T15:28:42.435717223Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"936cf36bd18b5d423427d367f40192355a28d275d154bbc72428cc0a0db2ce21\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:28:42.435837 containerd[1438]: time="2025-02-13T15:28:42.435783551Z" level=info msg="RemovePodSandbox \"936cf36bd18b5d423427d367f40192355a28d275d154bbc72428cc0a0db2ce21\" returns successfully" Feb 13 15:28:42.436321 containerd[1438]: time="2025-02-13T15:28:42.436115310Z" level=info msg="StopPodSandbox for \"233214fc7bb247cadbdaa1e736dbf4f26b5ab5d9169eb6a9c3f71561a51e36c2\"" Feb 13 15:28:42.436321 containerd[1438]: time="2025-02-13T15:28:42.436206281Z" level=info msg="TearDown network for sandbox \"233214fc7bb247cadbdaa1e736dbf4f26b5ab5d9169eb6a9c3f71561a51e36c2\" successfully" Feb 13 15:28:42.436321 containerd[1438]: time="2025-02-13T15:28:42.436216282Z" level=info msg="StopPodSandbox for \"233214fc7bb247cadbdaa1e736dbf4f26b5ab5d9169eb6a9c3f71561a51e36c2\" returns successfully" Feb 13 15:28:42.437826 containerd[1438]: time="2025-02-13T15:28:42.436705501Z" level=info msg="RemovePodSandbox for \"233214fc7bb247cadbdaa1e736dbf4f26b5ab5d9169eb6a9c3f71561a51e36c2\"" Feb 13 15:28:42.437826 containerd[1438]: time="2025-02-13T15:28:42.436757067Z" level=info msg="Forcibly stopping sandbox \"233214fc7bb247cadbdaa1e736dbf4f26b5ab5d9169eb6a9c3f71561a51e36c2\"" Feb 13 15:28:42.437826 containerd[1438]: time="2025-02-13T15:28:42.436821195Z" level=info msg="TearDown network for sandbox \"233214fc7bb247cadbdaa1e736dbf4f26b5ab5d9169eb6a9c3f71561a51e36c2\" successfully" Feb 13 15:28:42.439627 containerd[1438]: time="2025-02-13T15:28:42.439595446Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"233214fc7bb247cadbdaa1e736dbf4f26b5ab5d9169eb6a9c3f71561a51e36c2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:28:42.439778 containerd[1438]: time="2025-02-13T15:28:42.439761386Z" level=info msg="RemovePodSandbox \"233214fc7bb247cadbdaa1e736dbf4f26b5ab5d9169eb6a9c3f71561a51e36c2\" returns successfully" Feb 13 15:28:42.440183 containerd[1438]: time="2025-02-13T15:28:42.440147952Z" level=info msg="StopPodSandbox for \"7cd8e48fe4cf11325abf60135b81eee57f7a784d7d0bcf06096fcdf01876d51b\"" Feb 13 15:28:42.440635 containerd[1438]: time="2025-02-13T15:28:42.440534598Z" level=info msg="TearDown network for sandbox \"7cd8e48fe4cf11325abf60135b81eee57f7a784d7d0bcf06096fcdf01876d51b\" successfully" Feb 13 15:28:42.440635 containerd[1438]: time="2025-02-13T15:28:42.440552680Z" level=info msg="StopPodSandbox for \"7cd8e48fe4cf11325abf60135b81eee57f7a784d7d0bcf06096fcdf01876d51b\" returns successfully" Feb 13 15:28:42.440996 containerd[1438]: time="2025-02-13T15:28:42.440968850Z" level=info msg="RemovePodSandbox for \"7cd8e48fe4cf11325abf60135b81eee57f7a784d7d0bcf06096fcdf01876d51b\"" Feb 13 15:28:42.440996 containerd[1438]: time="2025-02-13T15:28:42.440996533Z" level=info msg="Forcibly stopping sandbox \"7cd8e48fe4cf11325abf60135b81eee57f7a784d7d0bcf06096fcdf01876d51b\"" Feb 13 15:28:42.441094 containerd[1438]: time="2025-02-13T15:28:42.441057460Z" level=info msg="TearDown network for sandbox \"7cd8e48fe4cf11325abf60135b81eee57f7a784d7d0bcf06096fcdf01876d51b\" successfully" Feb 13 15:28:42.445645 containerd[1438]: time="2025-02-13T15:28:42.445589321Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7cd8e48fe4cf11325abf60135b81eee57f7a784d7d0bcf06096fcdf01876d51b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:28:42.445721 containerd[1438]: time="2025-02-13T15:28:42.445650208Z" level=info msg="RemovePodSandbox \"7cd8e48fe4cf11325abf60135b81eee57f7a784d7d0bcf06096fcdf01876d51b\" returns successfully" Feb 13 15:28:42.446113 containerd[1438]: time="2025-02-13T15:28:42.446002451Z" level=info msg="StopPodSandbox for \"9dd50c1169bff60764c79272474e7ca909a0107af7a189c3cbeed0f7c50a856c\"" Feb 13 15:28:42.446332 containerd[1438]: time="2025-02-13T15:28:42.446116544Z" level=info msg="TearDown network for sandbox \"9dd50c1169bff60764c79272474e7ca909a0107af7a189c3cbeed0f7c50a856c\" successfully" Feb 13 15:28:42.446332 containerd[1438]: time="2025-02-13T15:28:42.446127825Z" level=info msg="StopPodSandbox for \"9dd50c1169bff60764c79272474e7ca909a0107af7a189c3cbeed0f7c50a856c\" returns successfully" Feb 13 15:28:42.447015 containerd[1438]: time="2025-02-13T15:28:42.446433542Z" level=info msg="RemovePodSandbox for \"9dd50c1169bff60764c79272474e7ca909a0107af7a189c3cbeed0f7c50a856c\"" Feb 13 15:28:42.447015 containerd[1438]: time="2025-02-13T15:28:42.447008451Z" level=info msg="Forcibly stopping sandbox \"9dd50c1169bff60764c79272474e7ca909a0107af7a189c3cbeed0f7c50a856c\"" Feb 13 15:28:42.447122 containerd[1438]: time="2025-02-13T15:28:42.447093741Z" level=info msg="TearDown network for sandbox \"9dd50c1169bff60764c79272474e7ca909a0107af7a189c3cbeed0f7c50a856c\" successfully" Feb 13 15:28:42.449656 containerd[1438]: time="2025-02-13T15:28:42.449617882Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9dd50c1169bff60764c79272474e7ca909a0107af7a189c3cbeed0f7c50a856c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:28:42.449707 containerd[1438]: time="2025-02-13T15:28:42.449673529Z" level=info msg="RemovePodSandbox \"9dd50c1169bff60764c79272474e7ca909a0107af7a189c3cbeed0f7c50a856c\" returns successfully" Feb 13 15:28:42.450094 containerd[1438]: time="2025-02-13T15:28:42.450061535Z" level=info msg="StopPodSandbox for \"9d676e713a9f9bdb0ef333af0423e3a978a265863987d14276ec5d59f17734f0\"" Feb 13 15:28:42.450321 containerd[1438]: time="2025-02-13T15:28:42.450237556Z" level=info msg="TearDown network for sandbox \"9d676e713a9f9bdb0ef333af0423e3a978a265863987d14276ec5d59f17734f0\" successfully" Feb 13 15:28:42.450321 containerd[1438]: time="2025-02-13T15:28:42.450256198Z" level=info msg="StopPodSandbox for \"9d676e713a9f9bdb0ef333af0423e3a978a265863987d14276ec5d59f17734f0\" returns successfully" Feb 13 15:28:42.450764 containerd[1438]: time="2025-02-13T15:28:42.450740296Z" level=info msg="RemovePodSandbox for \"9d676e713a9f9bdb0ef333af0423e3a978a265863987d14276ec5d59f17734f0\"" Feb 13 15:28:42.450807 containerd[1438]: time="2025-02-13T15:28:42.450768699Z" level=info msg="Forcibly stopping sandbox \"9d676e713a9f9bdb0ef333af0423e3a978a265863987d14276ec5d59f17734f0\"" Feb 13 15:28:42.450850 containerd[1438]: time="2025-02-13T15:28:42.450831867Z" level=info msg="TearDown network for sandbox \"9d676e713a9f9bdb0ef333af0423e3a978a265863987d14276ec5d59f17734f0\" successfully" Feb 13 15:28:42.453063 containerd[1438]: time="2025-02-13T15:28:42.453036690Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9d676e713a9f9bdb0ef333af0423e3a978a265863987d14276ec5d59f17734f0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:28:42.453144 containerd[1438]: time="2025-02-13T15:28:42.453090977Z" level=info msg="RemovePodSandbox \"9d676e713a9f9bdb0ef333af0423e3a978a265863987d14276ec5d59f17734f0\" returns successfully" Feb 13 15:28:42.453550 containerd[1438]: time="2025-02-13T15:28:42.453524668Z" level=info msg="StopPodSandbox for \"3212d16950422236b26587f9b3647a1dbfd28814e3a0abc8871bf58f07a51551\"" Feb 13 15:28:42.453630 containerd[1438]: time="2025-02-13T15:28:42.453613759Z" level=info msg="TearDown network for sandbox \"3212d16950422236b26587f9b3647a1dbfd28814e3a0abc8871bf58f07a51551\" successfully" Feb 13 15:28:42.453630 containerd[1438]: time="2025-02-13T15:28:42.453628441Z" level=info msg="StopPodSandbox for \"3212d16950422236b26587f9b3647a1dbfd28814e3a0abc8871bf58f07a51551\" returns successfully" Feb 13 15:28:42.453874 containerd[1438]: time="2025-02-13T15:28:42.453856468Z" level=info msg="RemovePodSandbox for \"3212d16950422236b26587f9b3647a1dbfd28814e3a0abc8871bf58f07a51551\"" Feb 13 15:28:42.453920 containerd[1438]: time="2025-02-13T15:28:42.453878351Z" level=info msg="Forcibly stopping sandbox \"3212d16950422236b26587f9b3647a1dbfd28814e3a0abc8871bf58f07a51551\"" Feb 13 15:28:42.453950 containerd[1438]: time="2025-02-13T15:28:42.453935557Z" level=info msg="TearDown network for sandbox \"3212d16950422236b26587f9b3647a1dbfd28814e3a0abc8871bf58f07a51551\" successfully" Feb 13 15:28:42.456076 containerd[1438]: time="2025-02-13T15:28:42.456039489Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3212d16950422236b26587f9b3647a1dbfd28814e3a0abc8871bf58f07a51551\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:28:42.456125 containerd[1438]: time="2025-02-13T15:28:42.456093615Z" level=info msg="RemovePodSandbox \"3212d16950422236b26587f9b3647a1dbfd28814e3a0abc8871bf58f07a51551\" returns successfully" Feb 13 15:28:42.456461 containerd[1438]: time="2025-02-13T15:28:42.456437336Z" level=info msg="StopPodSandbox for \"365686e2ea3126d4074c6dfa28c3e1ea5feabc62ece6b663740e99931c83d2ab\"" Feb 13 15:28:42.456531 containerd[1438]: time="2025-02-13T15:28:42.456516706Z" level=info msg="TearDown network for sandbox \"365686e2ea3126d4074c6dfa28c3e1ea5feabc62ece6b663740e99931c83d2ab\" successfully" Feb 13 15:28:42.456531 containerd[1438]: time="2025-02-13T15:28:42.456529387Z" level=info msg="StopPodSandbox for \"365686e2ea3126d4074c6dfa28c3e1ea5feabc62ece6b663740e99931c83d2ab\" returns successfully" Feb 13 15:28:42.456843 containerd[1438]: time="2025-02-13T15:28:42.456819702Z" level=info msg="RemovePodSandbox for \"365686e2ea3126d4074c6dfa28c3e1ea5feabc62ece6b663740e99931c83d2ab\"" Feb 13 15:28:42.456876 containerd[1438]: time="2025-02-13T15:28:42.456847625Z" level=info msg="Forcibly stopping sandbox \"365686e2ea3126d4074c6dfa28c3e1ea5feabc62ece6b663740e99931c83d2ab\"" Feb 13 15:28:42.456922 containerd[1438]: time="2025-02-13T15:28:42.456908032Z" level=info msg="TearDown network for sandbox \"365686e2ea3126d4074c6dfa28c3e1ea5feabc62ece6b663740e99931c83d2ab\" successfully" Feb 13 15:28:42.459023 containerd[1438]: time="2025-02-13T15:28:42.458943675Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"365686e2ea3126d4074c6dfa28c3e1ea5feabc62ece6b663740e99931c83d2ab\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:28:42.459099 containerd[1438]: time="2025-02-13T15:28:42.459052208Z" level=info msg="RemovePodSandbox \"365686e2ea3126d4074c6dfa28c3e1ea5feabc62ece6b663740e99931c83d2ab\" returns successfully" Feb 13 15:28:42.459364 containerd[1438]: time="2025-02-13T15:28:42.459341203Z" level=info msg="StopPodSandbox for \"b2bddd7fb16d5ae0064f9701b94ee6a0427769ed88e47c2918d43be64eaad657\"" Feb 13 15:28:42.459440 containerd[1438]: time="2025-02-13T15:28:42.459420412Z" level=info msg="TearDown network for sandbox \"b2bddd7fb16d5ae0064f9701b94ee6a0427769ed88e47c2918d43be64eaad657\" successfully" Feb 13 15:28:42.459440 containerd[1438]: time="2025-02-13T15:28:42.459434214Z" level=info msg="StopPodSandbox for \"b2bddd7fb16d5ae0064f9701b94ee6a0427769ed88e47c2918d43be64eaad657\" returns successfully" Feb 13 15:28:42.459771 containerd[1438]: time="2025-02-13T15:28:42.459743731Z" level=info msg="RemovePodSandbox for \"b2bddd7fb16d5ae0064f9701b94ee6a0427769ed88e47c2918d43be64eaad657\"" Feb 13 15:28:42.459805 containerd[1438]: time="2025-02-13T15:28:42.459771934Z" level=info msg="Forcibly stopping sandbox \"b2bddd7fb16d5ae0064f9701b94ee6a0427769ed88e47c2918d43be64eaad657\"" Feb 13 15:28:42.459854 containerd[1438]: time="2025-02-13T15:28:42.459836702Z" level=info msg="TearDown network for sandbox \"b2bddd7fb16d5ae0064f9701b94ee6a0427769ed88e47c2918d43be64eaad657\" successfully" Feb 13 15:28:42.462119 containerd[1438]: time="2025-02-13T15:28:42.462081450Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b2bddd7fb16d5ae0064f9701b94ee6a0427769ed88e47c2918d43be64eaad657\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:28:42.462668 containerd[1438]: time="2025-02-13T15:28:42.462622554Z" level=info msg="RemovePodSandbox \"b2bddd7fb16d5ae0064f9701b94ee6a0427769ed88e47c2918d43be64eaad657\" returns successfully" Feb 13 15:28:42.463169 containerd[1438]: time="2025-02-13T15:28:42.463098971Z" level=info msg="StopPodSandbox for \"91ac3c9c17810de765c7ea367d4d4a805d2038f43c9051e647a42f6dd72df458\"" Feb 13 15:28:42.463282 containerd[1438]: time="2025-02-13T15:28:42.463265351Z" level=info msg="TearDown network for sandbox \"91ac3c9c17810de765c7ea367d4d4a805d2038f43c9051e647a42f6dd72df458\" successfully" Feb 13 15:28:42.463327 containerd[1438]: time="2025-02-13T15:28:42.463281193Z" level=info msg="StopPodSandbox for \"91ac3c9c17810de765c7ea367d4d4a805d2038f43c9051e647a42f6dd72df458\" returns successfully" Feb 13 15:28:42.463593 containerd[1438]: time="2025-02-13T15:28:42.463571068Z" level=info msg="RemovePodSandbox for \"91ac3c9c17810de765c7ea367d4d4a805d2038f43c9051e647a42f6dd72df458\"" Feb 13 15:28:42.463620 containerd[1438]: time="2025-02-13T15:28:42.463601671Z" level=info msg="Forcibly stopping sandbox \"91ac3c9c17810de765c7ea367d4d4a805d2038f43c9051e647a42f6dd72df458\"" Feb 13 15:28:42.463693 containerd[1438]: time="2025-02-13T15:28:42.463678801Z" level=info msg="TearDown network for sandbox \"91ac3c9c17810de765c7ea367d4d4a805d2038f43c9051e647a42f6dd72df458\" successfully" Feb 13 15:28:42.466548 containerd[1438]: time="2025-02-13T15:28:42.466183019Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"91ac3c9c17810de765c7ea367d4d4a805d2038f43c9051e647a42f6dd72df458\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:28:42.466548 containerd[1438]: time="2025-02-13T15:28:42.466271830Z" level=info msg="RemovePodSandbox \"91ac3c9c17810de765c7ea367d4d4a805d2038f43c9051e647a42f6dd72df458\" returns successfully" Feb 13 15:28:42.467416 containerd[1438]: time="2025-02-13T15:28:42.467390044Z" level=info msg="StopPodSandbox for \"0bd54dba73f69ad05fdd64de844dfb9298048bde545384bf327983a09dc6b98c\"" Feb 13 15:28:42.467657 containerd[1438]: time="2025-02-13T15:28:42.467555783Z" level=info msg="TearDown network for sandbox \"0bd54dba73f69ad05fdd64de844dfb9298048bde545384bf327983a09dc6b98c\" successfully" Feb 13 15:28:42.467657 containerd[1438]: time="2025-02-13T15:28:42.467571065Z" level=info msg="StopPodSandbox for \"0bd54dba73f69ad05fdd64de844dfb9298048bde545384bf327983a09dc6b98c\" returns successfully" Feb 13 15:28:42.467930 containerd[1438]: time="2025-02-13T15:28:42.467909186Z" level=info msg="RemovePodSandbox for \"0bd54dba73f69ad05fdd64de844dfb9298048bde545384bf327983a09dc6b98c\"" Feb 13 15:28:42.469104 containerd[1438]: time="2025-02-13T15:28:42.468101168Z" level=info msg="Forcibly stopping sandbox \"0bd54dba73f69ad05fdd64de844dfb9298048bde545384bf327983a09dc6b98c\"" Feb 13 15:28:42.469104 containerd[1438]: time="2025-02-13T15:28:42.468179338Z" level=info msg="TearDown network for sandbox \"0bd54dba73f69ad05fdd64de844dfb9298048bde545384bf327983a09dc6b98c\" successfully" Feb 13 15:28:42.470400 containerd[1438]: time="2025-02-13T15:28:42.470368839Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0bd54dba73f69ad05fdd64de844dfb9298048bde545384bf327983a09dc6b98c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:28:42.470463 containerd[1438]: time="2025-02-13T15:28:42.470421205Z" level=info msg="RemovePodSandbox \"0bd54dba73f69ad05fdd64de844dfb9298048bde545384bf327983a09dc6b98c\" returns successfully" Feb 13 15:28:42.470916 containerd[1438]: time="2025-02-13T15:28:42.470889661Z" level=info msg="StopPodSandbox for \"39d45d65b330807fcecc7aa43e2979e258bdabd910b8af148e084f968f291a28\"" Feb 13 15:28:42.470987 containerd[1438]: time="2025-02-13T15:28:42.470970791Z" level=info msg="TearDown network for sandbox \"39d45d65b330807fcecc7aa43e2979e258bdabd910b8af148e084f968f291a28\" successfully" Feb 13 15:28:42.470987 containerd[1438]: time="2025-02-13T15:28:42.470984393Z" level=info msg="StopPodSandbox for \"39d45d65b330807fcecc7aa43e2979e258bdabd910b8af148e084f968f291a28\" returns successfully" Feb 13 15:28:42.471774 containerd[1438]: time="2025-02-13T15:28:42.471307751Z" level=info msg="RemovePodSandbox for \"39d45d65b330807fcecc7aa43e2979e258bdabd910b8af148e084f968f291a28\"" Feb 13 15:28:42.471774 containerd[1438]: time="2025-02-13T15:28:42.471334794Z" level=info msg="Forcibly stopping sandbox \"39d45d65b330807fcecc7aa43e2979e258bdabd910b8af148e084f968f291a28\"" Feb 13 15:28:42.471774 containerd[1438]: time="2025-02-13T15:28:42.471394242Z" level=info msg="TearDown network for sandbox \"39d45d65b330807fcecc7aa43e2979e258bdabd910b8af148e084f968f291a28\" successfully" Feb 13 15:28:42.474209 containerd[1438]: time="2025-02-13T15:28:42.474174653Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"39d45d65b330807fcecc7aa43e2979e258bdabd910b8af148e084f968f291a28\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:28:42.474329 containerd[1438]: time="2025-02-13T15:28:42.474312030Z" level=info msg="RemovePodSandbox \"39d45d65b330807fcecc7aa43e2979e258bdabd910b8af148e084f968f291a28\" returns successfully"
Feb 13 15:28:42.474745 containerd[1438]: time="2025-02-13T15:28:42.474714958Z" level=info msg="StopPodSandbox for \"ebd99b89a667d6242d41d6f195e59eb6706c8195f2e4a6e0ccb977bed11e5855\""
Feb 13 15:28:42.474819 containerd[1438]: time="2025-02-13T15:28:42.474800048Z" level=info msg="TearDown network for sandbox \"ebd99b89a667d6242d41d6f195e59eb6706c8195f2e4a6e0ccb977bed11e5855\" successfully"
Feb 13 15:28:42.474819 containerd[1438]: time="2025-02-13T15:28:42.474814290Z" level=info msg="StopPodSandbox for \"ebd99b89a667d6242d41d6f195e59eb6706c8195f2e4a6e0ccb977bed11e5855\" returns successfully"
Feb 13 15:28:42.476114 containerd[1438]: time="2025-02-13T15:28:42.475089043Z" level=info msg="RemovePodSandbox for \"ebd99b89a667d6242d41d6f195e59eb6706c8195f2e4a6e0ccb977bed11e5855\""
Feb 13 15:28:42.476114 containerd[1438]: time="2025-02-13T15:28:42.475118126Z" level=info msg="Forcibly stopping sandbox \"ebd99b89a667d6242d41d6f195e59eb6706c8195f2e4a6e0ccb977bed11e5855\""
Feb 13 15:28:42.476114 containerd[1438]: time="2025-02-13T15:28:42.475187854Z" level=info msg="TearDown network for sandbox \"ebd99b89a667d6242d41d6f195e59eb6706c8195f2e4a6e0ccb977bed11e5855\" successfully"
Feb 13 15:28:42.477273 containerd[1438]: time="2025-02-13T15:28:42.477238259Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ebd99b89a667d6242d41d6f195e59eb6706c8195f2e4a6e0ccb977bed11e5855\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:28:42.477465 containerd[1438]: time="2025-02-13T15:28:42.477432162Z" level=info msg="RemovePodSandbox \"ebd99b89a667d6242d41d6f195e59eb6706c8195f2e4a6e0ccb977bed11e5855\" returns successfully"
Feb 13 15:28:42.477807 containerd[1438]: time="2025-02-13T15:28:42.477780604Z" level=info msg="StopPodSandbox for \"0f2c7fb92a9842fe2f78056fed98131e4b938b6c6d6db9d280cff119ae6753f6\""
Feb 13 15:28:42.477880 containerd[1438]: time="2025-02-13T15:28:42.477861213Z" level=info msg="TearDown network for sandbox \"0f2c7fb92a9842fe2f78056fed98131e4b938b6c6d6db9d280cff119ae6753f6\" successfully"
Feb 13 15:28:42.477880 containerd[1438]: time="2025-02-13T15:28:42.477875775Z" level=info msg="StopPodSandbox for \"0f2c7fb92a9842fe2f78056fed98131e4b938b6c6d6db9d280cff119ae6753f6\" returns successfully"
Feb 13 15:28:42.478308 containerd[1438]: time="2025-02-13T15:28:42.478286304Z" level=info msg="RemovePodSandbox for \"0f2c7fb92a9842fe2f78056fed98131e4b938b6c6d6db9d280cff119ae6753f6\""
Feb 13 15:28:42.478409 containerd[1438]: time="2025-02-13T15:28:42.478392917Z" level=info msg="Forcibly stopping sandbox \"0f2c7fb92a9842fe2f78056fed98131e4b938b6c6d6db9d280cff119ae6753f6\""
Feb 13 15:28:42.478573 containerd[1438]: time="2025-02-13T15:28:42.478504810Z" level=info msg="TearDown network for sandbox \"0f2c7fb92a9842fe2f78056fed98131e4b938b6c6d6db9d280cff119ae6753f6\" successfully"
Feb 13 15:28:42.480736 containerd[1438]: time="2025-02-13T15:28:42.480687671Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0f2c7fb92a9842fe2f78056fed98131e4b938b6c6d6db9d280cff119ae6753f6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:28:42.480870 containerd[1438]: time="2025-02-13T15:28:42.480853091Z" level=info msg="RemovePodSandbox \"0f2c7fb92a9842fe2f78056fed98131e4b938b6c6d6db9d280cff119ae6753f6\" returns successfully"
Feb 13 15:28:42.481364 containerd[1438]: time="2025-02-13T15:28:42.481293783Z" level=info msg="StopPodSandbox for \"d0c2eb6f93193316e44af9d4dbdeb228109b616fedaa816c6c35c6dd7186ac09\""
Feb 13 15:28:42.481462 containerd[1438]: time="2025-02-13T15:28:42.481443521Z" level=info msg="TearDown network for sandbox \"d0c2eb6f93193316e44af9d4dbdeb228109b616fedaa816c6c35c6dd7186ac09\" successfully"
Feb 13 15:28:42.481508 containerd[1438]: time="2025-02-13T15:28:42.481460603Z" level=info msg="StopPodSandbox for \"d0c2eb6f93193316e44af9d4dbdeb228109b616fedaa816c6c35c6dd7186ac09\" returns successfully"
Feb 13 15:28:42.481743 containerd[1438]: time="2025-02-13T15:28:42.481705792Z" level=info msg="RemovePodSandbox for \"d0c2eb6f93193316e44af9d4dbdeb228109b616fedaa816c6c35c6dd7186ac09\""
Feb 13 15:28:42.481743 containerd[1438]: time="2025-02-13T15:28:42.481733556Z" level=info msg="Forcibly stopping sandbox \"d0c2eb6f93193316e44af9d4dbdeb228109b616fedaa816c6c35c6dd7186ac09\""
Feb 13 15:28:42.481805 containerd[1438]: time="2025-02-13T15:28:42.481791963Z" level=info msg="TearDown network for sandbox \"d0c2eb6f93193316e44af9d4dbdeb228109b616fedaa816c6c35c6dd7186ac09\" successfully"
Feb 13 15:28:42.483815 containerd[1438]: time="2025-02-13T15:28:42.483777120Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d0c2eb6f93193316e44af9d4dbdeb228109b616fedaa816c6c35c6dd7186ac09\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:28:42.483862 containerd[1438]: time="2025-02-13T15:28:42.483829726Z" level=info msg="RemovePodSandbox \"d0c2eb6f93193316e44af9d4dbdeb228109b616fedaa816c6c35c6dd7186ac09\" returns successfully"
Feb 13 15:28:45.973109 systemd[1]: Started sshd@19-10.0.0.93:22-10.0.0.1:42520.service - OpenSSH per-connection server daemon (10.0.0.1:42520).
Feb 13 15:28:46.016336 sshd[6113]: Accepted publickey for core from 10.0.0.1 port 42520 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8
Feb 13 15:28:46.017921 sshd-session[6113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:28:46.022093 systemd-logind[1424]: New session 20 of user core.
Feb 13 15:28:46.028329 systemd[1]: Started session-20.scope - Session 20 of User core.
Feb 13 15:28:46.149428 sshd[6115]: Connection closed by 10.0.0.1 port 42520
Feb 13 15:28:46.149980 sshd-session[6113]: pam_unix(sshd:session): session closed for user core
Feb 13 15:28:46.153317 systemd[1]: sshd@19-10.0.0.93:22-10.0.0.1:42520.service: Deactivated successfully.
Feb 13 15:28:46.155388 systemd[1]: session-20.scope: Deactivated successfully.
Feb 13 15:28:46.162157 systemd-logind[1424]: Session 20 logged out. Waiting for processes to exit.
Feb 13 15:28:46.168779 systemd-logind[1424]: Removed session 20.
Feb 13 15:28:51.165439 systemd[1]: Started sshd@20-10.0.0.93:22-10.0.0.1:42532.service - OpenSSH per-connection server daemon (10.0.0.1:42532).
Feb 13 15:28:51.208545 sshd[6130]: Accepted publickey for core from 10.0.0.1 port 42532 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8
Feb 13 15:28:51.210248 sshd-session[6130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:28:51.213971 systemd-logind[1424]: New session 21 of user core.
Feb 13 15:28:51.222255 systemd[1]: Started session-21.scope - Session 21 of User core.
Feb 13 15:28:51.340111 sshd[6132]: Connection closed by 10.0.0.1 port 42532
Feb 13 15:28:51.341021 sshd-session[6130]: pam_unix(sshd:session): session closed for user core
Feb 13 15:28:51.345270 systemd[1]: sshd@20-10.0.0.93:22-10.0.0.1:42532.service: Deactivated successfully.
Feb 13 15:28:51.347090 systemd[1]: session-21.scope: Deactivated successfully.
Feb 13 15:28:51.349356 systemd-logind[1424]: Session 21 logged out. Waiting for processes to exit.
Feb 13 15:28:51.350398 systemd-logind[1424]: Removed session 21.