Feb 13 15:38:28.881551 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 13 15:38:28.881572 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Thu Feb 13 14:02:42 -00 2025
Feb 13 15:38:28.881582 kernel: KASLR enabled
Feb 13 15:38:28.881587 kernel: efi: EFI v2.7 by EDK II
Feb 13 15:38:28.881593 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218
Feb 13 15:38:28.881598 kernel: random: crng init done
Feb 13 15:38:28.881605 kernel: secureboot: Secure boot disabled
Feb 13 15:38:28.881611 kernel: ACPI: Early table checksum verification disabled
Feb 13 15:38:28.881617 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Feb 13 15:38:28.881624 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Feb 13 15:38:28.881630 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:38:28.881636 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:38:28.881642 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:38:28.881648 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:38:28.881655 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:38:28.881662 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:38:28.881668 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:38:28.881674 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:38:28.881681 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:38:28.881687 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Feb 13 15:38:28.881693 kernel: NUMA: Failed to initialise from firmware
Feb 13 15:38:28.881699 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 15:38:28.881705 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff]
Feb 13 15:38:28.881711 kernel: Zone ranges:
Feb 13 15:38:28.881717 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 15:38:28.881725 kernel: DMA32 empty
Feb 13 15:38:28.881730 kernel: Normal empty
Feb 13 15:38:28.881736 kernel: Movable zone start for each node
Feb 13 15:38:28.881742 kernel: Early memory node ranges
Feb 13 15:38:28.881748 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff]
Feb 13 15:38:28.881755 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff]
Feb 13 15:38:28.881761 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff]
Feb 13 15:38:28.881767 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Feb 13 15:38:28.881773 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Feb 13 15:38:28.881778 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Feb 13 15:38:28.881784 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Feb 13 15:38:28.881798 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Feb 13 15:38:28.881807 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Feb 13 15:38:28.881813 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 15:38:28.881820 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Feb 13 15:38:28.881828 kernel: psci: probing for conduit method from ACPI.
Feb 13 15:38:28.881835 kernel: psci: PSCIv1.1 detected in firmware.
Feb 13 15:38:28.881841 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 15:38:28.881849 kernel: psci: Trusted OS migration not required
Feb 13 15:38:28.881855 kernel: psci: SMC Calling Convention v1.1
Feb 13 15:38:28.881862 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Feb 13 15:38:28.881868 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 15:38:28.881875 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 15:38:28.881882 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Feb 13 15:38:28.881888 kernel: Detected PIPT I-cache on CPU0
Feb 13 15:38:28.881895 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 15:38:28.881901 kernel: CPU features: detected: Hardware dirty bit management
Feb 13 15:38:28.881908 kernel: CPU features: detected: Spectre-v4
Feb 13 15:38:28.881915 kernel: CPU features: detected: Spectre-BHB
Feb 13 15:38:28.881922 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 13 15:38:28.881928 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 13 15:38:28.881934 kernel: CPU features: detected: ARM erratum 1418040
Feb 13 15:38:28.881941 kernel: CPU features: detected: SSBS not fully self-synchronizing
Feb 13 15:38:28.881947 kernel: alternatives: applying boot alternatives
Feb 13 15:38:28.881955 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=685b18f1e2a119f561f35348e788538aade62ddb9fa889a87d9e00058aaa4b5a
Feb 13 15:38:28.881962 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 15:38:28.881968 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 15:38:28.881975 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 15:38:28.881981 kernel: Fallback order for Node 0: 0
Feb 13 15:38:28.881989 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Feb 13 15:38:28.881995 kernel: Policy zone: DMA
Feb 13 15:38:28.882001 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 15:38:28.882008 kernel: software IO TLB: area num 4.
Feb 13 15:38:28.882014 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Feb 13 15:38:28.882021 kernel: Memory: 2385936K/2572288K available (10304K kernel code, 2184K rwdata, 8092K rodata, 39936K init, 897K bss, 186352K reserved, 0K cma-reserved)
Feb 13 15:38:28.882028 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 13 15:38:28.882034 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 15:38:28.882041 kernel: rcu: RCU event tracing is enabled.
Feb 13 15:38:28.882048 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 13 15:38:28.882055 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 15:38:28.882061 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 15:38:28.882069 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 15:38:28.882076 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 13 15:38:28.882082 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 15:38:28.882088 kernel: GICv3: 256 SPIs implemented
Feb 13 15:38:28.882095 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 15:38:28.882101 kernel: Root IRQ handler: gic_handle_irq
Feb 13 15:38:28.882107 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Feb 13 15:38:28.882114 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Feb 13 15:38:28.882120 kernel: ITS [mem 0x08080000-0x0809ffff]
Feb 13 15:38:28.882127 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 15:38:28.882133 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 15:38:28.882142 kernel: GICv3: using LPI property table @0x00000000400f0000
Feb 13 15:38:28.882148 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Feb 13 15:38:28.882155 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 15:38:28.882161 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 15:38:28.882168 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 13 15:38:28.882174 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 13 15:38:28.882181 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 13 15:38:28.882187 kernel: arm-pv: using stolen time PV
Feb 13 15:38:28.882194 kernel: Console: colour dummy device 80x25
Feb 13 15:38:28.882201 kernel: ACPI: Core revision 20230628
Feb 13 15:38:28.882208 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 13 15:38:28.882216 kernel: pid_max: default: 32768 minimum: 301
Feb 13 15:38:28.882222 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 15:38:28.882229 kernel: landlock: Up and running.
Feb 13 15:38:28.882236 kernel: SELinux: Initializing.
Feb 13 15:38:28.882243 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:38:28.882250 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:38:28.882257 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 15:38:28.882264 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 15:38:28.882271 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 15:38:28.882279 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 15:38:28.882285 kernel: Platform MSI: ITS@0x8080000 domain created
Feb 13 15:38:28.882292 kernel: PCI/MSI: ITS@0x8080000 domain created
Feb 13 15:38:28.882298 kernel: Remapping and enabling EFI services.
Feb 13 15:38:28.882305 kernel: smp: Bringing up secondary CPUs ...
Feb 13 15:38:28.882312 kernel: Detected PIPT I-cache on CPU1
Feb 13 15:38:28.882318 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Feb 13 15:38:28.882325 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Feb 13 15:38:28.882332 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 15:38:28.882340 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 13 15:38:28.882346 kernel: Detected PIPT I-cache on CPU2
Feb 13 15:38:28.882357 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Feb 13 15:38:28.882366 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Feb 13 15:38:28.882373 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 15:38:28.882380 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Feb 13 15:38:28.882387 kernel: Detected PIPT I-cache on CPU3
Feb 13 15:38:28.882394 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Feb 13 15:38:28.882401 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Feb 13 15:38:28.882410 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 15:38:28.882417 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Feb 13 15:38:28.882424 kernel: smp: Brought up 1 node, 4 CPUs
Feb 13 15:38:28.882431 kernel: SMP: Total of 4 processors activated.
Feb 13 15:38:28.882438 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 15:38:28.882458 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 13 15:38:28.882465 kernel: CPU features: detected: Common not Private translations
Feb 13 15:38:28.882497 kernel: CPU features: detected: CRC32 instructions
Feb 13 15:38:28.882507 kernel: CPU features: detected: Enhanced Virtualization Traps
Feb 13 15:38:28.882515 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 13 15:38:28.882522 kernel: CPU features: detected: LSE atomic instructions
Feb 13 15:38:28.882529 kernel: CPU features: detected: Privileged Access Never
Feb 13 15:38:28.882536 kernel: CPU features: detected: RAS Extension Support
Feb 13 15:38:28.882543 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Feb 13 15:38:28.882550 kernel: CPU: All CPU(s) started at EL1
Feb 13 15:38:28.882557 kernel: alternatives: applying system-wide alternatives
Feb 13 15:38:28.882564 kernel: devtmpfs: initialized
Feb 13 15:38:28.882574 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 15:38:28.882581 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 13 15:38:28.882588 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 15:38:28.882595 kernel: SMBIOS 3.0.0 present.
Feb 13 15:38:28.882602 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Feb 13 15:38:28.882609 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 15:38:28.882616 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 15:38:28.882624 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 15:38:28.882631 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 15:38:28.882639 kernel: audit: initializing netlink subsys (disabled)
Feb 13 15:38:28.882647 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
Feb 13 15:38:28.882654 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 15:38:28.882661 kernel: cpuidle: using governor menu
Feb 13 15:38:28.882668 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 15:38:28.882675 kernel: ASID allocator initialised with 32768 entries
Feb 13 15:38:28.882682 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 15:38:28.882689 kernel: Serial: AMBA PL011 UART driver
Feb 13 15:38:28.882696 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Feb 13 15:38:28.882704 kernel: Modules: 0 pages in range for non-PLT usage
Feb 13 15:38:28.882711 kernel: Modules: 508880 pages in range for PLT usage
Feb 13 15:38:28.882718 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 15:38:28.882725 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 15:38:28.882732 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 15:38:28.882739 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 15:38:28.882746 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 15:38:28.882753 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 15:38:28.882760 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 15:38:28.882768 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 15:38:28.882775 kernel: ACPI: Added _OSI(Module Device)
Feb 13 15:38:28.882782 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 15:38:28.882793 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 15:38:28.882800 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 15:38:28.882807 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 15:38:28.882814 kernel: ACPI: Interpreter enabled
Feb 13 15:38:28.882821 kernel: ACPI: Using GIC for interrupt routing
Feb 13 15:38:28.882828 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 15:38:28.882835 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Feb 13 15:38:28.882844 kernel: printk: console [ttyAMA0] enabled
Feb 13 15:38:28.882851 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 15:38:28.882976 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 15:38:28.883047 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 15:38:28.883112 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 15:38:28.883172 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Feb 13 15:38:28.883233 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Feb 13 15:38:28.883249 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Feb 13 15:38:28.883256 kernel: PCI host bridge to bus 0000:00
Feb 13 15:38:28.883321 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Feb 13 15:38:28.883378 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 13 15:38:28.883432 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Feb 13 15:38:28.883504 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 15:38:28.883583 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Feb 13 15:38:28.883663 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 15:38:28.883742 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Feb 13 15:38:28.883814 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Feb 13 15:38:28.883879 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 15:38:28.883944 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 15:38:28.884005 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Feb 13 15:38:28.884070 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Feb 13 15:38:28.884144 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Feb 13 15:38:28.884202 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 13 15:38:28.884271 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Feb 13 15:38:28.884281 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 15:38:28.884288 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 15:38:28.884295 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 15:38:28.884302 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 15:38:28.884311 kernel: iommu: Default domain type: Translated
Feb 13 15:38:28.884318 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 15:38:28.884325 kernel: efivars: Registered efivars operations
Feb 13 15:38:28.884333 kernel: vgaarb: loaded
Feb 13 15:38:28.884340 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 15:38:28.884347 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 15:38:28.884354 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 15:38:28.884361 kernel: pnp: PnP ACPI init
Feb 13 15:38:28.884430 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Feb 13 15:38:28.884459 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 15:38:28.884470 kernel: NET: Registered PF_INET protocol family
Feb 13 15:38:28.884477 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 15:38:28.884485 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 15:38:28.884492 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 15:38:28.884499 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 15:38:28.884506 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 15:38:28.884513 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 15:38:28.884523 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:38:28.884530 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:38:28.884537 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 15:38:28.884544 kernel: PCI: CLS 0 bytes, default 64
Feb 13 15:38:28.884551 kernel: kvm [1]: HYP mode not available
Feb 13 15:38:28.884558 kernel: Initialise system trusted keyrings
Feb 13 15:38:28.884565 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 15:38:28.884572 kernel: Key type asymmetric registered
Feb 13 15:38:28.884579 kernel: Asymmetric key parser 'x509' registered
Feb 13 15:38:28.884588 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 15:38:28.884595 kernel: io scheduler mq-deadline registered
Feb 13 15:38:28.884602 kernel: io scheduler kyber registered
Feb 13 15:38:28.884609 kernel: io scheduler bfq registered
Feb 13 15:38:28.884616 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 13 15:38:28.884623 kernel: ACPI: button: Power Button [PWRB]
Feb 13 15:38:28.884630 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 13 15:38:28.884701 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Feb 13 15:38:28.884711 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 15:38:28.884719 kernel: thunder_xcv, ver 1.0
Feb 13 15:38:28.884727 kernel: thunder_bgx, ver 1.0
Feb 13 15:38:28.884734 kernel: nicpf, ver 1.0
Feb 13 15:38:28.884741 kernel: nicvf, ver 1.0
Feb 13 15:38:28.884818 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 15:38:28.884878 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T15:38:28 UTC (1739461108)
Feb 13 15:38:28.884888 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 15:38:28.884895 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Feb 13 15:38:28.884904 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 15:38:28.884911 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 15:38:28.884918 kernel: NET: Registered PF_INET6 protocol family
Feb 13 15:38:28.884925 kernel: Segment Routing with IPv6
Feb 13 15:38:28.884932 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 15:38:28.884939 kernel: NET: Registered PF_PACKET protocol family
Feb 13 15:38:28.884946 kernel: Key type dns_resolver registered
Feb 13 15:38:28.884953 kernel: registered taskstats version 1
Feb 13 15:38:28.884960 kernel: Loading compiled-in X.509 certificates
Feb 13 15:38:28.884967 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 62d673f884efd54b6d6ef802a9b879413c8a346e'
Feb 13 15:38:28.884976 kernel: Key type .fscrypt registered
Feb 13 15:38:28.884983 kernel: Key type fscrypt-provisioning registered
Feb 13 15:38:28.884990 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 15:38:28.884997 kernel: ima: Allocated hash algorithm: sha1
Feb 13 15:38:28.885003 kernel: ima: No architecture policies found
Feb 13 15:38:28.885010 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 15:38:28.885017 kernel: clk: Disabling unused clocks
Feb 13 15:38:28.885024 kernel: Freeing unused kernel memory: 39936K
Feb 13 15:38:28.885032 kernel: Run /init as init process
Feb 13 15:38:28.885039 kernel: with arguments:
Feb 13 15:38:28.885046 kernel: /init
Feb 13 15:38:28.885052 kernel: with environment:
Feb 13 15:38:28.885059 kernel: HOME=/
Feb 13 15:38:28.885066 kernel: TERM=linux
Feb 13 15:38:28.885072 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 15:38:28.885081 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 15:38:28.885091 systemd[1]: Detected virtualization kvm.
Feb 13 15:38:28.885099 systemd[1]: Detected architecture arm64.
Feb 13 15:38:28.885106 systemd[1]: Running in initrd.
Feb 13 15:38:28.885113 systemd[1]: No hostname configured, using default hostname.
Feb 13 15:38:28.885120 systemd[1]: Hostname set to .
Feb 13 15:38:28.885128 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 15:38:28.885135 systemd[1]: Queued start job for default target initrd.target.
Feb 13 15:38:28.885142 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:38:28.885151 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:38:28.885159 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 15:38:28.885166 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:38:28.885174 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 15:38:28.885182 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 15:38:28.885191 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 15:38:28.885198 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 15:38:28.885207 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:38:28.885215 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:38:28.885222 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:38:28.885230 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:38:28.885237 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:38:28.885245 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:38:28.885252 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:38:28.885260 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:38:28.885267 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 15:38:28.885276 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 15:38:28.885284 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:38:28.885291 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:38:28.885299 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:38:28.885306 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:38:28.885313 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 15:38:28.885321 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:38:28.885328 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 15:38:28.885337 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 15:38:28.885344 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:38:28.885352 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:38:28.885359 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:38:28.885366 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 15:38:28.885374 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:38:28.885381 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 15:38:28.885391 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 15:38:28.885398 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:38:28.885421 systemd-journald[238]: Collecting audit messages is disabled.
Feb 13 15:38:28.885450 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:38:28.885460 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:38:28.885481 systemd-journald[238]: Journal started
Feb 13 15:38:28.885505 systemd-journald[238]: Runtime Journal (/run/log/journal/4c9f701a2a564f098ea05fcbffd04b35) is 5.9M, max 47.3M, 41.4M free.
Feb 13 15:38:28.877171 systemd-modules-load[240]: Inserted module 'overlay'
Feb 13 15:38:28.887577 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:38:28.890466 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 15:38:28.890380 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:38:28.893384 kernel: Bridge firewalling registered
Feb 13 15:38:28.891752 systemd-modules-load[240]: Inserted module 'br_netfilter'
Feb 13 15:38:28.893618 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:38:28.896469 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:38:28.900641 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:38:28.902336 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:38:28.904803 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:38:28.906040 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:38:28.908835 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 15:38:28.910998 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:38:28.913054 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:38:28.921628 dracut-cmdline[274]: dracut-dracut-053
Feb 13 15:38:28.927045 dracut-cmdline[274]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=685b18f1e2a119f561f35348e788538aade62ddb9fa889a87d9e00058aaa4b5a
Feb 13 15:38:28.952460 systemd-resolved[278]: Positive Trust Anchors:
Feb 13 15:38:28.952476 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:38:28.952508 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:38:28.957023 systemd-resolved[278]: Defaulting to hostname 'linux'.
Feb 13 15:38:28.957905 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:38:28.959720 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:38:28.990470 kernel: SCSI subsystem initialized
Feb 13 15:38:28.996464 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 15:38:29.002475 kernel: iscsi: registered transport (tcp)
Feb 13 15:38:29.014467 kernel: iscsi: registered transport (qla4xxx)
Feb 13 15:38:29.014483 kernel: QLogic iSCSI HBA Driver
Feb 13 15:38:29.053996 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:38:29.060622 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 15:38:29.077942 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 15:38:29.077982 kernel: device-mapper: uevent: version 1.0.3
Feb 13 15:38:29.079142 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 15:38:29.125464 kernel: raid6: neonx8 gen() 15801 MB/s
Feb 13 15:38:29.142469 kernel: raid6: neonx4 gen() 15814 MB/s
Feb 13 15:38:29.159459 kernel: raid6: neonx2 gen() 13202 MB/s
Feb 13 15:38:29.176457 kernel: raid6: neonx1 gen() 10541 MB/s
Feb 13 15:38:29.193460 kernel: raid6: int64x8 gen() 6796 MB/s
Feb 13 15:38:29.210458 kernel: raid6: int64x4 gen() 7350 MB/s
Feb 13 15:38:29.227459 kernel: raid6: int64x2 gen() 6114 MB/s
Feb 13 15:38:29.244458 kernel: raid6: int64x1 gen() 5059 MB/s
Feb 13 15:38:29.244470 kernel: raid6: using algorithm neonx4 gen() 15814 MB/s
Feb 13 15:38:29.261460 kernel: raid6: .... xor() 12410 MB/s, rmw enabled
Feb 13 15:38:29.261473 kernel: raid6: using neon recovery algorithm
Feb 13 15:38:29.266760 kernel: xor: measuring software checksum speed
Feb 13 15:38:29.266777 kernel: 8regs : 21015 MB/sec
Feb 13 15:38:29.266792 kernel: 32regs : 21716 MB/sec
Feb 13 15:38:29.267696 kernel: arm64_neon : 27195 MB/sec
Feb 13 15:38:29.267711 kernel: xor: using function: arm64_neon (27195 MB/sec)
Feb 13 15:38:29.320473 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 15:38:29.332284 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:38:29.345693 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:38:29.357846 systemd-udevd[462]: Using default interface naming scheme 'v255'.
Feb 13 15:38:29.361131 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:38:29.364120 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 15:38:29.378091 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation
Feb 13 15:38:29.405055 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:38:29.416608 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 15:38:29.459637 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:38:29.470606 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 15:38:29.485101 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 15:38:29.487591 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 15:38:29.488488 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:38:29.490973 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 15:38:29.497587 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 15:38:29.508841 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 15:38:29.516924 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Feb 13 15:38:29.521134 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Feb 13 15:38:29.521263 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 15:38:29.521275 kernel: GPT:9289727 != 19775487 Feb 13 15:38:29.521285 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 15:38:29.521302 kernel: GPT:9289727 != 19775487 Feb 13 15:38:29.521310 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 15:38:29.521319 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 15:38:29.520208 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 15:38:29.520326 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:38:29.522505 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:38:29.526596 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Feb 13 15:38:29.526741 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:38:29.528767 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:38:29.537573 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:38:29.542497 kernel: BTRFS: device fsid dbbe73f5-49db-4e16-b023-d47ce63b488f devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (510) Feb 13 15:38:29.544478 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (508) Feb 13 15:38:29.549227 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:38:29.554650 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Feb 13 15:38:29.563137 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Feb 13 15:38:29.567022 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Feb 13 15:38:29.568042 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Feb 13 15:38:29.573585 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 15:38:29.586612 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 15:38:29.588224 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:38:29.594617 disk-uuid[551]: Primary Header is updated. Feb 13 15:38:29.594617 disk-uuid[551]: Secondary Entries is updated. Feb 13 15:38:29.594617 disk-uuid[551]: Secondary Header is updated. Feb 13 15:38:29.599828 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 15:38:29.608620 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Feb 13 15:38:30.613478 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 15:38:30.613720 disk-uuid[552]: The operation has completed successfully. Feb 13 15:38:30.635064 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 15:38:30.635177 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 15:38:30.656625 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 15:38:30.659399 sh[571]: Success Feb 13 15:38:30.674462 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Feb 13 15:38:30.701151 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 15:38:30.709774 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 15:38:30.711811 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Feb 13 15:38:30.721650 kernel: BTRFS info (device dm-0): first mount of filesystem dbbe73f5-49db-4e16-b023-d47ce63b488f Feb 13 15:38:30.721686 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Feb 13 15:38:30.721697 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 15:38:30.722992 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 15:38:30.723007 kernel: BTRFS info (device dm-0): using free space tree Feb 13 15:38:30.726659 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 15:38:30.728031 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 15:38:30.738614 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 15:38:30.740433 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Feb 13 15:38:30.747636 kernel: BTRFS info (device vda6): first mount of filesystem f03a17c4-6ca2-4f02-a9a3-5e771d63df74 Feb 13 15:38:30.747682 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 15:38:30.747692 kernel: BTRFS info (device vda6): using free space tree Feb 13 15:38:30.750479 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 15:38:30.758246 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 15:38:30.759093 kernel: BTRFS info (device vda6): last unmount of filesystem f03a17c4-6ca2-4f02-a9a3-5e771d63df74 Feb 13 15:38:30.763638 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 15:38:30.774606 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 15:38:30.836718 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 15:38:30.849628 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 15:38:30.868234 ignition[662]: Ignition 2.20.0 Feb 13 15:38:30.868245 ignition[662]: Stage: fetch-offline Feb 13 15:38:30.868284 ignition[662]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:38:30.868292 ignition[662]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:38:30.868463 ignition[662]: parsed url from cmdline: "" Feb 13 15:38:30.868467 ignition[662]: no config URL provided Feb 13 15:38:30.868472 ignition[662]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 15:38:30.868480 ignition[662]: no config at "/usr/lib/ignition/user.ign" Feb 13 15:38:30.868506 ignition[662]: op(1): [started] loading QEMU firmware config module Feb 13 15:38:30.868510 ignition[662]: op(1): executing: "modprobe" "qemu_fw_cfg" Feb 13 15:38:30.875143 ignition[662]: op(1): [finished] loading QEMU firmware config module Feb 13 15:38:30.875177 systemd-networkd[761]: lo: Link UP Feb 13 15:38:30.875181 systemd-networkd[761]: lo: Gained carrier
Feb 13 15:38:30.876008 systemd-networkd[761]: Enumeration completed Feb 13 15:38:30.876620 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:38:30.876867 systemd-networkd[761]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:38:30.876870 systemd-networkd[761]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:38:30.878575 systemd[1]: Reached target network.target - Network. Feb 13 15:38:30.880438 systemd-networkd[761]: eth0: Link UP Feb 13 15:38:30.880455 systemd-networkd[761]: eth0: Gained carrier Feb 13 15:38:30.880463 systemd-networkd[761]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:38:30.898499 systemd-networkd[761]: eth0: DHCPv4 address 10.0.0.113/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 15:38:30.922380 ignition[662]: parsing config with SHA512: 3012be598bf2560a010ceb497a4d26088fcde62b29c32ca559c878376e77ceb375a33fb486557d3ffd99b1206eb396a545ab918e52995c3de114175d818f4f27 Feb 13 15:38:30.927096 unknown[662]: fetched base config from "system" Feb 13 15:38:30.927106 unknown[662]: fetched user config from "qemu" Feb 13 15:38:30.927483 ignition[662]: fetch-offline: fetch-offline passed Feb 13 15:38:30.928749 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 15:38:30.927554 ignition[662]: Ignition finished successfully Feb 13 15:38:30.930335 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 13 15:38:30.942618 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 15:38:30.952756 ignition[768]: Ignition 2.20.0 Feb 13 15:38:30.952766 ignition[768]: Stage: kargs Feb 13 15:38:30.952935 ignition[768]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:38:30.952945 ignition[768]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:38:30.953769 ignition[768]: kargs: kargs passed Feb 13 15:38:30.953831 ignition[768]: Ignition finished successfully Feb 13 15:38:30.955841 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 15:38:30.964650 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 15:38:30.973933 ignition[778]: Ignition 2.20.0 Feb 13 15:38:30.973944 ignition[778]: Stage: disks Feb 13 15:38:30.974093 ignition[778]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:38:30.974103 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:38:30.974925 ignition[778]: disks: disks passed Feb 13 15:38:30.976351 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 15:38:30.974969 ignition[778]: Ignition finished successfully Feb 13 15:38:30.977741 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 15:38:30.978904 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 15:38:30.980220 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 15:38:30.981532 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 15:38:30.983011 systemd[1]: Reached target basic.target - Basic System. Feb 13 15:38:30.993596 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 15:38:31.003931 systemd-fsck[789]: ROOT: clean, 14/553520 files, 52654/553472 blocks Feb 13 15:38:31.007626 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 15:38:31.009404 systemd[1]: Mounting sysroot.mount - /sysroot... 
Feb 13 15:38:31.053464 kernel: EXT4-fs (vda9): mounted filesystem 469d244b-00c1-45f4-bce0-c1d88e98a895 r/w with ordered data mode. Quota mode: none. Feb 13 15:38:31.054171 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 15:38:31.055335 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 15:38:31.070549 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 15:38:31.072112 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 15:38:31.073468 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Feb 13 15:38:31.077480 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (797) Feb 13 15:38:31.073510 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 15:38:31.073532 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 15:38:31.080342 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 15:38:31.084105 kernel: BTRFS info (device vda6): first mount of filesystem f03a17c4-6ca2-4f02-a9a3-5e771d63df74 Feb 13 15:38:31.084125 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 15:38:31.084135 kernel: BTRFS info (device vda6): using free space tree Feb 13 15:38:31.082228 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 15:38:31.087469 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 15:38:31.088201 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 15:38:31.126346 initrd-setup-root[821]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 15:38:31.130487 initrd-setup-root[828]: cut: /sysroot/etc/group: No such file or directory Feb 13 15:38:31.134073 initrd-setup-root[835]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 15:38:31.138008 initrd-setup-root[842]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 15:38:31.219915 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 15:38:31.228529 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 15:38:31.230021 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 15:38:31.235484 kernel: BTRFS info (device vda6): last unmount of filesystem f03a17c4-6ca2-4f02-a9a3-5e771d63df74 Feb 13 15:38:31.250504 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 15:38:31.252758 ignition[909]: INFO : Ignition 2.20.0 Feb 13 15:38:31.252758 ignition[909]: INFO : Stage: mount Feb 13 15:38:31.253922 ignition[909]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:38:31.253922 ignition[909]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:38:31.253922 ignition[909]: INFO : mount: mount passed Feb 13 15:38:31.253922 ignition[909]: INFO : Ignition finished successfully Feb 13 15:38:31.254850 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 15:38:31.266577 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 15:38:31.720954 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 15:38:31.733616 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Feb 13 15:38:31.740048 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (925) Feb 13 15:38:31.740076 kernel: BTRFS info (device vda6): first mount of filesystem f03a17c4-6ca2-4f02-a9a3-5e771d63df74 Feb 13 15:38:31.740087 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 15:38:31.740716 kernel: BTRFS info (device vda6): using free space tree Feb 13 15:38:31.743483 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 15:38:31.744215 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 15:38:31.765575 ignition[942]: INFO : Ignition 2.20.0 Feb 13 15:38:31.765575 ignition[942]: INFO : Stage: files Feb 13 15:38:31.766883 ignition[942]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:38:31.766883 ignition[942]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:38:31.766883 ignition[942]: DEBUG : files: compiled without relabeling support, skipping Feb 13 15:38:31.769627 ignition[942]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 15:38:31.769627 ignition[942]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 15:38:31.771700 ignition[942]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 15:38:31.771700 ignition[942]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 15:38:31.771700 ignition[942]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 15:38:31.771700 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 13 15:38:31.771700 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Feb 13 15:38:31.770331 unknown[942]: wrote ssh authorized keys file for user: core
Feb 13 15:38:31.819278 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 13 15:38:31.974031 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 13 15:38:31.975587 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Feb 13 15:38:31.975587 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 15:38:31.975587 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 15:38:31.975587 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 15:38:31.975587 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 15:38:31.975587 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 15:38:31.975587 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 15:38:31.975587 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 15:38:31.975587 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 15:38:31.975587 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:38:31.975587 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Feb 13 15:38:31.975587 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Feb 13 15:38:31.975587 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Feb 13 15:38:31.975587 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1 Feb 13 15:38:32.197696 systemd-networkd[761]: eth0: Gained IPv6LL Feb 13 15:38:32.297537 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Feb 13 15:38:32.486868 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Feb 13 15:38:32.486868 ignition[942]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Feb 13 15:38:32.489621 ignition[942]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 15:38:32.489621 ignition[942]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 15:38:32.489621 ignition[942]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Feb 13 15:38:32.489621 ignition[942]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Feb 13 15:38:32.489621 ignition[942]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 13 15:38:32.489621 ignition[942]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 13 15:38:32.489621 ignition[942]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Feb 13 15:38:32.489621 ignition[942]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Feb 13 15:38:32.509353 ignition[942]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Feb 13 15:38:32.512719 ignition[942]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Feb 13 15:38:32.514875 ignition[942]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Feb 13 15:38:32.514875 ignition[942]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Feb 13 15:38:32.514875 ignition[942]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 15:38:32.514875 ignition[942]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 15:38:32.514875 ignition[942]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 15:38:32.514875 ignition[942]: INFO : files: files passed Feb 13 15:38:32.514875 ignition[942]: INFO : Ignition finished successfully Feb 13 15:38:32.515390 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 15:38:32.524611 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 15:38:32.526926 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 15:38:32.527968 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 15:38:32.528049 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 15:38:32.533485 initrd-setup-root-after-ignition[971]: grep: /sysroot/oem/oem-release: No such file or directory Feb 13 15:38:32.536608 initrd-setup-root-after-ignition[973]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:38:32.536608 initrd-setup-root-after-ignition[973]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:38:32.538792 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:38:32.539566 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 15:38:32.540873 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 15:38:32.552664 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 15:38:32.570286 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 15:38:32.570390 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 15:38:32.572152 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 15:38:32.573540 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 15:38:32.574951 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 15:38:32.575696 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 15:38:32.590165 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 15:38:32.592238 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 15:38:32.603177 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:38:32.605005 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:38:32.606009 systemd[1]: Stopped target timers.target - Timer Units. 
Feb 13 15:38:32.607382 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 15:38:32.607524 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 15:38:32.609561 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 15:38:32.611110 systemd[1]: Stopped target basic.target - Basic System. Feb 13 15:38:32.612351 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 15:38:32.613639 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 15:38:32.615035 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 15:38:32.616494 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 15:38:32.617852 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 15:38:32.619266 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 15:38:32.620672 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 15:38:32.621932 systemd[1]: Stopped target swap.target - Swaps. Feb 13 15:38:32.623103 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 15:38:32.623228 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 15:38:32.624940 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:38:32.626312 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:38:32.627733 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 15:38:32.628559 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:38:32.630069 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 15:38:32.630189 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 15:38:32.632221 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
Feb 13 15:38:32.632339 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 15:38:32.633825 systemd[1]: Stopped target paths.target - Path Units. Feb 13 15:38:32.635017 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 15:38:32.638500 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:38:32.639490 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 15:38:32.641031 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 15:38:32.642166 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 15:38:32.642250 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 15:38:32.643337 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 15:38:32.643413 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 15:38:32.644539 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 15:38:32.644642 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 15:38:32.645935 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 15:38:32.646033 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 15:38:32.653612 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 15:38:32.654939 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 15:38:32.655628 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 15:38:32.655742 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:38:32.657087 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 15:38:32.657180 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 15:38:32.662175 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Feb 13 15:38:32.663484 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 15:38:32.666007 ignition[998]: INFO : Ignition 2.20.0 Feb 13 15:38:32.666007 ignition[998]: INFO : Stage: umount Feb 13 15:38:32.668078 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:38:32.668078 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:38:32.668078 ignition[998]: INFO : umount: umount passed Feb 13 15:38:32.668078 ignition[998]: INFO : Ignition finished successfully Feb 13 15:38:32.669237 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 15:38:32.669726 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 15:38:32.669816 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 15:38:32.670760 systemd[1]: Stopped target network.target - Network. Feb 13 15:38:32.671921 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 15:38:32.671975 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 15:38:32.672781 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 15:38:32.672821 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 15:38:32.674117 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 15:38:32.674151 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 15:38:32.675402 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 15:38:32.675440 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 15:38:32.676879 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 15:38:32.678114 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 15:38:32.681429 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 15:38:32.681531 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
Feb 13 15:38:32.682828 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 15:38:32.682911 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 15:38:32.685806 systemd-networkd[761]: eth0: DHCPv6 lease lost Feb 13 15:38:32.687211 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 15:38:32.687320 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 15:38:32.689145 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 15:38:32.689275 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 15:38:32.691754 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 15:38:32.691808 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:38:32.702592 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 15:38:32.703909 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 15:38:32.703976 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 15:38:32.705520 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 15:38:32.705568 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:38:32.707024 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 15:38:32.707068 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 15:38:32.708319 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 15:38:32.708360 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:38:32.710047 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:38:32.720580 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 15:38:32.720698 systemd[1]: Stopped network-cleanup.service - Network Cleanup. 
Feb 13 15:38:32.722385 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 15:38:32.722531 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:38:32.724157 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 15:38:32.724226 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 15:38:32.726045 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 15:38:32.726086 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:38:32.727347 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 15:38:32.727394 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 15:38:32.729346 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 15:38:32.729386 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 15:38:32.731311 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 15:38:32.731359 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:38:32.742610 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 15:38:32.743436 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 15:38:32.743512 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:38:32.745227 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Feb 13 15:38:32.745270 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:38:32.746765 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 15:38:32.746827 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:38:32.748484 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Feb 13 15:38:32.748525 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:38:32.750304 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 15:38:32.750389 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 15:38:32.752165 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 15:38:32.753892 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 15:38:32.763813 systemd[1]: Switching root. Feb 13 15:38:32.789259 systemd-journald[238]: Journal stopped Feb 13 15:38:33.486471 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). Feb 13 15:38:33.486541 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 15:38:33.486558 kernel: SELinux: policy capability open_perms=1 Feb 13 15:38:33.486568 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 15:38:33.486577 kernel: SELinux: policy capability always_check_network=0 Feb 13 15:38:33.486591 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 15:38:33.486600 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 15:38:33.486612 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 15:38:33.486622 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 15:38:33.486631 kernel: audit: type=1403 audit(1739461112.929:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 15:38:33.486641 systemd[1]: Successfully loaded SELinux policy in 34.840ms. Feb 13 15:38:33.486658 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.469ms. 
Feb 13 15:38:33.486669 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 15:38:33.486679 systemd[1]: Detected virtualization kvm. Feb 13 15:38:33.486694 systemd[1]: Detected architecture arm64. Feb 13 15:38:33.486704 systemd[1]: Detected first boot. Feb 13 15:38:33.486716 systemd[1]: Initializing machine ID from VM UUID. Feb 13 15:38:33.486727 zram_generator::config[1044]: No configuration found. Feb 13 15:38:33.486738 systemd[1]: Populated /etc with preset unit settings. Feb 13 15:38:33.486748 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 15:38:33.486759 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 15:38:33.486779 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 15:38:33.486793 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 15:38:33.486804 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 15:38:33.486817 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 15:38:33.486828 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 15:38:33.486843 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 15:38:33.486853 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 15:38:33.486863 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 15:38:33.486874 systemd[1]: Created slice user.slice - User and Session Slice. 
Feb 13 15:38:33.486886 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:38:33.486896 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:38:33.486907 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 15:38:33.486918 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 15:38:33.486929 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 15:38:33.486939 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 15:38:33.486949 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Feb 13 15:38:33.486964 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:38:33.486974 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 15:38:33.486985 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 15:38:33.486995 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 15:38:33.487006 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 15:38:33.487017 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:38:33.487027 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 15:38:33.487037 systemd[1]: Reached target slices.target - Slice Units. Feb 13 15:38:33.487047 systemd[1]: Reached target swap.target - Swaps. Feb 13 15:38:33.487058 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 15:38:33.487068 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 15:38:33.487078 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Feb 13 15:38:33.487089 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 15:38:33.487100 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:38:33.487111 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 15:38:33.487121 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 15:38:33.487131 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 15:38:33.487141 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 15:38:33.487151 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 15:38:33.487161 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 15:38:33.487171 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 15:38:33.487182 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 15:38:33.487197 systemd[1]: Reached target machines.target - Containers. Feb 13 15:38:33.487207 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 15:38:33.487217 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:38:33.487228 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 15:38:33.487238 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 15:38:33.487248 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:38:33.487258 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 15:38:33.487268 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Feb 13 15:38:33.487279 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 15:38:33.487289 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:38:33.487299 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 15:38:33.487310 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 15:38:33.487320 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 15:38:33.487331 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 15:38:33.487341 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 15:38:33.487351 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 15:38:33.487361 kernel: fuse: init (API version 7.39) Feb 13 15:38:33.487372 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 15:38:33.487392 kernel: loop: module loaded Feb 13 15:38:33.487402 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 15:38:33.487412 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 15:38:33.487422 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 15:38:33.487432 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 15:38:33.487464 systemd[1]: Stopped verity-setup.service. Feb 13 15:38:33.487477 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 15:38:33.487487 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 15:38:33.487499 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 15:38:33.487509 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 15:38:33.487547 systemd-journald[1108]: Collecting audit messages is disabled. 
Feb 13 15:38:33.487571 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 15:38:33.487582 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 15:38:33.487592 kernel: ACPI: bus type drm_connector registered Feb 13 15:38:33.487602 systemd-journald[1108]: Journal started Feb 13 15:38:33.487629 systemd-journald[1108]: Runtime Journal (/run/log/journal/4c9f701a2a564f098ea05fcbffd04b35) is 5.9M, max 47.3M, 41.4M free. Feb 13 15:38:33.304761 systemd[1]: Queued start job for default target multi-user.target. Feb 13 15:38:33.324503 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Feb 13 15:38:33.324882 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 15:38:33.488473 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 15:38:33.490686 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:38:33.492292 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 15:38:33.492513 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 15:38:33.493655 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:38:33.493798 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:38:33.494953 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 15:38:33.495096 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:38:33.496239 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:38:33.497302 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:38:33.498934 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 15:38:33.499071 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 15:38:33.500243 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Feb 13 15:38:33.500377 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:38:33.501739 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 15:38:33.503038 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 15:38:33.504417 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 15:38:33.505721 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 15:38:33.517664 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 15:38:33.524539 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 15:38:33.528596 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 15:38:33.529483 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 15:38:33.529531 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 15:38:33.531249 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 15:38:33.533277 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 15:38:33.535119 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 15:38:33.536044 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:38:33.537348 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 15:38:33.539098 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 15:38:33.540075 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Feb 13 15:38:33.543623 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 15:38:33.544587 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:38:33.545635 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:38:33.555425 systemd-journald[1108]: Time spent on flushing to /var/log/journal/4c9f701a2a564f098ea05fcbffd04b35 is 25.475ms for 857 entries. Feb 13 15:38:33.555425 systemd-journald[1108]: System Journal (/var/log/journal/4c9f701a2a564f098ea05fcbffd04b35) is 8.0M, max 195.6M, 187.6M free. Feb 13 15:38:33.603644 systemd-journald[1108]: Received client request to flush runtime journal. Feb 13 15:38:33.603698 kernel: loop0: detected capacity change from 0 to 116784 Feb 13 15:38:33.603717 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 15:38:33.552628 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 15:38:33.554473 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 15:38:33.557811 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:38:33.558982 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 15:38:33.563551 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 15:38:33.565165 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 15:38:33.566429 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 15:38:33.570283 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 15:38:33.585627 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... 
Feb 13 15:38:33.588276 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 15:38:33.591790 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:38:33.606006 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 15:38:33.606215 systemd-tmpfiles[1156]: ACLs are not supported, ignoring. Feb 13 15:38:33.606473 systemd-tmpfiles[1156]: ACLs are not supported, ignoring. Feb 13 15:38:33.609868 udevadm[1167]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 13 15:38:33.611288 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 15:38:33.613492 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 15:38:33.614751 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:38:33.622585 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 15:38:33.633495 kernel: loop1: detected capacity change from 0 to 194512 Feb 13 15:38:33.644643 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 15:38:33.654602 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 15:38:33.659469 kernel: loop2: detected capacity change from 0 to 113552 Feb 13 15:38:33.672989 systemd-tmpfiles[1181]: ACLs are not supported, ignoring. Feb 13 15:38:33.673007 systemd-tmpfiles[1181]: ACLs are not supported, ignoring. Feb 13 15:38:33.677104 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Feb 13 15:38:33.690661 kernel: loop3: detected capacity change from 0 to 116784 Feb 13 15:38:33.695609 kernel: loop4: detected capacity change from 0 to 194512 Feb 13 15:38:33.701492 kernel: loop5: detected capacity change from 0 to 113552 Feb 13 15:38:33.705260 (sd-merge)[1185]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Feb 13 15:38:33.705632 (sd-merge)[1185]: Merged extensions into '/usr'. Feb 13 15:38:33.711846 systemd[1]: Reloading requested from client PID 1155 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 15:38:33.711865 systemd[1]: Reloading... Feb 13 15:38:33.761472 zram_generator::config[1208]: No configuration found. Feb 13 15:38:33.855493 ldconfig[1150]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 15:38:33.871528 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:38:33.910152 systemd[1]: Reloading finished in 197 ms. Feb 13 15:38:33.939477 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 15:38:33.940592 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 15:38:33.952597 systemd[1]: Starting ensure-sysext.service... Feb 13 15:38:33.954278 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 15:38:33.970185 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 15:38:33.970407 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 15:38:33.971082 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Feb 13 15:38:33.971294 systemd-tmpfiles[1247]: ACLs are not supported, ignoring. Feb 13 15:38:33.971346 systemd-tmpfiles[1247]: ACLs are not supported, ignoring. Feb 13 15:38:33.973158 systemd[1]: Reloading requested from client PID 1246 ('systemctl') (unit ensure-sysext.service)... Feb 13 15:38:33.973173 systemd[1]: Reloading... Feb 13 15:38:33.974097 systemd-tmpfiles[1247]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 15:38:33.974108 systemd-tmpfiles[1247]: Skipping /boot Feb 13 15:38:33.982321 systemd-tmpfiles[1247]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 15:38:33.982422 systemd-tmpfiles[1247]: Skipping /boot Feb 13 15:38:34.016990 zram_generator::config[1274]: No configuration found. Feb 13 15:38:34.098102 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:38:34.136271 systemd[1]: Reloading finished in 162 ms. Feb 13 15:38:34.152536 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 15:38:34.165832 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:38:34.173143 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:38:34.175525 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 15:38:34.177484 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 15:38:34.181733 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 15:38:34.185278 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:38:34.189719 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
Feb 13 15:38:34.193333 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:38:34.194677 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:38:34.199821 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:38:34.203775 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:38:34.206839 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:38:34.210574 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 15:38:34.211866 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 15:38:34.213249 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:38:34.213396 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:38:34.214692 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:38:34.214821 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:38:34.220721 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:38:34.223062 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:38:34.227698 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:38:34.228770 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:38:34.231851 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 15:38:34.233635 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:38:34.233757 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Feb 13 15:38:34.235796 systemd-udevd[1317]: Using default interface naming scheme 'v255'. Feb 13 15:38:34.239511 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:38:34.240559 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:38:34.242057 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 15:38:34.243834 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:38:34.246874 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:38:34.251317 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 15:38:34.253208 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 15:38:34.259689 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:38:34.276031 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:38:34.279370 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 15:38:34.283887 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:38:34.286497 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:38:34.289701 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:38:34.289861 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 15:38:34.290525 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:38:34.292204 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Feb 13 15:38:34.293292 augenrules[1373]: No rules Feb 13 15:38:34.293542 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:38:34.293694 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:38:34.295045 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:38:34.295205 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:38:34.296474 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 15:38:34.296607 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:38:34.297928 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:38:34.298044 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:38:34.300082 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:38:34.301485 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:38:34.302798 systemd[1]: Finished ensure-sysext.service. Feb 13 15:38:34.318590 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Feb 13 15:38:34.325618 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 15:38:34.326592 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:38:34.326659 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:38:34.330392 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 15:38:34.367242 systemd-resolved[1313]: Positive Trust Anchors: Feb 13 15:38:34.367264 systemd-resolved[1313]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 15:38:34.367295 systemd-resolved[1313]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 15:38:34.383518 systemd-resolved[1313]: Defaulting to hostname 'linux'. Feb 13 15:38:34.388691 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1378) Feb 13 15:38:34.389932 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 15:38:34.393227 systemd-networkd[1390]: lo: Link UP Feb 13 15:38:34.393235 systemd-networkd[1390]: lo: Gained carrier Feb 13 15:38:34.393365 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:38:34.394269 systemd-networkd[1390]: Enumeration completed Feb 13 15:38:34.394543 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:38:34.395373 systemd[1]: Reached target network.target - Network. Feb 13 15:38:34.396873 systemd-networkd[1390]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:38:34.396883 systemd-networkd[1390]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Feb 13 15:38:34.397573 systemd-networkd[1390]: eth0: Link UP Feb 13 15:38:34.397579 systemd-networkd[1390]: eth0: Gained carrier Feb 13 15:38:34.397593 systemd-networkd[1390]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:38:34.406673 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 15:38:34.411319 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 15:38:34.414002 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 15:38:34.414525 systemd-networkd[1390]: eth0: DHCPv4 address 10.0.0.113/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 15:38:34.415035 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 15:38:34.415798 systemd-timesyncd[1392]: Network configuration changed, trying to establish connection. Feb 13 15:38:34.416208 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 15:38:34.417554 systemd-timesyncd[1392]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 13 15:38:34.417617 systemd-timesyncd[1392]: Initial clock synchronization to Thu 2025-02-13 15:38:34.591231 UTC. Feb 13 15:38:34.437482 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 15:38:34.466704 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:38:34.476740 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 15:38:34.479326 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 15:38:34.497988 lvm[1410]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:38:34.505752 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Feb 13 15:38:34.540108 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 15:38:34.541340 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:38:34.542223 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 15:38:34.543121 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 15:38:34.544060 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 15:38:34.545204 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 15:38:34.546126 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 15:38:34.547071 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 15:38:34.547966 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 15:38:34.548004 systemd[1]: Reached target paths.target - Path Units. Feb 13 15:38:34.548657 systemd[1]: Reached target timers.target - Timer Units. Feb 13 15:38:34.550350 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 15:38:34.552759 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 15:38:34.565469 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 15:38:34.567500 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 15:38:34.568799 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 15:38:34.569669 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 15:38:34.570334 systemd[1]: Reached target basic.target - Basic System. Feb 13 15:38:34.571096 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
Feb 13 15:38:34.571126 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:38:34.572065 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 15:38:34.573833 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 15:38:34.575361 lvm[1417]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:38:34.577148 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 15:38:34.581641 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 15:38:34.582400 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 15:38:34.583723 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 15:38:34.585014 jq[1420]: false Feb 13 15:38:34.586428 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 15:38:34.592508 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 15:38:34.594401 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 15:38:34.599179 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 15:38:34.602411 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 15:38:34.602871 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Feb 13 15:38:34.603193 extend-filesystems[1421]: Found loop3 Feb 13 15:38:34.603193 extend-filesystems[1421]: Found loop4 Feb 13 15:38:34.603193 extend-filesystems[1421]: Found loop5 Feb 13 15:38:34.603193 extend-filesystems[1421]: Found vda Feb 13 15:38:34.603193 extend-filesystems[1421]: Found vda1 Feb 13 15:38:34.603193 extend-filesystems[1421]: Found vda2 Feb 13 15:38:34.603193 extend-filesystems[1421]: Found vda3 Feb 13 15:38:34.603193 extend-filesystems[1421]: Found usr Feb 13 15:38:34.603193 extend-filesystems[1421]: Found vda4 Feb 13 15:38:34.625270 extend-filesystems[1421]: Found vda6 Feb 13 15:38:34.625270 extend-filesystems[1421]: Found vda7 Feb 13 15:38:34.625270 extend-filesystems[1421]: Found vda9 Feb 13 15:38:34.625270 extend-filesystems[1421]: Checking size of /dev/vda9 Feb 13 15:38:34.625270 extend-filesystems[1421]: Resized partition /dev/vda9 Feb 13 15:38:34.630575 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 13 15:38:34.610621 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 15:38:34.606567 dbus-daemon[1419]: [system] SELinux support is enabled Feb 13 15:38:34.641602 extend-filesystems[1442]: resize2fs 1.47.1 (20-May-2024) Feb 13 15:38:34.612857 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 15:38:34.615005 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 15:38:34.620660 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 15:38:34.643162 jq[1439]: true Feb 13 15:38:34.635721 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 15:38:34.635897 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 15:38:34.636159 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 15:38:34.636291 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Feb 13 15:38:34.639858 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 15:38:34.640008 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 15:38:34.649656 update_engine[1431]: I20250213 15:38:34.648543 1431 main.cc:92] Flatcar Update Engine starting Feb 13 15:38:34.655503 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1362) Feb 13 15:38:34.655553 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 13 15:38:34.656768 jq[1445]: true Feb 13 15:38:34.658906 update_engine[1431]: I20250213 15:38:34.658578 1431 update_check_scheduler.cc:74] Next update check in 10m54s Feb 13 15:38:34.661350 systemd[1]: Started update-engine.service - Update Engine. Feb 13 15:38:34.669590 extend-filesystems[1442]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 15:38:34.669590 extend-filesystems[1442]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 15:38:34.669590 extend-filesystems[1442]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 13 15:38:34.662855 (ntainerd)[1453]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 15:38:34.675954 extend-filesystems[1421]: Resized filesystem in /dev/vda9 Feb 13 15:38:34.664059 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 15:38:34.664085 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 15:38:34.665467 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Feb 13 15:38:34.665490 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 15:38:34.670390 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 15:38:34.672181 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 15:38:34.673524 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 15:38:34.681156 tar[1444]: linux-arm64/helm Feb 13 15:38:34.692343 systemd-logind[1429]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 15:38:34.693513 systemd-logind[1429]: New seat seat0. Feb 13 15:38:34.712012 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 15:38:34.724509 bash[1474]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:38:34.728493 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 15:38:34.731476 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Feb 13 15:38:34.755574 locksmithd[1458]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 15:38:34.874609 containerd[1453]: time="2025-02-13T15:38:34.874521640Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 15:38:34.901883 containerd[1453]: time="2025-02-13T15:38:34.901770680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:38:34.904475 containerd[1453]: time="2025-02-13T15:38:34.903315080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:38:34.904475 containerd[1453]: time="2025-02-13T15:38:34.903349400Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Feb 13 15:38:34.904475 containerd[1453]: time="2025-02-13T15:38:34.903365680Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 15:38:34.904475 containerd[1453]: time="2025-02-13T15:38:34.903542320Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 15:38:34.904475 containerd[1453]: time="2025-02-13T15:38:34.903560200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 15:38:34.904475 containerd[1453]: time="2025-02-13T15:38:34.903615320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:38:34.904475 containerd[1453]: time="2025-02-13T15:38:34.903626840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:38:34.904475 containerd[1453]: time="2025-02-13T15:38:34.903793240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:38:34.904475 containerd[1453]: time="2025-02-13T15:38:34.903808800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 15:38:34.904475 containerd[1453]: time="2025-02-13T15:38:34.903820960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:38:34.904475 containerd[1453]: time="2025-02-13T15:38:34.903829640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Feb 13 15:38:34.904792 containerd[1453]: time="2025-02-13T15:38:34.903912200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:38:34.904792 containerd[1453]: time="2025-02-13T15:38:34.904099160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:38:34.904792 containerd[1453]: time="2025-02-13T15:38:34.904190840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:38:34.904792 containerd[1453]: time="2025-02-13T15:38:34.904203640Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 15:38:34.904792 containerd[1453]: time="2025-02-13T15:38:34.904278840Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 15:38:34.904792 containerd[1453]: time="2025-02-13T15:38:34.904322320Z" level=info msg="metadata content store policy set" policy=shared Feb 13 15:38:34.908182 containerd[1453]: time="2025-02-13T15:38:34.908144520Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 15:38:34.908242 containerd[1453]: time="2025-02-13T15:38:34.908215720Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 15:38:34.908242 containerd[1453]: time="2025-02-13T15:38:34.908233320Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 15:38:34.908308 containerd[1453]: time="2025-02-13T15:38:34.908255440Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Feb 13 15:38:34.908328 containerd[1453]: time="2025-02-13T15:38:34.908306080Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 15:38:34.908596 containerd[1453]: time="2025-02-13T15:38:34.908572280Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 15:38:34.911554 containerd[1453]: time="2025-02-13T15:38:34.908966720Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 15:38:34.911554 containerd[1453]: time="2025-02-13T15:38:34.909126640Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 15:38:34.911554 containerd[1453]: time="2025-02-13T15:38:34.909145120Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 15:38:34.911554 containerd[1453]: time="2025-02-13T15:38:34.909163880Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 15:38:34.911554 containerd[1453]: time="2025-02-13T15:38:34.909177600Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 15:38:34.911554 containerd[1453]: time="2025-02-13T15:38:34.909189960Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 15:38:34.911554 containerd[1453]: time="2025-02-13T15:38:34.909203200Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 15:38:34.911554 containerd[1453]: time="2025-02-13T15:38:34.909216480Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Feb 13 15:38:34.911554 containerd[1453]: time="2025-02-13T15:38:34.909230000Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 15:38:34.911554 containerd[1453]: time="2025-02-13T15:38:34.909242280Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 15:38:34.911554 containerd[1453]: time="2025-02-13T15:38:34.909254280Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 15:38:34.911554 containerd[1453]: time="2025-02-13T15:38:34.909266280Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 15:38:34.911554 containerd[1453]: time="2025-02-13T15:38:34.909286120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 15:38:34.911554 containerd[1453]: time="2025-02-13T15:38:34.909299200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 15:38:34.911871 containerd[1453]: time="2025-02-13T15:38:34.909312040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 15:38:34.911871 containerd[1453]: time="2025-02-13T15:38:34.909324240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 15:38:34.911871 containerd[1453]: time="2025-02-13T15:38:34.909335920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 15:38:34.911871 containerd[1453]: time="2025-02-13T15:38:34.909353160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 15:38:34.911871 containerd[1453]: time="2025-02-13T15:38:34.909365000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Feb 13 15:38:34.911871 containerd[1453]: time="2025-02-13T15:38:34.909376760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 15:38:34.911871 containerd[1453]: time="2025-02-13T15:38:34.909389760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 15:38:34.911871 containerd[1453]: time="2025-02-13T15:38:34.909403960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 15:38:34.911871 containerd[1453]: time="2025-02-13T15:38:34.909414760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 15:38:34.911871 containerd[1453]: time="2025-02-13T15:38:34.909425280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 15:38:34.911871 containerd[1453]: time="2025-02-13T15:38:34.909437200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 15:38:34.911871 containerd[1453]: time="2025-02-13T15:38:34.909475800Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 15:38:34.911871 containerd[1453]: time="2025-02-13T15:38:34.909499120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 15:38:34.911871 containerd[1453]: time="2025-02-13T15:38:34.909511960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 15:38:34.911871 containerd[1453]: time="2025-02-13T15:38:34.909522480Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 15:38:34.912104 containerd[1453]: time="2025-02-13T15:38:34.909703560Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Feb 13 15:38:34.912104 containerd[1453]: time="2025-02-13T15:38:34.909721720Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 15:38:34.912104 containerd[1453]: time="2025-02-13T15:38:34.909732480Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 15:38:34.912104 containerd[1453]: time="2025-02-13T15:38:34.909744000Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 15:38:34.912104 containerd[1453]: time="2025-02-13T15:38:34.909754360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 15:38:34.912104 containerd[1453]: time="2025-02-13T15:38:34.909777920Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 15:38:34.912104 containerd[1453]: time="2025-02-13T15:38:34.909788680Z" level=info msg="NRI interface is disabled by configuration." Feb 13 15:38:34.912104 containerd[1453]: time="2025-02-13T15:38:34.909799080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 15:38:34.912231 containerd[1453]: time="2025-02-13T15:38:34.910162720Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 15:38:34.912231 containerd[1453]: time="2025-02-13T15:38:34.910205640Z" level=info msg="Connect containerd service" Feb 13 15:38:34.912231 containerd[1453]: time="2025-02-13T15:38:34.910239240Z" level=info msg="using legacy CRI server" Feb 13 15:38:34.912231 containerd[1453]: time="2025-02-13T15:38:34.910246440Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 15:38:34.912231 containerd[1453]: time="2025-02-13T15:38:34.910538120Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 15:38:34.912231 containerd[1453]: time="2025-02-13T15:38:34.911215560Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:38:34.912231 containerd[1453]: time="2025-02-13T15:38:34.911586120Z" level=info msg="Start subscribing containerd event" Feb 13 15:38:34.912231 containerd[1453]: time="2025-02-13T15:38:34.911799480Z" level=info msg="Start recovering state" Feb 13 15:38:34.912231 containerd[1453]: time="2025-02-13T15:38:34.911953320Z" level=info msg="Start event monitor" Feb 13 15:38:34.912231 containerd[1453]: time="2025-02-13T15:38:34.911968400Z" level=info msg="Start 
snapshots syncer" Feb 13 15:38:34.912231 containerd[1453]: time="2025-02-13T15:38:34.911977400Z" level=info msg="Start cni network conf syncer for default" Feb 13 15:38:34.912231 containerd[1453]: time="2025-02-13T15:38:34.911985880Z" level=info msg="Start streaming server" Feb 13 15:38:34.914183 containerd[1453]: time="2025-02-13T15:38:34.914154040Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 15:38:34.914336 containerd[1453]: time="2025-02-13T15:38:34.914318600Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 15:38:34.914469 containerd[1453]: time="2025-02-13T15:38:34.914436200Z" level=info msg="containerd successfully booted in 0.040889s" Feb 13 15:38:34.917511 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 15:38:35.030756 tar[1444]: linux-arm64/LICENSE Feb 13 15:38:35.030848 tar[1444]: linux-arm64/README.md Feb 13 15:38:35.045512 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 15:38:35.181362 sshd_keygen[1435]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 15:38:35.200832 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 15:38:35.215732 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 15:38:35.221200 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 15:38:35.221439 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 15:38:35.225147 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 15:38:35.237327 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 15:38:35.242355 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 15:38:35.244670 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Feb 13 15:38:35.245733 systemd[1]: Reached target getty.target - Login Prompts. 
Feb 13 15:38:36.361239 systemd-networkd[1390]: eth0: Gained IPv6LL Feb 13 15:38:36.366506 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 15:38:36.368503 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 15:38:36.379702 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 15:38:36.381833 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:38:36.383650 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 15:38:36.396939 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 15:38:36.397102 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 15:38:36.399034 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 15:38:36.401390 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 15:38:36.865000 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:38:36.866400 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 15:38:36.870126 (kubelet)[1530]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:38:36.872051 systemd[1]: Startup finished in 527ms (kernel) + 4.230s (initrd) + 3.979s (userspace) = 8.737s. 
Feb 13 15:38:36.875409 agetty[1506]: failed to open credentials directory Feb 13 15:38:36.875824 agetty[1507]: failed to open credentials directory Feb 13 15:38:37.342896 kubelet[1530]: E0213 15:38:37.342819 1530 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:38:37.345677 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:38:37.345825 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:38:41.313220 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 15:38:41.314384 systemd[1]: Started sshd@0-10.0.0.113:22-10.0.0.1:57996.service - OpenSSH per-connection server daemon (10.0.0.1:57996). Feb 13 15:38:41.367566 sshd[1544]: Accepted publickey for core from 10.0.0.1 port 57996 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY Feb 13 15:38:41.369575 sshd-session[1544]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:38:41.376932 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 15:38:41.391709 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 15:38:41.393551 systemd-logind[1429]: New session 1 of user core. Feb 13 15:38:41.404499 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 15:38:41.406802 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 15:38:41.415820 (systemd)[1548]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 15:38:41.490885 systemd[1548]: Queued start job for default target default.target. 
Feb 13 15:38:41.500314 systemd[1548]: Created slice app.slice - User Application Slice. Feb 13 15:38:41.500515 systemd[1548]: Reached target paths.target - Paths. Feb 13 15:38:41.500603 systemd[1548]: Reached target timers.target - Timers. Feb 13 15:38:41.501892 systemd[1548]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 15:38:41.511427 systemd[1548]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 15:38:41.511510 systemd[1548]: Reached target sockets.target - Sockets. Feb 13 15:38:41.511523 systemd[1548]: Reached target basic.target - Basic System. Feb 13 15:38:41.511557 systemd[1548]: Reached target default.target - Main User Target. Feb 13 15:38:41.511582 systemd[1548]: Startup finished in 89ms. Feb 13 15:38:41.511824 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 15:38:41.513076 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 15:38:41.568900 systemd[1]: Started sshd@1-10.0.0.113:22-10.0.0.1:58002.service - OpenSSH per-connection server daemon (10.0.0.1:58002). Feb 13 15:38:41.623287 sshd[1559]: Accepted publickey for core from 10.0.0.1 port 58002 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY Feb 13 15:38:41.624745 sshd-session[1559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:38:41.629214 systemd-logind[1429]: New session 2 of user core. Feb 13 15:38:41.636680 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 15:38:41.688296 sshd[1561]: Connection closed by 10.0.0.1 port 58002 Feb 13 15:38:41.688622 sshd-session[1559]: pam_unix(sshd:session): session closed for user core Feb 13 15:38:41.709243 systemd[1]: sshd@1-10.0.0.113:22-10.0.0.1:58002.service: Deactivated successfully. Feb 13 15:38:41.712580 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 15:38:41.713744 systemd-logind[1429]: Session 2 logged out. Waiting for processes to exit. 
Feb 13 15:38:41.714851 systemd[1]: Started sshd@2-10.0.0.113:22-10.0.0.1:58016.service - OpenSSH per-connection server daemon (10.0.0.1:58016). Feb 13 15:38:41.715446 systemd-logind[1429]: Removed session 2. Feb 13 15:38:41.754716 sshd[1566]: Accepted publickey for core from 10.0.0.1 port 58016 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY Feb 13 15:38:41.755892 sshd-session[1566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:38:41.759588 systemd-logind[1429]: New session 3 of user core. Feb 13 15:38:41.773939 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 15:38:41.822101 sshd[1568]: Connection closed by 10.0.0.1 port 58016 Feb 13 15:38:41.822508 sshd-session[1566]: pam_unix(sshd:session): session closed for user core Feb 13 15:38:41.834850 systemd[1]: sshd@2-10.0.0.113:22-10.0.0.1:58016.service: Deactivated successfully. Feb 13 15:38:41.836849 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 15:38:41.838203 systemd-logind[1429]: Session 3 logged out. Waiting for processes to exit. Feb 13 15:38:41.839379 systemd[1]: Started sshd@3-10.0.0.113:22-10.0.0.1:58020.service - OpenSSH per-connection server daemon (10.0.0.1:58020). Feb 13 15:38:41.840144 systemd-logind[1429]: Removed session 3. Feb 13 15:38:41.878212 sshd[1573]: Accepted publickey for core from 10.0.0.1 port 58020 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY Feb 13 15:38:41.879300 sshd-session[1573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:38:41.882734 systemd-logind[1429]: New session 4 of user core. Feb 13 15:38:41.890645 systemd[1]: Started session-4.scope - Session 4 of User core. 
Feb 13 15:38:41.941900 sshd[1575]: Connection closed by 10.0.0.1 port 58020
Feb 13 15:38:41.942182 sshd-session[1573]: pam_unix(sshd:session): session closed for user core
Feb 13 15:38:41.955115 systemd[1]: sshd@3-10.0.0.113:22-10.0.0.1:58020.service: Deactivated successfully.
Feb 13 15:38:41.956746 systemd[1]: session-4.scope: Deactivated successfully.
Feb 13 15:38:41.958019 systemd-logind[1429]: Session 4 logged out. Waiting for processes to exit.
Feb 13 15:38:41.959203 systemd[1]: Started sshd@4-10.0.0.113:22-10.0.0.1:58022.service - OpenSSH per-connection server daemon (10.0.0.1:58022).
Feb 13 15:38:41.960014 systemd-logind[1429]: Removed session 4.
Feb 13 15:38:41.999157 sshd[1580]: Accepted publickey for core from 10.0.0.1 port 58022 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY
Feb 13 15:38:42.000287 sshd-session[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:38:42.004166 systemd-logind[1429]: New session 5 of user core.
Feb 13 15:38:42.011624 systemd[1]: Started session-5.scope - Session 5 of User core.
Feb 13 15:38:42.071521 sudo[1583]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Feb 13 15:38:42.071824 sudo[1583]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 15:38:42.086368 sudo[1583]: pam_unix(sudo:session): session closed for user root
Feb 13 15:38:42.087596 sshd[1582]: Connection closed by 10.0.0.1 port 58022
Feb 13 15:38:42.087916 sshd-session[1580]: pam_unix(sshd:session): session closed for user core
Feb 13 15:38:42.100808 systemd[1]: sshd@4-10.0.0.113:22-10.0.0.1:58022.service: Deactivated successfully.
Feb 13 15:38:42.103726 systemd[1]: session-5.scope: Deactivated successfully.
Feb 13 15:38:42.104937 systemd-logind[1429]: Session 5 logged out. Waiting for processes to exit.
Feb 13 15:38:42.106153 systemd[1]: Started sshd@5-10.0.0.113:22-10.0.0.1:58038.service - OpenSSH per-connection server daemon (10.0.0.1:58038).
Feb 13 15:38:42.106836 systemd-logind[1429]: Removed session 5.
Feb 13 15:38:42.145235 sshd[1588]: Accepted publickey for core from 10.0.0.1 port 58038 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY
Feb 13 15:38:42.146443 sshd-session[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:38:42.150495 systemd-logind[1429]: New session 6 of user core.
Feb 13 15:38:42.165639 systemd[1]: Started session-6.scope - Session 6 of User core.
Feb 13 15:38:42.216744 sudo[1592]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Feb 13 15:38:42.217024 sudo[1592]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 15:38:42.220401 sudo[1592]: pam_unix(sudo:session): session closed for user root
Feb 13 15:38:42.224874 sudo[1591]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Feb 13 15:38:42.225149 sudo[1591]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 15:38:42.243768 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 15:38:42.266219 augenrules[1614]: No rules
Feb 13 15:38:42.266930 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 15:38:42.267101 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 15:38:42.268265 sudo[1591]: pam_unix(sudo:session): session closed for user root
Feb 13 15:38:42.269408 sshd[1590]: Connection closed by 10.0.0.1 port 58038
Feb 13 15:38:42.269786 sshd-session[1588]: pam_unix(sshd:session): session closed for user core
Feb 13 15:38:42.282703 systemd[1]: sshd@5-10.0.0.113:22-10.0.0.1:58038.service: Deactivated successfully.
Feb 13 15:38:42.284128 systemd[1]: session-6.scope: Deactivated successfully.
Feb 13 15:38:42.285382 systemd-logind[1429]: Session 6 logged out. Waiting for processes to exit.
Feb 13 15:38:42.286557 systemd[1]: Started sshd@6-10.0.0.113:22-10.0.0.1:58048.service - OpenSSH per-connection server daemon (10.0.0.1:58048).
Feb 13 15:38:42.287291 systemd-logind[1429]: Removed session 6.
Feb 13 15:38:42.325033 sshd[1622]: Accepted publickey for core from 10.0.0.1 port 58048 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY
Feb 13 15:38:42.326178 sshd-session[1622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:38:42.330505 systemd-logind[1429]: New session 7 of user core.
Feb 13 15:38:42.340604 systemd[1]: Started session-7.scope - Session 7 of User core.
Feb 13 15:38:42.391443 sudo[1625]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 13 15:38:42.391741 sudo[1625]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 15:38:42.726707 systemd[1]: Starting docker.service - Docker Application Container Engine...
Feb 13 15:38:42.726800 (dockerd)[1646]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Feb 13 15:38:42.997172 dockerd[1646]: time="2025-02-13T15:38:42.996802262Z" level=info msg="Starting up"
Feb 13 15:38:43.137990 dockerd[1646]: time="2025-02-13T15:38:43.137941983Z" level=info msg="Loading containers: start."
Feb 13 15:38:43.295495 kernel: Initializing XFRM netlink socket
Feb 13 15:38:43.360168 systemd-networkd[1390]: docker0: Link UP
Feb 13 15:38:43.401978 dockerd[1646]: time="2025-02-13T15:38:43.401897668Z" level=info msg="Loading containers: done."
Feb 13 15:38:43.415352 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2775014306-merged.mount: Deactivated successfully.
Feb 13 15:38:43.415995 dockerd[1646]: time="2025-02-13T15:38:43.415955084Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Feb 13 15:38:43.416069 dockerd[1646]: time="2025-02-13T15:38:43.416050471Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Feb 13 15:38:43.416248 dockerd[1646]: time="2025-02-13T15:38:43.416220732Z" level=info msg="Daemon has completed initialization"
Feb 13 15:38:43.446043 dockerd[1646]: time="2025-02-13T15:38:43.445985306Z" level=info msg="API listen on /run/docker.sock"
Feb 13 15:38:43.446331 systemd[1]: Started docker.service - Docker Application Container Engine.
Feb 13 15:38:44.059559 containerd[1453]: time="2025-02-13T15:38:44.059509881Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.14\""
Feb 13 15:38:44.684018 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3584073460.mount: Deactivated successfully.
Feb 13 15:38:45.751607 containerd[1453]: time="2025-02-13T15:38:45.751547693Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:38:45.752128 containerd[1453]: time="2025-02-13T15:38:45.752069377Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.14: active requests=0, bytes read=32205863"
Feb 13 15:38:45.753012 containerd[1453]: time="2025-02-13T15:38:45.752986951Z" level=info msg="ImageCreate event name:\"sha256:c136612236eb39fcac4abea395de985f019cf87f72cc1afd828fb78de88a649f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:38:45.755733 containerd[1453]: time="2025-02-13T15:38:45.755695942Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1432b456b21015c99783d2b3a2010873fb67bf946c89d45e6d356449e083dcfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:38:45.757681 containerd[1453]: time="2025-02-13T15:38:45.757571920Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.14\" with image id \"sha256:c136612236eb39fcac4abea395de985f019cf87f72cc1afd828fb78de88a649f\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.14\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1432b456b21015c99783d2b3a2010873fb67bf946c89d45e6d356449e083dcfb\", size \"32202661\" in 1.698019288s"
Feb 13 15:38:45.757681 containerd[1453]: time="2025-02-13T15:38:45.757605672Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.14\" returns image reference \"sha256:c136612236eb39fcac4abea395de985f019cf87f72cc1afd828fb78de88a649f\""
Feb 13 15:38:45.776190 containerd[1453]: time="2025-02-13T15:38:45.776159366Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.14\""
Feb 13 15:38:47.137772 containerd[1453]: time="2025-02-13T15:38:47.137696864Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:38:47.138475 containerd[1453]: time="2025-02-13T15:38:47.138409890Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.14: active requests=0, bytes read=29383093"
Feb 13 15:38:47.139402 containerd[1453]: time="2025-02-13T15:38:47.139354042Z" level=info msg="ImageCreate event name:\"sha256:582085ec6cd04751293bebad40e35d6b2066b81f6e5868a9db60b8127ca7921d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:38:47.142097 containerd[1453]: time="2025-02-13T15:38:47.142064247Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:23ccdb5e7e2c317f5727652ef7e64ef91ead34a3c73dfa9c3ab23b3a5028e280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:38:47.143342 containerd[1453]: time="2025-02-13T15:38:47.143307422Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.14\" with image id \"sha256:582085ec6cd04751293bebad40e35d6b2066b81f6e5868a9db60b8127ca7921d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.14\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:23ccdb5e7e2c317f5727652ef7e64ef91ead34a3c73dfa9c3ab23b3a5028e280\", size \"30786820\" in 1.367109769s"
Feb 13 15:38:47.143342 containerd[1453]: time="2025-02-13T15:38:47.143340406Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.14\" returns image reference \"sha256:582085ec6cd04751293bebad40e35d6b2066b81f6e5868a9db60b8127ca7921d\""
Feb 13 15:38:47.161745 containerd[1453]: time="2025-02-13T15:38:47.161711385Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.14\""
Feb 13 15:38:47.568670 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Feb 13 15:38:47.579617 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:38:47.675665 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:38:47.679932 (kubelet)[1929]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 15:38:47.721566 kubelet[1929]: E0213 15:38:47.721500 1929 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 15:38:47.724985 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 15:38:47.725157 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 15:38:48.141909 containerd[1453]: time="2025-02-13T15:38:48.141854522Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:38:48.142438 containerd[1453]: time="2025-02-13T15:38:48.142391971Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.14: active requests=0, bytes read=15766982"
Feb 13 15:38:48.143196 containerd[1453]: time="2025-02-13T15:38:48.143147492Z" level=info msg="ImageCreate event name:\"sha256:dfb84ea1121ad6a9ceccfe5078af3eee1b27b8d2b2e93d6449d11e1526dbeff8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:38:48.146349 containerd[1453]: time="2025-02-13T15:38:48.146304812Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf0046be3eb6c4831b6b2a1b3e24f18e27778663890144478f11a82622b48c48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:38:48.147462 containerd[1453]: time="2025-02-13T15:38:48.147320686Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.14\" with image id \"sha256:dfb84ea1121ad6a9ceccfe5078af3eee1b27b8d2b2e93d6449d11e1526dbeff8\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.14\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf0046be3eb6c4831b6b2a1b3e24f18e27778663890144478f11a82622b48c48\", size \"17170727\" in 985.57126ms"
Feb 13 15:38:48.147462 containerd[1453]: time="2025-02-13T15:38:48.147361160Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.14\" returns image reference \"sha256:dfb84ea1121ad6a9ceccfe5078af3eee1b27b8d2b2e93d6449d11e1526dbeff8\""
Feb 13 15:38:48.167566 containerd[1453]: time="2025-02-13T15:38:48.167523928Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.14\""
Feb 13 15:38:49.249164 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1927358863.mount: Deactivated successfully.
Feb 13 15:38:49.438759 containerd[1453]: time="2025-02-13T15:38:49.438695956Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:38:49.442272 containerd[1453]: time="2025-02-13T15:38:49.442218796Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.14: active requests=0, bytes read=25273377"
Feb 13 15:38:49.443099 containerd[1453]: time="2025-02-13T15:38:49.443063817Z" level=info msg="ImageCreate event name:\"sha256:8acaac6288aef2fbe5821a7539f95a6043513e648e6ffaf6a545a93fa77fe8c8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:38:49.444831 containerd[1453]: time="2025-02-13T15:38:49.444778260Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:197988595a902751e4e570a5e4d74182f12d83c1d175c1e79aa020f358f6535b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:38:49.445707 containerd[1453]: time="2025-02-13T15:38:49.445670760Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.14\" with image id \"sha256:8acaac6288aef2fbe5821a7539f95a6043513e648e6ffaf6a545a93fa77fe8c8\", repo tag \"registry.k8s.io/kube-proxy:v1.29.14\", repo digest \"registry.k8s.io/kube-proxy@sha256:197988595a902751e4e570a5e4d74182f12d83c1d175c1e79aa020f358f6535b\", size \"25272394\" in 1.278102268s"
Feb 13 15:38:49.445707 containerd[1453]: time="2025-02-13T15:38:49.445706519Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.14\" returns image reference \"sha256:8acaac6288aef2fbe5821a7539f95a6043513e648e6ffaf6a545a93fa77fe8c8\""
Feb 13 15:38:49.463787 containerd[1453]: time="2025-02-13T15:38:49.463737431Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Feb 13 15:38:50.098684 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount105389811.mount: Deactivated successfully.
Feb 13 15:38:50.853367 containerd[1453]: time="2025-02-13T15:38:50.853320664Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:38:50.854316 containerd[1453]: time="2025-02-13T15:38:50.854037399Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383"
Feb 13 15:38:50.855118 containerd[1453]: time="2025-02-13T15:38:50.855081450Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:38:50.858543 containerd[1453]: time="2025-02-13T15:38:50.858501684Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:38:50.859711 containerd[1453]: time="2025-02-13T15:38:50.859574339Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.395800951s"
Feb 13 15:38:50.859711 containerd[1453]: time="2025-02-13T15:38:50.859606232Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Feb 13 15:38:50.879115 containerd[1453]: time="2025-02-13T15:38:50.879078013Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Feb 13 15:38:51.336169 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2292038623.mount: Deactivated successfully.
Feb 13 15:38:51.341042 containerd[1453]: time="2025-02-13T15:38:51.340996605Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:38:51.342268 containerd[1453]: time="2025-02-13T15:38:51.342207021Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823"
Feb 13 15:38:51.343020 containerd[1453]: time="2025-02-13T15:38:51.342958864Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:38:51.346686 containerd[1453]: time="2025-02-13T15:38:51.345989696Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:38:51.346686 containerd[1453]: time="2025-02-13T15:38:51.346558431Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 467.442029ms"
Feb 13 15:38:51.346686 containerd[1453]: time="2025-02-13T15:38:51.346582172Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Feb 13 15:38:51.366254 containerd[1453]: time="2025-02-13T15:38:51.366211220Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Feb 13 15:38:51.972421 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3680017238.mount: Deactivated successfully.
Feb 13 15:38:53.544175 containerd[1453]: time="2025-02-13T15:38:53.543893128Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:38:53.545101 containerd[1453]: time="2025-02-13T15:38:53.544845033Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200788"
Feb 13 15:38:53.545850 containerd[1453]: time="2025-02-13T15:38:53.545809724Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:38:53.548961 containerd[1453]: time="2025-02-13T15:38:53.548908676Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:38:53.550269 containerd[1453]: time="2025-02-13T15:38:53.550238602Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 2.183912253s"
Feb 13 15:38:53.550325 containerd[1453]: time="2025-02-13T15:38:53.550271066Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\""
Feb 13 15:38:57.441991 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:38:57.449667 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:38:57.465254 systemd[1]: Reloading requested from client PID 2147 ('systemctl') (unit session-7.scope)...
Feb 13 15:38:57.465269 systemd[1]: Reloading...
Feb 13 15:38:57.526477 zram_generator::config[2184]: No configuration found.
Feb 13 15:38:57.640329 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:38:57.696836 systemd[1]: Reloading finished in 231 ms.
Feb 13 15:38:57.735199 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:38:57.737578 systemd[1]: kubelet.service: Deactivated successfully.
Feb 13 15:38:57.737776 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:38:57.739374 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:38:57.824748 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:38:57.829036 (kubelet)[2233]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 15:38:57.872593 kubelet[2233]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:38:57.872593 kubelet[2233]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 13 15:38:57.872593 kubelet[2233]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:38:57.872928 kubelet[2233]: I0213 15:38:57.872649 2233 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 15:38:58.910508 kubelet[2233]: I0213 15:38:58.910430 2233 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Feb 13 15:38:58.910508 kubelet[2233]: I0213 15:38:58.910505 2233 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 15:38:58.910858 kubelet[2233]: I0213 15:38:58.910718 2233 server.go:919] "Client rotation is on, will bootstrap in background"
Feb 13 15:38:58.954137 kubelet[2233]: E0213 15:38:58.954107 2233 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.113:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.113:6443: connect: connection refused
Feb 13 15:38:58.954421 kubelet[2233]: I0213 15:38:58.954203 2233 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 15:38:58.963918 kubelet[2233]: I0213 15:38:58.963895 2233 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 13 15:38:58.964110 kubelet[2233]: I0213 15:38:58.964097 2233 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 15:38:58.964272 kubelet[2233]: I0213 15:38:58.964260 2233 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Feb 13 15:38:58.964350 kubelet[2233]: I0213 15:38:58.964280 2233 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 15:38:58.964350 kubelet[2233]: I0213 15:38:58.964289 2233 container_manager_linux.go:301] "Creating device plugin manager"
Feb 13 15:38:58.964411 kubelet[2233]: I0213 15:38:58.964396 2233 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:38:58.966483 kubelet[2233]: I0213 15:38:58.966460 2233 kubelet.go:396] "Attempting to sync node with API server"
Feb 13 15:38:58.966483 kubelet[2233]: I0213 15:38:58.966484 2233 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 15:38:58.966547 kubelet[2233]: I0213 15:38:58.966504 2233 kubelet.go:312] "Adding apiserver pod source"
Feb 13 15:38:58.966547 kubelet[2233]: I0213 15:38:58.966518 2233 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 15:38:58.968871 kubelet[2233]: W0213 15:38:58.968813 2233 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.113:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused
Feb 13 15:38:58.968949 kubelet[2233]: E0213 15:38:58.968883 2233 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.113:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused
Feb 13 15:38:58.969000 kubelet[2233]: I0213 15:38:58.968971 2233 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Feb 13 15:38:58.969345 kubelet[2233]: W0213 15:38:58.969212 2233 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.113:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused
Feb 13 15:38:58.969400 kubelet[2233]: E0213 15:38:58.969356 2233 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.113:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused
Feb 13 15:38:58.969476 kubelet[2233]: I0213 15:38:58.969436 2233 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 13 15:38:58.969592 kubelet[2233]: W0213 15:38:58.969576 2233 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 13 15:38:58.974202 kubelet[2233]: I0213 15:38:58.972618 2233 server.go:1256] "Started kubelet"
Feb 13 15:38:58.974202 kubelet[2233]: I0213 15:38:58.973679 2233 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 15:38:58.974202 kubelet[2233]: I0213 15:38:58.973747 2233 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 13 15:38:58.975074 kubelet[2233]: I0213 15:38:58.975052 2233 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 13 15:38:58.979792 kubelet[2233]: I0213 15:38:58.979768 2233 server.go:461] "Adding debug handlers to kubelet server"
Feb 13 15:38:58.980770 kubelet[2233]: I0213 15:38:58.980729 2233 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 13 15:38:58.984215 kubelet[2233]: E0213 15:38:58.983160 2233 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 13 15:38:58.984215 kubelet[2233]: I0213 15:38:58.983189 2233 volume_manager.go:291] "Starting Kubelet Volume Manager"
Feb 13 15:38:58.984215 kubelet[2233]: I0213 15:38:58.983273 2233 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb 13 15:38:58.984215 kubelet[2233]: I0213 15:38:58.983329 2233 reconciler_new.go:29] "Reconciler: start to sync state"
Feb 13 15:38:58.984215 kubelet[2233]: W0213 15:38:58.983589 2233 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.113:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused
Feb 13 15:38:58.984215 kubelet[2233]: E0213 15:38:58.983625 2233 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.113:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused
Feb 13 15:38:58.984215 kubelet[2233]: E0213 15:38:58.983963 2233 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.113:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.113:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823ceb0990e9dde default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 15:38:58.972581342 +0000 UTC m=+1.140378404,LastTimestamp:2025-02-13 15:38:58.972581342 +0000 UTC m=+1.140378404,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Feb 13 15:38:58.984439 kubelet[2233]: E0213 15:38:58.984155 2233 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.113:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.113:6443: connect: connection refused" interval="200ms"
Feb 13 15:38:58.984439 kubelet[2233]: I0213 15:38:58.984382 2233 factory.go:221] Registration of the systemd container factory successfully
Feb 13 15:38:58.984492 kubelet[2233]: I0213 15:38:58.984477 2233 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 13 15:38:58.984647 kubelet[2233]: E0213 15:38:58.984630 2233 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 13 15:38:58.985248 kubelet[2233]: I0213 15:38:58.985229 2233 factory.go:221] Registration of the containerd container factory successfully
Feb 13 15:38:58.994786 kubelet[2233]: I0213 15:38:58.994754 2233 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 13 15:38:58.995764 kubelet[2233]: I0213 15:38:58.995738 2233 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 13 15:38:58.995764 kubelet[2233]: I0213 15:38:58.995757 2233 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 13 15:38:58.995764 kubelet[2233]: I0213 15:38:58.995771 2233 kubelet.go:2329] "Starting kubelet main sync loop"
Feb 13 15:38:58.995875 kubelet[2233]: E0213 15:38:58.995822 2233 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 13 15:38:58.999221 kubelet[2233]: W0213 15:38:58.999179 2233 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.113:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused
Feb 13 15:38:58.999286 kubelet[2233]: E0213 15:38:58.999225 2233 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.113:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused
Feb 13 15:38:58.999607 kubelet[2233]: I0213 15:38:58.999376 2233 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 13 15:38:58.999607 kubelet[2233]: I0213 15:38:58.999392 2233 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 13 15:38:58.999607 kubelet[2233]: I0213 15:38:58.999406 2233 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:38:59.085190 kubelet[2233]: I0213 15:38:59.085153 2233 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Feb 13 15:38:59.085592 kubelet[2233]: E0213 15:38:59.085568 2233 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.113:6443/api/v1/nodes\": dial tcp 10.0.0.113:6443: connect: connection refused" node="localhost"
Feb 13 15:38:59.088858 kubelet[2233]: I0213 15:38:59.088810 2233 policy_none.go:49] "None policy: Start"
Feb 13 15:38:59.089452 kubelet[2233]: I0213 15:38:59.089366 2233 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 13 15:38:59.089508 kubelet[2233]: I0213 15:38:59.089495 2233 state_mem.go:35] "Initializing new in-memory state store"
Feb 13 15:38:59.096724 kubelet[2233]: E0213 15:38:59.096623 2233 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Feb 13 15:38:59.097433 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Feb 13 15:38:59.111293 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Feb 13 15:38:59.114339 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Feb 13 15:38:59.125279 kubelet[2233]: I0213 15:38:59.125102 2233 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 13 15:38:59.125393 kubelet[2233]: I0213 15:38:59.125334 2233 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 15:38:59.126406 kubelet[2233]: E0213 15:38:59.126372 2233 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Feb 13 15:38:59.185230 kubelet[2233]: E0213 15:38:59.185145 2233 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.113:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.113:6443: connect: connection refused" interval="400ms"
Feb 13 15:38:59.286594 kubelet[2233]: I0213 15:38:59.286572 2233 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Feb 13 15:38:59.286853 kubelet[2233]: E0213 15:38:59.286839 2233 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.113:6443/api/v1/nodes\": dial tcp 10.0.0.113:6443: connect: connection refused" node="localhost"
Feb 13 15:38:59.297113 kubelet[2233]: I0213 15:38:59.297089 2233 topology_manager.go:215] "Topology Admit Handler" podUID="56c437365493840a0ecb51e9bad330da" podNamespace="kube-system" podName="kube-apiserver-localhost"
Feb 13 15:38:59.297891 kubelet[2233]: I0213 15:38:59.297870 2233 topology_manager.go:215] "Topology Admit Handler" podUID="8dd79284f50d348595750c57a6b03620" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Feb 13 15:38:59.298791 kubelet[2233]: I0213 15:38:59.298771 2233 topology_manager.go:215] "Topology Admit Handler" podUID="34a43d8200b04e3b81251db6a65bc0ce" podNamespace="kube-system" podName="kube-scheduler-localhost"
Feb 13 15:38:59.303837 systemd[1]: Created slice kubepods-burstable-pod56c437365493840a0ecb51e9bad330da.slice - libcontainer container kubepods-burstable-pod56c437365493840a0ecb51e9bad330da.slice.
Feb 13 15:38:59.319325 systemd[1]: Created slice kubepods-burstable-pod8dd79284f50d348595750c57a6b03620.slice - libcontainer container kubepods-burstable-pod8dd79284f50d348595750c57a6b03620.slice.
Feb 13 15:38:59.322577 systemd[1]: Created slice kubepods-burstable-pod34a43d8200b04e3b81251db6a65bc0ce.slice - libcontainer container kubepods-burstable-pod34a43d8200b04e3b81251db6a65bc0ce.slice.
Feb 13 15:38:59.386190 kubelet[2233]: I0213 15:38:59.386157 2233 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:38:59.386190 kubelet[2233]: I0213 15:38:59.386193 2233 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:38:59.386284 kubelet[2233]: I0213 15:38:59.386216 2233 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:38:59.386284 kubelet[2233]: I0213 15:38:59.386235 2233 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/56c437365493840a0ecb51e9bad330da-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"56c437365493840a0ecb51e9bad330da\") " pod="kube-system/kube-apiserver-localhost"
Feb 13 15:38:59.386374 kubelet[2233]: I0213 15:38:59.386333 2233 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/56c437365493840a0ecb51e9bad330da-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"56c437365493840a0ecb51e9bad330da\") " pod="kube-system/kube-apiserver-localhost"
Feb 13 15:38:59.386436 kubelet[2233]: I0213 15:38:59.386419 2233 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/56c437365493840a0ecb51e9bad330da-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"56c437365493840a0ecb51e9bad330da\") " pod="kube-system/kube-apiserver-localhost"
Feb 13 15:38:59.386482 kubelet[2233]: I0213 15:38:59.386469 2233 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:38:59.386506 kubelet[2233]: I0213 15:38:59.386501 2233 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:38:59.386536 kubelet[2233]: I0213 15:38:59.386527 2233 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/34a43d8200b04e3b81251db6a65bc0ce-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"34a43d8200b04e3b81251db6a65bc0ce\") " pod="kube-system/kube-scheduler-localhost"
Feb 13 15:38:59.586499 kubelet[2233]: E0213 15:38:59.586458 2233 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.113:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.113:6443: connect: connection refused" interval="800ms"
Feb 13 15:38:59.616939 kubelet[2233]: E0213 15:38:59.616860 2233 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:38:59.617572 containerd[1453]: time="2025-02-13T15:38:59.617528406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:56c437365493840a0ecb51e9bad330da,Namespace:kube-system,Attempt:0,}"
Feb 13 15:38:59.621667 kubelet[2233]: E0213 15:38:59.621642 2233 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:38:59.621987 containerd[1453]: time="2025-02-13T15:38:59.621959987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8dd79284f50d348595750c57a6b03620,Namespace:kube-system,Attempt:0,}"
Feb 13 15:38:59.624229 kubelet[2233]: E0213 15:38:59.624208 2233 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:38:59.624568 containerd[1453]: time="2025-02-13T15:38:59.624535215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:34a43d8200b04e3b81251db6a65bc0ce,Namespace:kube-system,Attempt:0,}"
Feb 13 15:38:59.688715 kubelet[2233]: I0213 15:38:59.688692 2233 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Feb 13 15:38:59.689013 kubelet[2233]: E0213 15:38:59.688997 2233 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.113:6443/api/v1/nodes\": dial tcp 10.0.0.113:6443: connect: connection refused" node="localhost"
Feb 13 15:39:00.168472 kubelet[2233]: W0213 15:39:00.168428 2233 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.113:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused
Feb 13 15:39:00.168753 kubelet[2233]: E0213 15:39:00.168481 2233 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.113:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused
Feb 13 15:39:00.188960 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4078049569.mount: Deactivated successfully.
Feb 13 15:39:00.193970 containerd[1453]: time="2025-02-13T15:39:00.193918456Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:39:00.195625 containerd[1453]: time="2025-02-13T15:39:00.195581857Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Feb 13 15:39:00.196222 kubelet[2233]: W0213 15:39:00.196188 2233 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.113:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused
Feb 13 15:39:00.196256 kubelet[2233]: E0213 15:39:00.196223 2233 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.113:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused
Feb 13 15:39:00.196588 containerd[1453]: time="2025-02-13T15:39:00.196549803Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:39:00.197609 containerd[1453]: time="2025-02-13T15:39:00.197565666Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:39:00.198747 containerd[1453]: time="2025-02-13T15:39:00.198713350Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:39:00.199235 containerd[1453]: time="2025-02-13T15:39:00.199193360Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175"
Feb 13 15:39:00.199712 containerd[1453]: time="2025-02-13T15:39:00.199673610Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Feb 13 15:39:00.202606 containerd[1453]: time="2025-02-13T15:39:00.202567079Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:39:00.204559 containerd[1453]: time="2025-02-13T15:39:00.204510096Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 579.917631ms"
Feb 13 15:39:00.205195 containerd[1453]: time="2025-02-13T15:39:00.205160597Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 587.551281ms"
Feb 13 15:39:00.206622 containerd[1453]: time="2025-02-13T15:39:00.206590739Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 584.57162ms"
Feb 13 15:39:00.381545 containerd[1453]: time="2025-02-13T15:39:00.381418670Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:39:00.381545 containerd[1453]: time="2025-02-13T15:39:00.381496370Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:39:00.382578 containerd[1453]: time="2025-02-13T15:39:00.381516786Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:39:00.382578 containerd[1453]: time="2025-02-13T15:39:00.381740918Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:39:00.382578 containerd[1453]: time="2025-02-13T15:39:00.381787474Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:39:00.382578 containerd[1453]: time="2025-02-13T15:39:00.381798402Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:39:00.386469 containerd[1453]: time="2025-02-13T15:39:00.386390540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:39:00.386774 containerd[1453]: time="2025-02-13T15:39:00.386723917Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:39:00.386952 kubelet[2233]: E0213 15:39:00.386929 2233 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.113:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.113:6443: connect: connection refused" interval="1.6s"
Feb 13 15:39:00.387218 containerd[1453]: time="2025-02-13T15:39:00.387149765Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:39:00.387218 containerd[1453]: time="2025-02-13T15:39:00.387192598Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:39:00.387218 containerd[1453]: time="2025-02-13T15:39:00.387202646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:39:00.387360 containerd[1453]: time="2025-02-13T15:39:00.387265735Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:39:00.409641 systemd[1]: Started cri-containerd-09038e3871a73015dc36495f4ee8f5f29d1fb353f3d06f60508cbf819c791a48.scope - libcontainer container 09038e3871a73015dc36495f4ee8f5f29d1fb353f3d06f60508cbf819c791a48.
Feb 13 15:39:00.410855 kubelet[2233]: W0213 15:39:00.410818 2233 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.113:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused
Feb 13 15:39:00.410855 kubelet[2233]: E0213 15:39:00.410855 2233 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.113:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused
Feb 13 15:39:00.413474 systemd[1]: Started cri-containerd-45aa97eea1a3cf5b02a4c4293d97c9805f476d2bb07a79e84dc23396d3486b58.scope - libcontainer container 45aa97eea1a3cf5b02a4c4293d97c9805f476d2bb07a79e84dc23396d3486b58.
Feb 13 15:39:00.414847 systemd[1]: Started cri-containerd-769654ca6367ae4208100142f06d2f9b988c02fab3573f3ba2ea8f2071c3bc15.scope - libcontainer container 769654ca6367ae4208100142f06d2f9b988c02fab3573f3ba2ea8f2071c3bc15.
Feb 13 15:39:00.416964 kubelet[2233]: W0213 15:39:00.416857 2233 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.113:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused
Feb 13 15:39:00.416964 kubelet[2233]: E0213 15:39:00.416891 2233 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.113:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused
Feb 13 15:39:00.438744 containerd[1453]: time="2025-02-13T15:39:00.438492000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8dd79284f50d348595750c57a6b03620,Namespace:kube-system,Attempt:0,} returns sandbox id \"09038e3871a73015dc36495f4ee8f5f29d1fb353f3d06f60508cbf819c791a48\""
Feb 13 15:39:00.442522 kubelet[2233]: E0213 15:39:00.442493 2233 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:39:00.446402 containerd[1453]: time="2025-02-13T15:39:00.446365827Z" level=info msg="CreateContainer within sandbox \"09038e3871a73015dc36495f4ee8f5f29d1fb353f3d06f60508cbf819c791a48\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Feb 13 15:39:00.449916 containerd[1453]: time="2025-02-13T15:39:00.449841785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:34a43d8200b04e3b81251db6a65bc0ce,Namespace:kube-system,Attempt:0,} returns sandbox id \"45aa97eea1a3cf5b02a4c4293d97c9805f476d2bb07a79e84dc23396d3486b58\""
Feb 13 15:39:00.450874 kubelet[2233]: E0213 15:39:00.450854 2233 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:39:00.453314 containerd[1453]: time="2025-02-13T15:39:00.453282155Z" level=info msg="CreateContainer within sandbox \"45aa97eea1a3cf5b02a4c4293d97c9805f476d2bb07a79e84dc23396d3486b58\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Feb 13 15:39:00.456045 containerd[1453]: time="2025-02-13T15:39:00.456015101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:56c437365493840a0ecb51e9bad330da,Namespace:kube-system,Attempt:0,} returns sandbox id \"769654ca6367ae4208100142f06d2f9b988c02fab3573f3ba2ea8f2071c3bc15\""
Feb 13 15:39:00.456625 kubelet[2233]: E0213 15:39:00.456603 2233 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:39:00.458342 containerd[1453]: time="2025-02-13T15:39:00.458247340Z" level=info msg="CreateContainer within sandbox \"769654ca6367ae4208100142f06d2f9b988c02fab3573f3ba2ea8f2071c3bc15\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Feb 13 15:39:00.464581 containerd[1453]: time="2025-02-13T15:39:00.464548955Z" level=info msg="CreateContainer within sandbox \"09038e3871a73015dc36495f4ee8f5f29d1fb353f3d06f60508cbf819c791a48\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f39faa046177fe9ba9af8eb044987867d7f3ab9f7a35343d4c98a0b8b720bbe2\""
Feb 13 15:39:00.465121 containerd[1453]: time="2025-02-13T15:39:00.465098138Z" level=info msg="StartContainer for \"f39faa046177fe9ba9af8eb044987867d7f3ab9f7a35343d4c98a0b8b720bbe2\""
Feb 13 15:39:00.470662 containerd[1453]: time="2025-02-13T15:39:00.470616430Z" level=info msg="CreateContainer within sandbox \"45aa97eea1a3cf5b02a4c4293d97c9805f476d2bb07a79e84dc23396d3486b58\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"bd856dc8c575cc184599c7a63e61c9f61f38f964074ef21a74b03c2a8d1f4d9f\""
Feb 13 15:39:00.471291 containerd[1453]: time="2025-02-13T15:39:00.471075864Z" level=info msg="StartContainer for \"bd856dc8c575cc184599c7a63e61c9f61f38f964074ef21a74b03c2a8d1f4d9f\""
Feb 13 15:39:00.479572 containerd[1453]: time="2025-02-13T15:39:00.479387988Z" level=info msg="CreateContainer within sandbox \"769654ca6367ae4208100142f06d2f9b988c02fab3573f3ba2ea8f2071c3bc15\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e4880b17bc6d73b25e9134b5b26f3f6aea5b40dbe12aabb07ba4ca626836458a\""
Feb 13 15:39:00.480129 containerd[1453]: time="2025-02-13T15:39:00.480106061Z" level=info msg="StartContainer for \"e4880b17bc6d73b25e9134b5b26f3f6aea5b40dbe12aabb07ba4ca626836458a\""
Feb 13 15:39:00.489685 systemd[1]: Started cri-containerd-f39faa046177fe9ba9af8eb044987867d7f3ab9f7a35343d4c98a0b8b720bbe2.scope - libcontainer container f39faa046177fe9ba9af8eb044987867d7f3ab9f7a35343d4c98a0b8b720bbe2.
Feb 13 15:39:00.490467 kubelet[2233]: I0213 15:39:00.490182 2233 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Feb 13 15:39:00.490681 kubelet[2233]: E0213 15:39:00.490614 2233 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.113:6443/api/v1/nodes\": dial tcp 10.0.0.113:6443: connect: connection refused" node="localhost"
Feb 13 15:39:00.492800 systemd[1]: Started cri-containerd-bd856dc8c575cc184599c7a63e61c9f61f38f964074ef21a74b03c2a8d1f4d9f.scope - libcontainer container bd856dc8c575cc184599c7a63e61c9f61f38f964074ef21a74b03c2a8d1f4d9f.
Feb 13 15:39:00.509585 systemd[1]: Started cri-containerd-e4880b17bc6d73b25e9134b5b26f3f6aea5b40dbe12aabb07ba4ca626836458a.scope - libcontainer container e4880b17bc6d73b25e9134b5b26f3f6aea5b40dbe12aabb07ba4ca626836458a.
Feb 13 15:39:00.535497 containerd[1453]: time="2025-02-13T15:39:00.535431325Z" level=info msg="StartContainer for \"bd856dc8c575cc184599c7a63e61c9f61f38f964074ef21a74b03c2a8d1f4d9f\" returns successfully"
Feb 13 15:39:00.540997 containerd[1453]: time="2025-02-13T15:39:00.539776392Z" level=info msg="StartContainer for \"f39faa046177fe9ba9af8eb044987867d7f3ab9f7a35343d4c98a0b8b720bbe2\" returns successfully"
Feb 13 15:39:00.561571 containerd[1453]: time="2025-02-13T15:39:00.558500778Z" level=info msg="StartContainer for \"e4880b17bc6d73b25e9134b5b26f3f6aea5b40dbe12aabb07ba4ca626836458a\" returns successfully"
Feb 13 15:39:01.004622 kubelet[2233]: E0213 15:39:01.004585 2233 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:39:01.006489 kubelet[2233]: E0213 15:39:01.006467 2233 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:39:01.008516 kubelet[2233]: E0213 15:39:01.008498 2233 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:39:02.011944 kubelet[2233]: E0213 15:39:02.011898 2233 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:39:02.078133 kubelet[2233]: E0213 15:39:02.078090 2233 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Feb 13 15:39:02.092495 kubelet[2233]: I0213 15:39:02.092470 2233 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Feb 13 15:39:02.110416 kubelet[2233]: I0213 15:39:02.110366 2233 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Feb 13 15:39:02.123877 kubelet[2233]: E0213 15:39:02.123838 2233 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 13 15:39:02.224689 kubelet[2233]: E0213 15:39:02.224632 2233 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 13 15:39:02.325179 kubelet[2233]: E0213 15:39:02.325138 2233 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 13 15:39:02.425683 kubelet[2233]: E0213 15:39:02.425644 2233 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 13 15:39:02.526411 kubelet[2233]: E0213 15:39:02.526369 2233 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 13 15:39:02.627146 kubelet[2233]: E0213 15:39:02.626893 2233 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 13 15:39:02.727479 kubelet[2233]: E0213 15:39:02.727418 2233 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 13 15:39:02.828003 kubelet[2233]: E0213 15:39:02.827946 2233 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 13 15:39:02.929348 kubelet[2233]: E0213 15:39:02.929102 2233 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 13 15:39:03.030040 kubelet[2233]: E0213 15:39:03.030005 2233 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 13 15:39:03.971708 kubelet[2233]: I0213 15:39:03.971663 2233 apiserver.go:52] "Watching apiserver"
Feb 13 15:39:03.984231 kubelet[2233]: I0213 15:39:03.984201 2233 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Feb 13 15:39:04.027354 kubelet[2233]: E0213 15:39:04.027326 2233 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:39:04.993887 systemd[1]: Reloading requested from client PID 2508 ('systemctl') (unit session-7.scope)...
Feb 13 15:39:04.994196 systemd[1]: Reloading...
Feb 13 15:39:05.015474 kubelet[2233]: E0213 15:39:05.015392 2233 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:39:05.071489 zram_generator::config[2547]: No configuration found.
Feb 13 15:39:05.170825 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:39:05.240558 systemd[1]: Reloading finished in 245 ms.
Feb 13 15:39:05.285641 kubelet[2233]: I0213 15:39:05.285554 2233 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 15:39:05.285770 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:39:05.294463 systemd[1]: kubelet.service: Deactivated successfully.
Feb 13 15:39:05.294708 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:39:05.294797 systemd[1]: kubelet.service: Consumed 1.518s CPU time, 113.7M memory peak, 0B memory swap peak.
Feb 13 15:39:05.303726 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:39:05.397028 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:39:05.401363 (kubelet)[2589]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 15:39:05.444667 kubelet[2589]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:39:05.444667 kubelet[2589]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 13 15:39:05.444667 kubelet[2589]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:39:05.445016 kubelet[2589]: I0213 15:39:05.444715 2589 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 15:39:05.449209 kubelet[2589]: I0213 15:39:05.449171 2589 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Feb 13 15:39:05.449209 kubelet[2589]: I0213 15:39:05.449201 2589 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 15:39:05.449614 kubelet[2589]: I0213 15:39:05.449393 2589 server.go:919] "Client rotation is on, will bootstrap in background"
Feb 13 15:39:05.451277 kubelet[2589]: I0213 15:39:05.451234 2589 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 13 15:39:05.453600 kubelet[2589]: I0213 15:39:05.453458 2589 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 15:39:05.461509 kubelet[2589]: I0213 15:39:05.461481 2589 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 13 15:39:05.462044 kubelet[2589]: I0213 15:39:05.461664 2589 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 15:39:05.462044 kubelet[2589]: I0213 15:39:05.461836 2589 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Feb 13 15:39:05.462044 kubelet[2589]: I0213 15:39:05.461857 2589 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 15:39:05.462044 kubelet[2589]: I0213 15:39:05.461865 2589 container_manager_linux.go:301] "Creating device plugin manager"
Feb 13 15:39:05.462044 kubelet[2589]: I0213 15:39:05.461890 2589 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:39:05.462044 kubelet[2589]: I0213 15:39:05.461980 2589 kubelet.go:396] "Attempting to sync node with API server"
Feb 13 15:39:05.465667 kubelet[2589]: I0213 15:39:05.461993 2589 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 15:39:05.465667 kubelet[2589]: I0213 15:39:05.462020 2589 kubelet.go:312] "Adding apiserver pod source"
Feb 13 15:39:05.465667 kubelet[2589]: I0213 15:39:05.462037 2589 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 15:39:05.466094 kubelet[2589]: I0213 15:39:05.466070 2589 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Feb 13 15:39:05.466462 kubelet[2589]: I0213 15:39:05.466434 2589 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 13 15:39:05.469092 kubelet[2589]: I0213 15:39:05.466913 2589 server.go:1256] "Started kubelet"
Feb 13 15:39:05.469211 kubelet[2589]: I0213 15:39:05.469183 2589 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 13 15:39:05.469307 kubelet[2589]: I0213 15:39:05.469283 2589 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 15:39:05.470962 kubelet[2589]: I0213 15:39:05.470939 2589 server.go:461] "Adding debug handlers to kubelet server"
Feb 13 15:39:05.472018 kubelet[2589]: I0213 15:39:05.471983 2589 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 13 15:39:05.472866 kubelet[2589]: I0213 15:39:05.472849 2589 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 13 15:39:05.478251 kubelet[2589]: E0213 15:39:05.478225 2589 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 13 15:39:05.478867 kubelet[2589]: I0213 15:39:05.478262 2589 volume_manager.go:291] "Starting Kubelet Volume Manager"
Feb 13 15:39:05.478867 kubelet[2589]: I0213 15:39:05.478381 2589 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb 13 15:39:05.481396 kubelet[2589]: I0213 15:39:05.480626 2589 reconciler_new.go:29] "Reconciler: start to sync state"
Feb 13 15:39:05.484667 kubelet[2589]: I0213 15:39:05.484574 2589 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 13 15:39:05.487837 kubelet[2589]: I0213 15:39:05.487778 2589 factory.go:221] Registration of the containerd container factory successfully
Feb 13 15:39:05.487837 kubelet[2589]: I0213 15:39:05.487797 2589 factory.go:221] Registration of the systemd container factory successfully
Feb 13 15:39:05.494643 kubelet[2589]: I0213 15:39:05.494614 2589 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 13 15:39:05.499029 kubelet[2589]: I0213 15:39:05.498799 2589 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 13 15:39:05.499029 kubelet[2589]: I0213 15:39:05.498824 2589 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 13 15:39:05.499029 kubelet[2589]: I0213 15:39:05.498890 2589 kubelet.go:2329] "Starting kubelet main sync loop"
Feb 13 15:39:05.499029 kubelet[2589]: E0213 15:39:05.498946 2589 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 13 15:39:05.527888 kubelet[2589]: I0213 15:39:05.527785 2589 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 13 15:39:05.527888 kubelet[2589]: I0213 15:39:05.527805 2589 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 13 15:39:05.527888 kubelet[2589]: I0213 15:39:05.527824 2589 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:39:05.528261 kubelet[2589]: I0213 15:39:05.527985 2589 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Feb 13 15:39:05.528261 kubelet[2589]: I0213 15:39:05.528006 2589 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Feb 13 15:39:05.528261 kubelet[2589]: I0213 15:39:05.528012 2589 policy_none.go:49] "None policy: Start"
Feb 13 15:39:05.528755 kubelet[2589]: I0213 15:39:05.528731 2589 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 13 15:39:05.528833 kubelet[2589]: I0213 15:39:05.528764 2589 state_mem.go:35] "Initializing new in-memory state store"
Feb 13 15:39:05.528953 kubelet[2589]: I0213 15:39:05.528923 2589 state_mem.go:75] "Updated machine memory state"
Feb 13 15:39:05.532738 kubelet[2589]: I0213 15:39:05.532696 2589 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 13 15:39:05.533011 kubelet[2589]: I0213 15:39:05.532915 2589 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 15:39:05.582775 kubelet[2589]: I0213 15:39:05.582005 2589 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Feb 13 15:39:05.599784 kubelet[2589]: I0213 15:39:05.599721 2589 topology_manager.go:215] "Topology Admit Handler" podUID="56c437365493840a0ecb51e9bad330da" podNamespace="kube-system" podName="kube-apiserver-localhost"
Feb 13 15:39:05.600064 kubelet[2589]: I0213 15:39:05.599823 2589 topology_manager.go:215] "Topology Admit Handler" podUID="8dd79284f50d348595750c57a6b03620" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Feb 13 15:39:05.600064 kubelet[2589]: I0213 15:39:05.599895 2589 topology_manager.go:215] "Topology Admit Handler" podUID="34a43d8200b04e3b81251db6a65bc0ce" podNamespace="kube-system" podName="kube-scheduler-localhost"
Feb 13 15:39:05.624020 kubelet[2589]: I0213 15:39:05.623981 2589 kubelet_node_status.go:112] "Node was previously registered" node="localhost"
Feb 13 15:39:05.624191 kubelet[2589]: I0213 15:39:05.624083 2589 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Feb 13 15:39:05.625102 kubelet[2589]: E0213 15:39:05.624998 2589 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Feb 13 15:39:05.681613 kubelet[2589]: I0213 15:39:05.681573 2589 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/56c437365493840a0ecb51e9bad330da-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"56c437365493840a0ecb51e9bad330da\") " pod="kube-system/kube-apiserver-localhost"
Feb 13 15:39:05.681613 kubelet[2589]: I0213 15:39:05.681621 2589 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/56c437365493840a0ecb51e9bad330da-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"56c437365493840a0ecb51e9bad330da\") " pod="kube-system/kube-apiserver-localhost"
Feb 13 15:39:05.681769 kubelet[2589]: I0213 15:39:05.681643 2589 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:39:05.681769 kubelet[2589]: I0213 15:39:05.681662 2589 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:39:05.681769 kubelet[2589]: I0213 15:39:05.681729 2589 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/34a43d8200b04e3b81251db6a65bc0ce-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"34a43d8200b04e3b81251db6a65bc0ce\") " pod="kube-system/kube-scheduler-localhost"
Feb 13 15:39:05.681832 kubelet[2589]: I0213 15:39:05.681775 2589 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/56c437365493840a0ecb51e9bad330da-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"56c437365493840a0ecb51e9bad330da\") " pod="kube-system/kube-apiserver-localhost"
Feb 13 15:39:05.681890 kubelet[2589]: I0213 15:39:05.681874 2589 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:39:05.681951 kubelet[2589]: I0213 15:39:05.681940 2589 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:39:05.681994 kubelet[2589]: I0213 15:39:05.681981 2589 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:39:05.927052 kubelet[2589]: E0213 15:39:05.926931 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:39:05.927752 kubelet[2589]: E0213 15:39:05.927722 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:39:05.927945 kubelet[2589]: E0213 15:39:05.927927 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:39:06.462477 kubelet[2589]: I0213 15:39:06.462432 2589 apiserver.go:52] "Watching apiserver"
Feb 13 15:39:06.483783 kubelet[2589]: I0213 15:39:06.482148 2589 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Feb 13 15:39:06.518328 kubelet[2589]: E0213 15:39:06.518265 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:39:06.523043 kubelet[2589]: E0213 15:39:06.522990 2589 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Feb 13 15:39:06.523435 kubelet[2589]: E0213 15:39:06.523402 2589 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Feb 13 15:39:06.523507 kubelet[2589]: E0213 15:39:06.523493 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:39:06.524479 kubelet[2589]: E0213 15:39:06.523656 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:39:06.542954 kubelet[2589]: I0213 15:39:06.542912 2589 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.542870675 podStartE2EDuration="2.542870675s" podCreationTimestamp="2025-02-13 15:39:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:39:06.535853128 +0000 UTC m=+1.131217018" watchObservedRunningTime="2025-02-13 15:39:06.542870675 +0000 UTC m=+1.138234565"
Feb 13 15:39:06.543130 kubelet[2589]: I0213 15:39:06.543019 2589 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.543002881 podStartE2EDuration="1.543002881s" podCreationTimestamp="2025-02-13 15:39:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:39:06.542969069 +0000 UTC m=+1.138332959" watchObservedRunningTime="2025-02-13 15:39:06.543002881 +0000 UTC m=+1.138366731"
Feb 13 15:39:06.551248 kubelet[2589]: I0213 15:39:06.551201 2589 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.551156181 podStartE2EDuration="1.551156181s" podCreationTimestamp="2025-02-13 15:39:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:39:06.55040092 +0000 UTC m=+1.145764810" watchObservedRunningTime="2025-02-13 15:39:06.551156181 +0000 UTC m=+1.146520151"
Feb 13 15:39:07.519613 kubelet[2589]: E0213 15:39:07.519578 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:39:07.519613 kubelet[2589]: E0213 15:39:07.519598 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:39:09.614602 sudo[1625]: pam_unix(sudo:session): session closed for user root
Feb 13 15:39:09.616004 sshd[1624]: Connection closed by 10.0.0.1 port 58048
Feb 13 15:39:09.616451 sshd-session[1622]: pam_unix(sshd:session): session closed for user core
Feb 13 15:39:09.618821 systemd[1]: session-7.scope: Deactivated successfully.
Feb 13 15:39:09.619004 systemd[1]: session-7.scope: Consumed 6.096s CPU time, 191.4M memory peak, 0B memory swap peak.
Feb 13 15:39:09.619461 systemd[1]: sshd@6-10.0.0.113:22-10.0.0.1:58048.service: Deactivated successfully.
Feb 13 15:39:09.621829 systemd-logind[1429]: Session 7 logged out. Waiting for processes to exit.
Feb 13 15:39:09.622729 systemd-logind[1429]: Removed session 7.
Feb 13 15:39:11.170140 kubelet[2589]: E0213 15:39:11.170111 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:39:11.193916 kubelet[2589]: E0213 15:39:11.193826 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:39:11.523938 kubelet[2589]: E0213 15:39:11.523803 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:39:11.523938 kubelet[2589]: E0213 15:39:11.523926 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:39:12.749965 kubelet[2589]: E0213 15:39:12.749937 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:39:13.526901 kubelet[2589]: E0213 15:39:13.526822 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:39:18.102222 kubelet[2589]: I0213 15:39:18.102183 2589 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Feb 13 15:39:18.116881 containerd[1453]: time="2025-02-13T15:39:18.116822450Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 13 15:39:18.117244 kubelet[2589]: I0213 15:39:18.117151 2589 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Feb 13 15:39:18.950535 kubelet[2589]: I0213 15:39:18.950500 2589 topology_manager.go:215] "Topology Admit Handler" podUID="48ffd60a-c76e-4d22-9c39-361e5f4aac2f" podNamespace="kube-system" podName="kube-proxy-n8p9z"
Feb 13 15:39:18.963088 systemd[1]: Created slice kubepods-besteffort-pod48ffd60a_c76e_4d22_9c39_361e5f4aac2f.slice - libcontainer container kubepods-besteffort-pod48ffd60a_c76e_4d22_9c39_361e5f4aac2f.slice.
Feb 13 15:39:19.077517 kubelet[2589]: I0213 15:39:19.077474 2589 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/48ffd60a-c76e-4d22-9c39-361e5f4aac2f-lib-modules\") pod \"kube-proxy-n8p9z\" (UID: \"48ffd60a-c76e-4d22-9c39-361e5f4aac2f\") " pod="kube-system/kube-proxy-n8p9z"
Feb 13 15:39:19.077517 kubelet[2589]: I0213 15:39:19.077519 2589 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/48ffd60a-c76e-4d22-9c39-361e5f4aac2f-xtables-lock\") pod \"kube-proxy-n8p9z\" (UID: \"48ffd60a-c76e-4d22-9c39-361e5f4aac2f\") " pod="kube-system/kube-proxy-n8p9z"
Feb 13 15:39:19.077684 kubelet[2589]: I0213 15:39:19.077549 2589 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxsnh\" (UniqueName: \"kubernetes.io/projected/48ffd60a-c76e-4d22-9c39-361e5f4aac2f-kube-api-access-mxsnh\") pod \"kube-proxy-n8p9z\" (UID: \"48ffd60a-c76e-4d22-9c39-361e5f4aac2f\") " pod="kube-system/kube-proxy-n8p9z"
Feb 13 15:39:19.077684 kubelet[2589]: I0213 15:39:19.077571 2589 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/48ffd60a-c76e-4d22-9c39-361e5f4aac2f-kube-proxy\") pod \"kube-proxy-n8p9z\" (UID: \"48ffd60a-c76e-4d22-9c39-361e5f4aac2f\") " pod="kube-system/kube-proxy-n8p9z"
Feb 13 15:39:19.236961 kubelet[2589]: I0213 15:39:19.236824 2589 topology_manager.go:215] "Topology Admit Handler" podUID="2040a19a-ce1e-4fdf-9f1b-10a443590d11" podNamespace="tigera-operator" podName="tigera-operator-c7ccbd65-fb567"
Feb 13 15:39:19.249974 systemd[1]: Created slice kubepods-besteffort-pod2040a19a_ce1e_4fdf_9f1b_10a443590d11.slice - libcontainer container kubepods-besteffort-pod2040a19a_ce1e_4fdf_9f1b_10a443590d11.slice.
Feb 13 15:39:19.275152 kubelet[2589]: E0213 15:39:19.275112 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:39:19.275756 containerd[1453]: time="2025-02-13T15:39:19.275713931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n8p9z,Uid:48ffd60a-c76e-4d22-9c39-361e5f4aac2f,Namespace:kube-system,Attempt:0,}"
Feb 13 15:39:19.279668 kubelet[2589]: I0213 15:39:19.279629 2589 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2040a19a-ce1e-4fdf-9f1b-10a443590d11-var-lib-calico\") pod \"tigera-operator-c7ccbd65-fb567\" (UID: \"2040a19a-ce1e-4fdf-9f1b-10a443590d11\") " pod="tigera-operator/tigera-operator-c7ccbd65-fb567"
Feb 13 15:39:19.279818 kubelet[2589]: I0213 15:39:19.279699 2589 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29bvh\" (UniqueName: \"kubernetes.io/projected/2040a19a-ce1e-4fdf-9f1b-10a443590d11-kube-api-access-29bvh\") pod \"tigera-operator-c7ccbd65-fb567\" (UID: \"2040a19a-ce1e-4fdf-9f1b-10a443590d11\") " pod="tigera-operator/tigera-operator-c7ccbd65-fb567"
Feb 13 15:39:19.298712 containerd[1453]: time="2025-02-13T15:39:19.298461465Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:39:19.298712 containerd[1453]: time="2025-02-13T15:39:19.298526234Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:39:19.298712 containerd[1453]: time="2025-02-13T15:39:19.298538116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:39:19.298712 containerd[1453]: time="2025-02-13T15:39:19.298622928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:39:19.323653 systemd[1]: Started cri-containerd-629148966ee74b2d9e6b732c7ca352c5bb54b10ba7445e66fbc0c3e774961aff.scope - libcontainer container 629148966ee74b2d9e6b732c7ca352c5bb54b10ba7445e66fbc0c3e774961aff.
Feb 13 15:39:19.344018 containerd[1453]: time="2025-02-13T15:39:19.343969294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n8p9z,Uid:48ffd60a-c76e-4d22-9c39-361e5f4aac2f,Namespace:kube-system,Attempt:0,} returns sandbox id \"629148966ee74b2d9e6b732c7ca352c5bb54b10ba7445e66fbc0c3e774961aff\""
Feb 13 15:39:19.347492 kubelet[2589]: E0213 15:39:19.347073 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:39:19.349586 containerd[1453]: time="2025-02-13T15:39:19.349527598Z" level=info msg="CreateContainer within sandbox \"629148966ee74b2d9e6b732c7ca352c5bb54b10ba7445e66fbc0c3e774961aff\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 13 15:39:19.382027 containerd[1453]: time="2025-02-13T15:39:19.381928244Z" level=info msg="CreateContainer within sandbox \"629148966ee74b2d9e6b732c7ca352c5bb54b10ba7445e66fbc0c3e774961aff\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4fe6bfc77ac30c91382386ed244780869c2133f50121b868af614ee505d723b5\""
Feb 13 15:39:19.382809 containerd[1453]: time="2025-02-13T15:39:19.382782131Z" level=info msg="StartContainer for \"4fe6bfc77ac30c91382386ed244780869c2133f50121b868af614ee505d723b5\""
Feb 13 15:39:19.411636 systemd[1]: Started cri-containerd-4fe6bfc77ac30c91382386ed244780869c2133f50121b868af614ee505d723b5.scope - libcontainer container 4fe6bfc77ac30c91382386ed244780869c2133f50121b868af614ee505d723b5.
Feb 13 15:39:19.437639 containerd[1453]: time="2025-02-13T15:39:19.437579058Z" level=info msg="StartContainer for \"4fe6bfc77ac30c91382386ed244780869c2133f50121b868af614ee505d723b5\" returns successfully"
Feb 13 15:39:19.538297 kubelet[2589]: E0213 15:39:19.538059 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:39:19.553664 kubelet[2589]: I0213 15:39:19.553028 2589 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-n8p9z" podStartSLOduration=1.552992815 podStartE2EDuration="1.552992815s" podCreationTimestamp="2025-02-13 15:39:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:39:19.55282175 +0000 UTC m=+14.148185640" watchObservedRunningTime="2025-02-13 15:39:19.552992815 +0000 UTC m=+14.148356705"
Feb 13 15:39:19.553804 containerd[1453]: time="2025-02-13T15:39:19.553642232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-fb567,Uid:2040a19a-ce1e-4fdf-9f1b-10a443590d11,Namespace:tigera-operator,Attempt:0,}"
Feb 13 15:39:19.576510 containerd[1453]: time="2025-02-13T15:39:19.575701263Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:39:19.576510 containerd[1453]: time="2025-02-13T15:39:19.576102003Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:39:19.576510 containerd[1453]: time="2025-02-13T15:39:19.576116525Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:39:19.576510 containerd[1453]: time="2025-02-13T15:39:19.576218300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:39:19.601662 systemd[1]: Started cri-containerd-40f5114b4f63fe51ead45ab1f7e41e6bb9890d73446f3aff32d978b311a866cd.scope - libcontainer container 40f5114b4f63fe51ead45ab1f7e41e6bb9890d73446f3aff32d978b311a866cd.
Feb 13 15:39:19.627773 containerd[1453]: time="2025-02-13T15:39:19.627718658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-fb567,Uid:2040a19a-ce1e-4fdf-9f1b-10a443590d11,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"40f5114b4f63fe51ead45ab1f7e41e6bb9890d73446f3aff32d978b311a866cd\""
Feb 13 15:39:19.629456 containerd[1453]: time="2025-02-13T15:39:19.629417990Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\""
Feb 13 15:39:20.135509 update_engine[1431]: I20250213 15:39:20.135425 1431 update_attempter.cc:509] Updating boot flags...
Feb 13 15:39:20.156916 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2922)
Feb 13 15:39:20.193491 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2924)
Feb 13 15:39:27.985162 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4137462912.mount: Deactivated successfully.
Feb 13 15:39:28.277268 containerd[1453]: time="2025-02-13T15:39:28.277138218Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:39:28.277763 containerd[1453]: time="2025-02-13T15:39:28.277705433Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19124160"
Feb 13 15:39:28.278586 containerd[1453]: time="2025-02-13T15:39:28.278523352Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:39:28.289835 containerd[1453]: time="2025-02-13T15:39:28.289773962Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:39:28.290676 containerd[1453]: time="2025-02-13T15:39:28.290647167Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 8.661173209s"
Feb 13 15:39:28.290859 containerd[1453]: time="2025-02-13T15:39:28.290759698Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\""
Feb 13 15:39:28.295727 containerd[1453]: time="2025-02-13T15:39:28.295683295Z" level=info msg="CreateContainer within sandbox \"40f5114b4f63fe51ead45ab1f7e41e6bb9890d73446f3aff32d978b311a866cd\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Feb 13 15:39:28.304719 containerd[1453]: time="2025-02-13T15:39:28.304683327Z" level=info msg="CreateContainer within sandbox \"40f5114b4f63fe51ead45ab1f7e41e6bb9890d73446f3aff32d978b311a866cd\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"0b9ce6e9d9ad2c8f35517ddbb78a2a206fdb430acd9b6353e5aaee5c90166184\""
Feb 13 15:39:28.305785 containerd[1453]: time="2025-02-13T15:39:28.305104808Z" level=info msg="StartContainer for \"0b9ce6e9d9ad2c8f35517ddbb78a2a206fdb430acd9b6353e5aaee5c90166184\""
Feb 13 15:39:28.340639 systemd[1]: Started cri-containerd-0b9ce6e9d9ad2c8f35517ddbb78a2a206fdb430acd9b6353e5aaee5c90166184.scope - libcontainer container 0b9ce6e9d9ad2c8f35517ddbb78a2a206fdb430acd9b6353e5aaee5c90166184.
Feb 13 15:39:28.365329 containerd[1453]: time="2025-02-13T15:39:28.365289921Z" level=info msg="StartContainer for \"0b9ce6e9d9ad2c8f35517ddbb78a2a206fdb430acd9b6353e5aaee5c90166184\" returns successfully"
Feb 13 15:39:32.000394 kubelet[2589]: I0213 15:39:32.000353 2589 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-c7ccbd65-fb567" podStartSLOduration=4.336695002 podStartE2EDuration="13.000304673s" podCreationTimestamp="2025-02-13 15:39:19 +0000 UTC" firstStartedPulling="2025-02-13 15:39:19.628954042 +0000 UTC m=+14.224317932" lastFinishedPulling="2025-02-13 15:39:28.292563713 +0000 UTC m=+22.887927603" observedRunningTime="2025-02-13 15:39:28.59455618 +0000 UTC m=+23.189920070" watchObservedRunningTime="2025-02-13 15:39:32.000304673 +0000 UTC m=+26.595668523"
Feb 13 15:39:32.001682 kubelet[2589]: I0213 15:39:32.000504 2589 topology_manager.go:215] "Topology Admit Handler" podUID="f3e23580-1d57-4ae5-9877-8fa7b8b9dcda" podNamespace="calico-system" podName="calico-typha-7bc8d49d94-tzvgw"
Feb 13 15:39:32.024728 systemd[1]: Created slice kubepods-besteffort-podf3e23580_1d57_4ae5_9877_8fa7b8b9dcda.slice - libcontainer container kubepods-besteffort-podf3e23580_1d57_4ae5_9877_8fa7b8b9dcda.slice.
Feb 13 15:39:32.058055 kubelet[2589]: I0213 15:39:32.057993 2589 topology_manager.go:215] "Topology Admit Handler" podUID="ae783117-fff8-4e75-906f-4048a351794a" podNamespace="calico-system" podName="calico-node-8q5jw" Feb 13 15:39:32.068933 systemd[1]: Created slice kubepods-besteffort-podae783117_fff8_4e75_906f_4048a351794a.slice - libcontainer container kubepods-besteffort-podae783117_fff8_4e75_906f_4048a351794a.slice. Feb 13 15:39:32.171351 kubelet[2589]: I0213 15:39:32.171315 2589 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcbj7\" (UniqueName: \"kubernetes.io/projected/f3e23580-1d57-4ae5-9877-8fa7b8b9dcda-kube-api-access-jcbj7\") pod \"calico-typha-7bc8d49d94-tzvgw\" (UID: \"f3e23580-1d57-4ae5-9877-8fa7b8b9dcda\") " pod="calico-system/calico-typha-7bc8d49d94-tzvgw" Feb 13 15:39:32.171351 kubelet[2589]: I0213 15:39:32.171356 2589 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ae783117-fff8-4e75-906f-4048a351794a-var-run-calico\") pod \"calico-node-8q5jw\" (UID: \"ae783117-fff8-4e75-906f-4048a351794a\") " pod="calico-system/calico-node-8q5jw" Feb 13 15:39:32.171578 kubelet[2589]: I0213 15:39:32.171377 2589 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ae783117-fff8-4e75-906f-4048a351794a-xtables-lock\") pod \"calico-node-8q5jw\" (UID: \"ae783117-fff8-4e75-906f-4048a351794a\") " pod="calico-system/calico-node-8q5jw" Feb 13 15:39:32.175353 kubelet[2589]: I0213 15:39:32.173519 2589 topology_manager.go:215] "Topology Admit Handler" podUID="0c3e32e2-3a7c-428a-a18f-8761ef2b92d8" podNamespace="calico-system" podName="csi-node-driver-9jl5n" Feb 13 15:39:32.175881 kubelet[2589]: E0213 15:39:32.175859 2589 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9jl5n" podUID="0c3e32e2-3a7c-428a-a18f-8761ef2b92d8" Feb 13 15:39:32.177526 kubelet[2589]: I0213 15:39:32.177481 2589 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ae783117-fff8-4e75-906f-4048a351794a-cni-net-dir\") pod \"calico-node-8q5jw\" (UID: \"ae783117-fff8-4e75-906f-4048a351794a\") " pod="calico-system/calico-node-8q5jw" Feb 13 15:39:32.177698 kubelet[2589]: I0213 15:39:32.177679 2589 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ae783117-fff8-4e75-906f-4048a351794a-flexvol-driver-host\") pod \"calico-node-8q5jw\" (UID: \"ae783117-fff8-4e75-906f-4048a351794a\") " pod="calico-system/calico-node-8q5jw" Feb 13 15:39:32.178185 kubelet[2589]: I0213 15:39:32.177811 2589 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ae783117-fff8-4e75-906f-4048a351794a-policysync\") pod \"calico-node-8q5jw\" (UID: \"ae783117-fff8-4e75-906f-4048a351794a\") " pod="calico-system/calico-node-8q5jw" Feb 13 15:39:32.178185 kubelet[2589]: I0213 15:39:32.177844 2589 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ae783117-fff8-4e75-906f-4048a351794a-cni-log-dir\") pod \"calico-node-8q5jw\" (UID: \"ae783117-fff8-4e75-906f-4048a351794a\") " pod="calico-system/calico-node-8q5jw" Feb 13 15:39:32.178185 kubelet[2589]: I0213 15:39:32.177869 2589 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hl9h6\" (UniqueName: 
\"kubernetes.io/projected/ae783117-fff8-4e75-906f-4048a351794a-kube-api-access-hl9h6\") pod \"calico-node-8q5jw\" (UID: \"ae783117-fff8-4e75-906f-4048a351794a\") " pod="calico-system/calico-node-8q5jw" Feb 13 15:39:32.178185 kubelet[2589]: I0213 15:39:32.177896 2589 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f3e23580-1d57-4ae5-9877-8fa7b8b9dcda-tigera-ca-bundle\") pod \"calico-typha-7bc8d49d94-tzvgw\" (UID: \"f3e23580-1d57-4ae5-9877-8fa7b8b9dcda\") " pod="calico-system/calico-typha-7bc8d49d94-tzvgw" Feb 13 15:39:32.178185 kubelet[2589]: I0213 15:39:32.177918 2589 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/f3e23580-1d57-4ae5-9877-8fa7b8b9dcda-typha-certs\") pod \"calico-typha-7bc8d49d94-tzvgw\" (UID: \"f3e23580-1d57-4ae5-9877-8fa7b8b9dcda\") " pod="calico-system/calico-typha-7bc8d49d94-tzvgw" Feb 13 15:39:32.178481 kubelet[2589]: I0213 15:39:32.177945 2589 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ae783117-fff8-4e75-906f-4048a351794a-lib-modules\") pod \"calico-node-8q5jw\" (UID: \"ae783117-fff8-4e75-906f-4048a351794a\") " pod="calico-system/calico-node-8q5jw" Feb 13 15:39:32.178481 kubelet[2589]: I0213 15:39:32.177963 2589 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ae783117-fff8-4e75-906f-4048a351794a-node-certs\") pod \"calico-node-8q5jw\" (UID: \"ae783117-fff8-4e75-906f-4048a351794a\") " pod="calico-system/calico-node-8q5jw" Feb 13 15:39:32.178481 kubelet[2589]: I0213 15:39:32.177989 2589 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: 
\"kubernetes.io/host-path/ae783117-fff8-4e75-906f-4048a351794a-cni-bin-dir\") pod \"calico-node-8q5jw\" (UID: \"ae783117-fff8-4e75-906f-4048a351794a\") " pod="calico-system/calico-node-8q5jw" Feb 13 15:39:32.178481 kubelet[2589]: I0213 15:39:32.178031 2589 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae783117-fff8-4e75-906f-4048a351794a-tigera-ca-bundle\") pod \"calico-node-8q5jw\" (UID: \"ae783117-fff8-4e75-906f-4048a351794a\") " pod="calico-system/calico-node-8q5jw" Feb 13 15:39:32.178481 kubelet[2589]: I0213 15:39:32.178050 2589 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ae783117-fff8-4e75-906f-4048a351794a-var-lib-calico\") pod \"calico-node-8q5jw\" (UID: \"ae783117-fff8-4e75-906f-4048a351794a\") " pod="calico-system/calico-node-8q5jw" Feb 13 15:39:32.279193 kubelet[2589]: I0213 15:39:32.278588 2589 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0c3e32e2-3a7c-428a-a18f-8761ef2b92d8-socket-dir\") pod \"csi-node-driver-9jl5n\" (UID: \"0c3e32e2-3a7c-428a-a18f-8761ef2b92d8\") " pod="calico-system/csi-node-driver-9jl5n" Feb 13 15:39:32.279193 kubelet[2589]: I0213 15:39:32.278655 2589 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/0c3e32e2-3a7c-428a-a18f-8761ef2b92d8-varrun\") pod \"csi-node-driver-9jl5n\" (UID: \"0c3e32e2-3a7c-428a-a18f-8761ef2b92d8\") " pod="calico-system/csi-node-driver-9jl5n" Feb 13 15:39:32.280338 kubelet[2589]: I0213 15:39:32.279433 2589 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/0c3e32e2-3a7c-428a-a18f-8761ef2b92d8-kubelet-dir\") pod \"csi-node-driver-9jl5n\" (UID: \"0c3e32e2-3a7c-428a-a18f-8761ef2b92d8\") " pod="calico-system/csi-node-driver-9jl5n" Feb 13 15:39:32.280338 kubelet[2589]: E0213 15:39:32.280075 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:39:32.281478 kubelet[2589]: W0213 15:39:32.280204 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:39:32.281478 kubelet[2589]: E0213 15:39:32.281399 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:39:32.281583 kubelet[2589]: I0213 15:39:32.281538 2589 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpg7t\" (UniqueName: \"kubernetes.io/projected/0c3e32e2-3a7c-428a-a18f-8761ef2b92d8-kube-api-access-xpg7t\") pod \"csi-node-driver-9jl5n\" (UID: \"0c3e32e2-3a7c-428a-a18f-8761ef2b92d8\") " pod="calico-system/csi-node-driver-9jl5n" Feb 13 15:39:32.281926 kubelet[2589]: E0213 15:39:32.281896 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:39:32.281926 kubelet[2589]: W0213 15:39:32.281916 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:39:32.282107 kubelet[2589]: E0213 15:39:32.282024 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:39:32.282107 kubelet[2589]: I0213 15:39:32.282059 2589 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0c3e32e2-3a7c-428a-a18f-8761ef2b92d8-registration-dir\") pod \"csi-node-driver-9jl5n\" (UID: \"0c3e32e2-3a7c-428a-a18f-8761ef2b92d8\") " pod="calico-system/csi-node-driver-9jl5n" Feb 13 15:39:32.328408 kubelet[2589]: E0213 15:39:32.328254 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:39:32.329615 containerd[1453]: time="2025-02-13T15:39:32.329484811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7bc8d49d94-tzvgw,Uid:f3e23580-1d57-4ae5-9877-8fa7b8b9dcda,Namespace:calico-system,Attempt:0,}" Feb 13 15:39:32.350284 containerd[1453]: time="2025-02-13T15:39:32.350187910Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:39:32.350284 containerd[1453]: time="2025-02-13T15:39:32.350247435Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:39:32.350553 containerd[1453]: time="2025-02-13T15:39:32.350262317Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:39:32.350553 containerd[1453]: time="2025-02-13T15:39:32.350342243Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:39:32.367649 systemd[1]: Started cri-containerd-6a2a43e20dca0cc9a505ecb02f71b9be16550091f8063a0d35b2f555b558b280.scope - libcontainer container 6a2a43e20dca0cc9a505ecb02f71b9be16550091f8063a0d35b2f555b558b280. Feb 13 15:39:32.371664 kubelet[2589]: E0213 15:39:32.371629 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:39:32.372929 containerd[1453]: time="2025-02-13T15:39:32.372895574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8q5jw,Uid:ae783117-fff8-4e75-906f-4048a351794a,Namespace:calico-system,Attempt:0,}" Feb 13 15:39:32.383064 kubelet[2589]: E0213 15:39:32.382997 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:39:32.383064 kubelet[2589]: W0213 15:39:32.383025 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:39:32.383064 kubelet[2589]: E0213 15:39:32.383053 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:39:32.383263 kubelet[2589]: E0213 15:39:32.383252 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:39:32.383346 kubelet[2589]: W0213 15:39:32.383261 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:39:32.383346 kubelet[2589]: E0213 15:39:32.383282 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:39:32.383552 kubelet[2589]: E0213 15:39:32.383506 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:39:32.383552 kubelet[2589]: W0213 15:39:32.383531 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:39:32.383552 kubelet[2589]: E0213 15:39:32.383552 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:39:32.383762 kubelet[2589]: E0213 15:39:32.383719 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:39:32.383762 kubelet[2589]: W0213 15:39:32.383729 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:39:32.383762 kubelet[2589]: E0213 15:39:32.383745 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:39:32.383924 kubelet[2589]: E0213 15:39:32.383909 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:39:32.383924 kubelet[2589]: W0213 15:39:32.383922 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:39:32.384009 kubelet[2589]: E0213 15:39:32.383938 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:39:32.384261 kubelet[2589]: E0213 15:39:32.384100 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:39:32.384261 kubelet[2589]: W0213 15:39:32.384114 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:39:32.384261 kubelet[2589]: E0213 15:39:32.384128 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:39:32.384406 kubelet[2589]: E0213 15:39:32.384279 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:39:32.384406 kubelet[2589]: W0213 15:39:32.384287 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:39:32.384406 kubelet[2589]: E0213 15:39:32.384304 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:39:32.384486 kubelet[2589]: E0213 15:39:32.384464 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:39:32.384486 kubelet[2589]: W0213 15:39:32.384471 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:39:32.384545 kubelet[2589]: E0213 15:39:32.384524 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:39:32.384676 kubelet[2589]: E0213 15:39:32.384662 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:39:32.384676 kubelet[2589]: W0213 15:39:32.384672 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:39:32.384758 kubelet[2589]: E0213 15:39:32.384738 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:39:32.384889 kubelet[2589]: E0213 15:39:32.384879 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:39:32.384889 kubelet[2589]: W0213 15:39:32.384888 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:39:32.384938 kubelet[2589]: E0213 15:39:32.384902 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:39:32.385083 kubelet[2589]: E0213 15:39:32.385064 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:39:32.385083 kubelet[2589]: W0213 15:39:32.385074 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:39:32.385083 kubelet[2589]: E0213 15:39:32.385087 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:39:32.385298 kubelet[2589]: E0213 15:39:32.385263 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:39:32.385298 kubelet[2589]: W0213 15:39:32.385276 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:39:32.385298 kubelet[2589]: E0213 15:39:32.385287 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:39:32.385439 kubelet[2589]: E0213 15:39:32.385427 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:39:32.385439 kubelet[2589]: W0213 15:39:32.385436 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:39:32.385505 kubelet[2589]: E0213 15:39:32.385461 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:39:32.385670 kubelet[2589]: E0213 15:39:32.385647 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:39:32.385670 kubelet[2589]: W0213 15:39:32.385658 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:39:32.385724 kubelet[2589]: E0213 15:39:32.385706 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:39:32.385869 kubelet[2589]: E0213 15:39:32.385855 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:39:32.385869 kubelet[2589]: W0213 15:39:32.385864 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:39:32.385926 kubelet[2589]: E0213 15:39:32.385906 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:39:32.386114 kubelet[2589]: E0213 15:39:32.386102 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:39:32.386114 kubelet[2589]: W0213 15:39:32.386111 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:39:32.386289 kubelet[2589]: E0213 15:39:32.386247 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:39:32.386403 kubelet[2589]: E0213 15:39:32.386370 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:39:32.386403 kubelet[2589]: W0213 15:39:32.386400 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:39:32.386514 kubelet[2589]: E0213 15:39:32.386454 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:39:32.386620 kubelet[2589]: E0213 15:39:32.386603 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:39:32.386620 kubelet[2589]: W0213 15:39:32.386616 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:39:32.386694 kubelet[2589]: E0213 15:39:32.386638 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:39:32.387063 kubelet[2589]: E0213 15:39:32.386794 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:39:32.387063 kubelet[2589]: W0213 15:39:32.386806 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:39:32.387063 kubelet[2589]: E0213 15:39:32.386820 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:39:32.387063 kubelet[2589]: E0213 15:39:32.387002 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:39:32.387063 kubelet[2589]: W0213 15:39:32.387011 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:39:32.387063 kubelet[2589]: E0213 15:39:32.387030 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:39:32.387296 kubelet[2589]: E0213 15:39:32.387176 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:39:32.387296 kubelet[2589]: W0213 15:39:32.387186 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:39:32.387296 kubelet[2589]: E0213 15:39:32.387197 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:39:32.387419 kubelet[2589]: E0213 15:39:32.387326 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:39:32.387419 kubelet[2589]: W0213 15:39:32.387394 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:39:32.387419 kubelet[2589]: E0213 15:39:32.387413 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:39:32.387635 kubelet[2589]: E0213 15:39:32.387617 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:39:32.387635 kubelet[2589]: W0213 15:39:32.387626 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:39:32.387713 kubelet[2589]: E0213 15:39:32.387660 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:39:32.387771 kubelet[2589]: E0213 15:39:32.387758 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:39:32.387771 kubelet[2589]: W0213 15:39:32.387770 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:39:32.387891 kubelet[2589]: E0213 15:39:32.387781 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:39:32.388743 kubelet[2589]: E0213 15:39:32.388563 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:39:32.388743 kubelet[2589]: W0213 15:39:32.388583 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:39:32.388743 kubelet[2589]: E0213 15:39:32.388600 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:39:32.399472 kubelet[2589]: E0213 15:39:32.399374 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:39:32.399472 kubelet[2589]: W0213 15:39:32.399398 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:39:32.399472 kubelet[2589]: E0213 15:39:32.399418 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:39:32.406954 containerd[1453]: time="2025-02-13T15:39:32.406908405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7bc8d49d94-tzvgw,Uid:f3e23580-1d57-4ae5-9877-8fa7b8b9dcda,Namespace:calico-system,Attempt:0,} returns sandbox id \"6a2a43e20dca0cc9a505ecb02f71b9be16550091f8063a0d35b2f555b558b280\"" Feb 13 15:39:32.407928 kubelet[2589]: E0213 15:39:32.407843 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:39:32.411410 containerd[1453]: time="2025-02-13T15:39:32.411286605Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:39:32.412291 containerd[1453]: time="2025-02-13T15:39:32.411460939Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:39:32.412291 containerd[1453]: time="2025-02-13T15:39:32.411553827Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Feb 13 15:39:32.412291 containerd[1453]: time="2025-02-13T15:39:32.411618232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:39:32.414287 containerd[1453]: time="2025-02-13T15:39:32.413516188Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:39:32.436644 systemd[1]: Started cri-containerd-7d42b31540996f15e1b192650239e666ab57fe20e14c2fc34a523a35cc8b7fc4.scope - libcontainer container 7d42b31540996f15e1b192650239e666ab57fe20e14c2fc34a523a35cc8b7fc4. Feb 13 15:39:32.459248 containerd[1453]: time="2025-02-13T15:39:32.459189776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8q5jw,Uid:ae783117-fff8-4e75-906f-4048a351794a,Namespace:calico-system,Attempt:0,} returns sandbox id \"7d42b31540996f15e1b192650239e666ab57fe20e14c2fc34a523a35cc8b7fc4\"" Feb 13 15:39:32.459989 kubelet[2589]: E0213 15:39:32.459952 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:39:33.500157 kubelet[2589]: E0213 15:39:33.499989 2589 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9jl5n" podUID="0c3e32e2-3a7c-428a-a18f-8761ef2b92d8" Feb 13 15:39:33.611737 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount307154215.mount: Deactivated successfully. 
Feb 13 15:39:34.062357 containerd[1453]: time="2025-02-13T15:39:34.062318048Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:34.063222 containerd[1453]: time="2025-02-13T15:39:34.063067305Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29231308" Feb 13 15:39:34.065158 containerd[1453]: time="2025-02-13T15:39:34.064584381Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:34.067091 containerd[1453]: time="2025-02-13T15:39:34.067063809Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:34.067673 containerd[1453]: time="2025-02-13T15:39:34.067632812Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 1.656042142s" Feb 13 15:39:34.067727 containerd[1453]: time="2025-02-13T15:39:34.067673815Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\"" Feb 13 15:39:34.068204 containerd[1453]: time="2025-02-13T15:39:34.068164052Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Feb 13 15:39:34.076320 containerd[1453]: time="2025-02-13T15:39:34.075441005Z" level=info msg="CreateContainer within sandbox \"6a2a43e20dca0cc9a505ecb02f71b9be16550091f8063a0d35b2f555b558b280\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Feb 13 15:39:34.085198 containerd[1453]: time="2025-02-13T15:39:34.085022733Z" level=info msg="CreateContainer within sandbox \"6a2a43e20dca0cc9a505ecb02f71b9be16550091f8063a0d35b2f555b558b280\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"cf644df6b188cf4e46ace513da3c9950937e36489ffaa800ea48922c103a10a6\"" Feb 13 15:39:34.087341 containerd[1453]: time="2025-02-13T15:39:34.087302626Z" level=info msg="StartContainer for \"cf644df6b188cf4e46ace513da3c9950937e36489ffaa800ea48922c103a10a6\"" Feb 13 15:39:34.113653 systemd[1]: Started cri-containerd-cf644df6b188cf4e46ace513da3c9950937e36489ffaa800ea48922c103a10a6.scope - libcontainer container cf644df6b188cf4e46ace513da3c9950937e36489ffaa800ea48922c103a10a6. Feb 13 15:39:34.141566 containerd[1453]: time="2025-02-13T15:39:34.141189439Z" level=info msg="StartContainer for \"cf644df6b188cf4e46ace513da3c9950937e36489ffaa800ea48922c103a10a6\" returns successfully" Feb 13 15:39:34.590233 kubelet[2589]: E0213 15:39:34.590192 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:39:34.594502 kubelet[2589]: E0213 15:39:34.594476 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:39:34.594823 kubelet[2589]: W0213 15:39:34.594754 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:39:34.594823 kubelet[2589]: E0213 15:39:34.594783 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:39:34.595257 kubelet[2589]: E0213 15:39:34.595171 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:39:34.595257 kubelet[2589]: W0213 15:39:34.595184 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:39:34.595257 kubelet[2589]: E0213 15:39:34.595197 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:39:34.595409 kubelet[2589]: E0213 15:39:34.595397 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:39:34.595485 kubelet[2589]: W0213 15:39:34.595473 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:39:34.595544 kubelet[2589]: E0213 15:39:34.595535 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:39:34.595755 kubelet[2589]: E0213 15:39:34.595725 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:39:34.595934 kubelet[2589]: W0213 15:39:34.595813 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:39:34.595934 kubelet[2589]: E0213 15:39:34.595848 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:39:34.596129 kubelet[2589]: E0213 15:39:34.596073 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:39:34.596129 kubelet[2589]: W0213 15:39:34.596084 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:39:34.596129 kubelet[2589]: E0213 15:39:34.596096 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:39:34.596415 kubelet[2589]: E0213 15:39:34.596331 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:39:34.596415 kubelet[2589]: W0213 15:39:34.596342 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:39:34.596415 kubelet[2589]: E0213 15:39:34.596354 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:39:34.596566 kubelet[2589]: E0213 15:39:34.596555 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:39:34.596623 kubelet[2589]: W0213 15:39:34.596612 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:39:34.596672 kubelet[2589]: E0213 15:39:34.596664 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:39:34.604183 kubelet[2589]: I0213 15:39:34.604119 2589 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-7bc8d49d94-tzvgw" podStartSLOduration=1.947231634 podStartE2EDuration="3.60406552s" podCreationTimestamp="2025-02-13 15:39:31 +0000 UTC" firstStartedPulling="2025-02-13 15:39:32.41111063 +0000 UTC m=+27.006474480" lastFinishedPulling="2025-02-13 15:39:34.067944476 +0000 UTC m=+28.663308366" observedRunningTime="2025-02-13 15:39:34.601616133 +0000 UTC m=+29.196980023" watchObservedRunningTime="2025-02-13 15:39:34.60406552 +0000 UTC m=+29.199429370" Feb 13 15:39:35.260255 containerd[1453]: time="2025-02-13T15:39:35.260214681Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:35.261935 containerd[1453]: time="2025-02-13T15:39:35.261870322Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5117811" Feb 13 15:39:35.262704 containerd[1453]: time="2025-02-13T15:39:35.262668421Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:35.264940 containerd[1453]: time="2025-02-13T15:39:35.264900784Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:35.265360 containerd[1453]: time="2025-02-13T15:39:35.265328015Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.197130401s" Feb 13 15:39:35.265360 containerd[1453]: time="2025-02-13T15:39:35.265355137Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Feb 13 15:39:35.268374 containerd[1453]: time="2025-02-13T15:39:35.268343996Z" level=info msg="CreateContainer within sandbox \"7d42b31540996f15e1b192650239e666ab57fe20e14c2fc34a523a35cc8b7fc4\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 13 15:39:35.279454 containerd[1453]: time="2025-02-13T15:39:35.279415926Z" level=info msg="CreateContainer within sandbox \"7d42b31540996f15e1b192650239e666ab57fe20e14c2fc34a523a35cc8b7fc4\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e844521c29e2c7cc6e478c0c5b9a94a7ab9be3ad83b6f00e913b5787826e6425\"" Feb 13 15:39:35.280979 containerd[1453]: time="2025-02-13T15:39:35.279892561Z" level=info msg="StartContainer for \"e844521c29e2c7cc6e478c0c5b9a94a7ab9be3ad83b6f00e913b5787826e6425\"" Feb 13 15:39:35.310615 systemd[1]: Started cri-containerd-e844521c29e2c7cc6e478c0c5b9a94a7ab9be3ad83b6f00e913b5787826e6425.scope - libcontainer container e844521c29e2c7cc6e478c0c5b9a94a7ab9be3ad83b6f00e913b5787826e6425. Feb 13 15:39:35.337604 containerd[1453]: time="2025-02-13T15:39:35.337564702Z" level=info msg="StartContainer for \"e844521c29e2c7cc6e478c0c5b9a94a7ab9be3ad83b6f00e913b5787826e6425\" returns successfully" Feb 13 15:39:35.364254 systemd[1]: Started sshd@7-10.0.0.113:22-10.0.0.1:51066.service - OpenSSH per-connection server daemon (10.0.0.1:51066). Feb 13 15:39:35.367317 systemd[1]: cri-containerd-e844521c29e2c7cc6e478c0c5b9a94a7ab9be3ad83b6f00e913b5787826e6425.scope: Deactivated successfully. 
Feb 13 15:39:35.397939 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e844521c29e2c7cc6e478c0c5b9a94a7ab9be3ad83b6f00e913b5787826e6425-rootfs.mount: Deactivated successfully. Feb 13 15:39:35.500041 kubelet[2589]: E0213 15:39:35.499968 2589 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9jl5n" podUID="0c3e32e2-3a7c-428a-a18f-8761ef2b92d8" Feb 13 15:39:35.506525 containerd[1453]: time="2025-02-13T15:39:35.497235149Z" level=info msg="shim disconnected" id=e844521c29e2c7cc6e478c0c5b9a94a7ab9be3ad83b6f00e913b5787826e6425 namespace=k8s.io Feb 13 15:39:35.506525 containerd[1453]: time="2025-02-13T15:39:35.506401620Z" level=warning msg="cleaning up after shim disconnected" id=e844521c29e2c7cc6e478c0c5b9a94a7ab9be3ad83b6f00e913b5787826e6425 namespace=k8s.io Feb 13 15:39:35.506525 containerd[1453]: time="2025-02-13T15:39:35.506416181Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:39:35.510682 sshd[3284]: Accepted publickey for core from 10.0.0.1 port 51066 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY Feb 13 15:39:35.512519 sshd-session[3284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:39:35.516301 systemd-logind[1429]: New session 8 of user core. Feb 13 15:39:35.524593 systemd[1]: Started session-8.scope - Session 8 of User core. 
Feb 13 15:39:35.596207 kubelet[2589]: E0213 15:39:35.596175 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:39:35.597690 kubelet[2589]: E0213 15:39:35.597262 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:39:35.599177 containerd[1453]: time="2025-02-13T15:39:35.599108405Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Feb 13 15:39:35.672507 sshd[3312]: Connection closed by 10.0.0.1 port 51066 Feb 13 15:39:35.672867 sshd-session[3284]: pam_unix(sshd:session): session closed for user core Feb 13 15:39:35.676281 systemd[1]: sshd@7-10.0.0.113:22-10.0.0.1:51066.service: Deactivated successfully. Feb 13 15:39:35.679077 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 15:39:35.680023 systemd-logind[1429]: Session 8 logged out. Waiting for processes to exit. Feb 13 15:39:35.682898 systemd-logind[1429]: Removed session 8. 
Feb 13 15:39:36.602889 kubelet[2589]: E0213 15:39:36.602847 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:39:37.500092 kubelet[2589]: E0213 15:39:37.500060 2589 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9jl5n" podUID="0c3e32e2-3a7c-428a-a18f-8761ef2b92d8" Feb 13 15:39:39.159695 containerd[1453]: time="2025-02-13T15:39:39.159614397Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:39.160656 containerd[1453]: time="2025-02-13T15:39:39.160427368Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Feb 13 15:39:39.161561 containerd[1453]: time="2025-02-13T15:39:39.161521758Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:39.163911 containerd[1453]: time="2025-02-13T15:39:39.163851267Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:39.164746 containerd[1453]: time="2025-02-13T15:39:39.164712361Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 3.565557994s" 
Feb 13 15:39:39.164817 containerd[1453]: time="2025-02-13T15:39:39.164750684Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Feb 13 15:39:39.166652 containerd[1453]: time="2025-02-13T15:39:39.166616323Z" level=info msg="CreateContainer within sandbox \"7d42b31540996f15e1b192650239e666ab57fe20e14c2fc34a523a35cc8b7fc4\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 15:39:39.188662 containerd[1453]: time="2025-02-13T15:39:39.188600524Z" level=info msg="CreateContainer within sandbox \"7d42b31540996f15e1b192650239e666ab57fe20e14c2fc34a523a35cc8b7fc4\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"2ce5a2871604ada79823843e91c8d6dff7185cd6ca0e285492d3b9ad27197649\"" Feb 13 15:39:39.189434 containerd[1453]: time="2025-02-13T15:39:39.189392575Z" level=info msg="StartContainer for \"2ce5a2871604ada79823843e91c8d6dff7185cd6ca0e285492d3b9ad27197649\"" Feb 13 15:39:39.219665 systemd[1]: Started cri-containerd-2ce5a2871604ada79823843e91c8d6dff7185cd6ca0e285492d3b9ad27197649.scope - libcontainer container 2ce5a2871604ada79823843e91c8d6dff7185cd6ca0e285492d3b9ad27197649. 
Feb 13 15:39:39.290760 containerd[1453]: time="2025-02-13T15:39:39.289845337Z" level=info msg="StartContainer for \"2ce5a2871604ada79823843e91c8d6dff7185cd6ca0e285492d3b9ad27197649\" returns successfully" Feb 13 15:39:39.500718 kubelet[2589]: E0213 15:39:39.499248 2589 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9jl5n" podUID="0c3e32e2-3a7c-428a-a18f-8761ef2b92d8" Feb 13 15:39:39.610611 kubelet[2589]: E0213 15:39:39.610516 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:39:40.054607 systemd[1]: cri-containerd-2ce5a2871604ada79823843e91c8d6dff7185cd6ca0e285492d3b9ad27197649.scope: Deactivated successfully. Feb 13 15:39:40.073248 kubelet[2589]: I0213 15:39:40.073132 2589 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 15:39:40.073576 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2ce5a2871604ada79823843e91c8d6dff7185cd6ca0e285492d3b9ad27197649-rootfs.mount: Deactivated successfully. Feb 13 15:39:40.117080 kubelet[2589]: I0213 15:39:40.117021 2589 topology_manager.go:215] "Topology Admit Handler" podUID="22169313-af53-4d8b-b855-dc02e6d1e640" podNamespace="kube-system" podName="coredns-76f75df574-tzqqh" Feb 13 15:39:40.125394 systemd[1]: Created slice kubepods-burstable-pod22169313_af53_4d8b_b855_dc02e6d1e640.slice - libcontainer container kubepods-burstable-pod22169313_af53_4d8b_b855_dc02e6d1e640.slice. 
Feb 13 15:39:40.135476 kubelet[2589]: I0213 15:39:40.132363 2589 topology_manager.go:215] "Topology Admit Handler" podUID="70d03d25-2cd5-469b-b092-195e4bf21efe" podNamespace="calico-system" podName="calico-kube-controllers-67d55cd4f9-c8fqd" Feb 13 15:39:40.135476 kubelet[2589]: I0213 15:39:40.132884 2589 topology_manager.go:215] "Topology Admit Handler" podUID="95d7909a-cd44-4f88-af35-6de766421d4b" podNamespace="calico-apiserver" podName="calico-apiserver-655c6976bf-p7qqq" Feb 13 15:39:40.135476 kubelet[2589]: I0213 15:39:40.132999 2589 topology_manager.go:215] "Topology Admit Handler" podUID="74996f45-87e3-49ee-bffd-dfcfa7bb4a84" podNamespace="kube-system" podName="coredns-76f75df574-m78g5" Feb 13 15:39:40.136456 containerd[1453]: time="2025-02-13T15:39:40.135842550Z" level=info msg="shim disconnected" id=2ce5a2871604ada79823843e91c8d6dff7185cd6ca0e285492d3b9ad27197649 namespace=k8s.io Feb 13 15:39:40.136456 containerd[1453]: time="2025-02-13T15:39:40.135900833Z" level=warning msg="cleaning up after shim disconnected" id=2ce5a2871604ada79823843e91c8d6dff7185cd6ca0e285492d3b9ad27197649 namespace=k8s.io Feb 13 15:39:40.136456 containerd[1453]: time="2025-02-13T15:39:40.135909594Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:39:40.144697 kubelet[2589]: I0213 15:39:40.144660 2589 topology_manager.go:215] "Topology Admit Handler" podUID="043eaf10-8df2-4749-97a8-7923e4159aba" podNamespace="calico-apiserver" podName="calico-apiserver-655c6976bf-dltfc" Feb 13 15:39:40.147949 systemd[1]: Created slice kubepods-besteffort-pod70d03d25_2cd5_469b_b092_195e4bf21efe.slice - libcontainer container kubepods-besteffort-pod70d03d25_2cd5_469b_b092_195e4bf21efe.slice. 
Feb 13 15:39:40.164154 kubelet[2589]: I0213 15:39:40.163849 2589 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zxp2\" (UniqueName: \"kubernetes.io/projected/70d03d25-2cd5-469b-b092-195e4bf21efe-kube-api-access-2zxp2\") pod \"calico-kube-controllers-67d55cd4f9-c8fqd\" (UID: \"70d03d25-2cd5-469b-b092-195e4bf21efe\") " pod="calico-system/calico-kube-controllers-67d55cd4f9-c8fqd" Feb 13 15:39:40.164154 kubelet[2589]: I0213 15:39:40.163896 2589 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74996f45-87e3-49ee-bffd-dfcfa7bb4a84-config-volume\") pod \"coredns-76f75df574-m78g5\" (UID: \"74996f45-87e3-49ee-bffd-dfcfa7bb4a84\") " pod="kube-system/coredns-76f75df574-m78g5" Feb 13 15:39:40.164154 kubelet[2589]: I0213 15:39:40.163938 2589 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/043eaf10-8df2-4749-97a8-7923e4159aba-calico-apiserver-certs\") pod \"calico-apiserver-655c6976bf-dltfc\" (UID: \"043eaf10-8df2-4749-97a8-7923e4159aba\") " pod="calico-apiserver/calico-apiserver-655c6976bf-dltfc" Feb 13 15:39:40.164154 kubelet[2589]: I0213 15:39:40.163962 2589 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/22169313-af53-4d8b-b855-dc02e6d1e640-config-volume\") pod \"coredns-76f75df574-tzqqh\" (UID: \"22169313-af53-4d8b-b855-dc02e6d1e640\") " pod="kube-system/coredns-76f75df574-tzqqh" Feb 13 15:39:40.164154 kubelet[2589]: I0213 15:39:40.163997 2589 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwf45\" (UniqueName: \"kubernetes.io/projected/22169313-af53-4d8b-b855-dc02e6d1e640-kube-api-access-xwf45\") pod \"coredns-76f75df574-tzqqh\" 
(UID: \"22169313-af53-4d8b-b855-dc02e6d1e640\") " pod="kube-system/coredns-76f75df574-tzqqh" Feb 13 15:39:40.163938 systemd[1]: Created slice kubepods-burstable-pod74996f45_87e3_49ee_bffd_dfcfa7bb4a84.slice - libcontainer container kubepods-burstable-pod74996f45_87e3_49ee_bffd_dfcfa7bb4a84.slice. Feb 13 15:39:40.164656 kubelet[2589]: I0213 15:39:40.164022 2589 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlv2v\" (UniqueName: \"kubernetes.io/projected/74996f45-87e3-49ee-bffd-dfcfa7bb4a84-kube-api-access-mlv2v\") pod \"coredns-76f75df574-m78g5\" (UID: \"74996f45-87e3-49ee-bffd-dfcfa7bb4a84\") " pod="kube-system/coredns-76f75df574-m78g5" Feb 13 15:39:40.164656 kubelet[2589]: I0213 15:39:40.164054 2589 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jndr5\" (UniqueName: \"kubernetes.io/projected/043eaf10-8df2-4749-97a8-7923e4159aba-kube-api-access-jndr5\") pod \"calico-apiserver-655c6976bf-dltfc\" (UID: \"043eaf10-8df2-4749-97a8-7923e4159aba\") " pod="calico-apiserver/calico-apiserver-655c6976bf-dltfc" Feb 13 15:39:40.164656 kubelet[2589]: I0213 15:39:40.164087 2589 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70d03d25-2cd5-469b-b092-195e4bf21efe-tigera-ca-bundle\") pod \"calico-kube-controllers-67d55cd4f9-c8fqd\" (UID: \"70d03d25-2cd5-469b-b092-195e4bf21efe\") " pod="calico-system/calico-kube-controllers-67d55cd4f9-c8fqd" Feb 13 15:39:40.164656 kubelet[2589]: I0213 15:39:40.164108 2589 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/95d7909a-cd44-4f88-af35-6de766421d4b-calico-apiserver-certs\") pod \"calico-apiserver-655c6976bf-p7qqq\" (UID: \"95d7909a-cd44-4f88-af35-6de766421d4b\") " 
pod="calico-apiserver/calico-apiserver-655c6976bf-p7qqq" Feb 13 15:39:40.164656 kubelet[2589]: I0213 15:39:40.164131 2589 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8drj6\" (UniqueName: \"kubernetes.io/projected/95d7909a-cd44-4f88-af35-6de766421d4b-kube-api-access-8drj6\") pod \"calico-apiserver-655c6976bf-p7qqq\" (UID: \"95d7909a-cd44-4f88-af35-6de766421d4b\") " pod="calico-apiserver/calico-apiserver-655c6976bf-p7qqq" Feb 13 15:39:40.171961 systemd[1]: Created slice kubepods-besteffort-pod95d7909a_cd44_4f88_af35_6de766421d4b.slice - libcontainer container kubepods-besteffort-pod95d7909a_cd44_4f88_af35_6de766421d4b.slice. Feb 13 15:39:40.178888 systemd[1]: Created slice kubepods-besteffort-pod043eaf10_8df2_4749_97a8_7923e4159aba.slice - libcontainer container kubepods-besteffort-pod043eaf10_8df2_4749_97a8_7923e4159aba.slice. Feb 13 15:39:40.430291 kubelet[2589]: E0213 15:39:40.430248 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:39:40.430947 containerd[1453]: time="2025-02-13T15:39:40.430914765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-tzqqh,Uid:22169313-af53-4d8b-b855-dc02e6d1e640,Namespace:kube-system,Attempt:0,}" Feb 13 15:39:40.457134 containerd[1453]: time="2025-02-13T15:39:40.457064939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67d55cd4f9-c8fqd,Uid:70d03d25-2cd5-469b-b092-195e4bf21efe,Namespace:calico-system,Attempt:0,}" Feb 13 15:39:40.471124 kubelet[2589]: E0213 15:39:40.471006 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:39:40.472585 containerd[1453]: time="2025-02-13T15:39:40.472530254Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-76f75df574-m78g5,Uid:74996f45-87e3-49ee-bffd-dfcfa7bb4a84,Namespace:kube-system,Attempt:0,}" Feb 13 15:39:40.483050 containerd[1453]: time="2025-02-13T15:39:40.482781767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-655c6976bf-p7qqq,Uid:95d7909a-cd44-4f88-af35-6de766421d4b,Namespace:calico-apiserver,Attempt:0,}" Feb 13 15:39:40.483050 containerd[1453]: time="2025-02-13T15:39:40.482830530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-655c6976bf-dltfc,Uid:043eaf10-8df2-4749-97a8-7923e4159aba,Namespace:calico-apiserver,Attempt:0,}" Feb 13 15:39:40.615420 kubelet[2589]: E0213 15:39:40.615384 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:39:40.616761 containerd[1453]: time="2025-02-13T15:39:40.616657191Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Feb 13 15:39:40.684266 systemd[1]: Started sshd@8-10.0.0.113:22-10.0.0.1:51078.service - OpenSSH per-connection server daemon (10.0.0.1:51078). Feb 13 15:39:40.747940 sshd[3409]: Accepted publickey for core from 10.0.0.1 port 51078 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY Feb 13 15:39:40.746048 sshd-session[3409]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:39:40.781836 systemd-logind[1429]: New session 9 of user core. Feb 13 15:39:40.797888 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 15:39:41.085263 sshd[3436]: Connection closed by 10.0.0.1 port 51078 Feb 13 15:39:41.085906 sshd-session[3409]: pam_unix(sshd:session): session closed for user core Feb 13 15:39:41.090967 systemd[1]: sshd@8-10.0.0.113:22-10.0.0.1:51078.service: Deactivated successfully. Feb 13 15:39:41.092776 systemd[1]: session-9.scope: Deactivated successfully. 
Feb 13 15:39:41.094119 systemd-logind[1429]: Session 9 logged out. Waiting for processes to exit. Feb 13 15:39:41.096171 systemd-logind[1429]: Removed session 9. Feb 13 15:39:41.170666 containerd[1453]: time="2025-02-13T15:39:41.170606789Z" level=error msg="Failed to destroy network for sandbox \"bbe0e585cf74fab1708cc4fe366a97dc7b448d8d7a1e597df3010cf40cbe9ec3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:41.174020 containerd[1453]: time="2025-02-13T15:39:41.173862103Z" level=error msg="Failed to destroy network for sandbox \"fc9238e716f505d3af42ce11bca5e656f52405b452391535ed40e201ac9642d7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:41.177546 containerd[1453]: time="2025-02-13T15:39:41.177367073Z" level=error msg="encountered an error cleaning up failed sandbox \"fc9238e716f505d3af42ce11bca5e656f52405b452391535ed40e201ac9642d7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:41.177734 containerd[1453]: time="2025-02-13T15:39:41.177673412Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-655c6976bf-dltfc,Uid:043eaf10-8df2-4749-97a8-7923e4159aba,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fc9238e716f505d3af42ce11bca5e656f52405b452391535ed40e201ac9642d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:41.179750 containerd[1453]: 
time="2025-02-13T15:39:41.179695333Z" level=error msg="Failed to destroy network for sandbox \"04e549bcbe5ef51d7da81536fa57c286356e230b0a4a8de09f3b8b24843dcce6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:41.181852 containerd[1453]: time="2025-02-13T15:39:41.181788578Z" level=error msg="encountered an error cleaning up failed sandbox \"04e549bcbe5ef51d7da81536fa57c286356e230b0a4a8de09f3b8b24843dcce6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:41.181932 containerd[1453]: time="2025-02-13T15:39:41.181906705Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67d55cd4f9-c8fqd,Uid:70d03d25-2cd5-469b-b092-195e4bf21efe,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"04e549bcbe5ef51d7da81536fa57c286356e230b0a4a8de09f3b8b24843dcce6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:41.182609 containerd[1453]: time="2025-02-13T15:39:41.182392294Z" level=error msg="Failed to destroy network for sandbox \"1366802ebaa9a45a0b2a61073284e55df3a597069127539e15d8cfea7f6ea1a6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:41.183710 containerd[1453]: time="2025-02-13T15:39:41.183188782Z" level=error msg="encountered an error cleaning up failed sandbox \"1366802ebaa9a45a0b2a61073284e55df3a597069127539e15d8cfea7f6ea1a6\", marking sandbox state as 
SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:41.183710 containerd[1453]: time="2025-02-13T15:39:41.183257626Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-655c6976bf-p7qqq,Uid:95d7909a-cd44-4f88-af35-6de766421d4b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1366802ebaa9a45a0b2a61073284e55df3a597069127539e15d8cfea7f6ea1a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:41.183849 kubelet[2589]: E0213 15:39:41.183316 2589 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc9238e716f505d3af42ce11bca5e656f52405b452391535ed40e201ac9642d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:41.183849 kubelet[2589]: E0213 15:39:41.183405 2589 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc9238e716f505d3af42ce11bca5e656f52405b452391535ed40e201ac9642d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-655c6976bf-dltfc" Feb 13 15:39:41.183849 kubelet[2589]: E0213 15:39:41.183425 2589 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc9238e716f505d3af42ce11bca5e656f52405b452391535ed40e201ac9642d7\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-655c6976bf-dltfc" Feb 13 15:39:41.183849 kubelet[2589]: E0213 15:39:41.183503 2589 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1366802ebaa9a45a0b2a61073284e55df3a597069127539e15d8cfea7f6ea1a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:41.183957 kubelet[2589]: E0213 15:39:41.183549 2589 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1366802ebaa9a45a0b2a61073284e55df3a597069127539e15d8cfea7f6ea1a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-655c6976bf-p7qqq" Feb 13 15:39:41.183957 kubelet[2589]: E0213 15:39:41.183568 2589 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1366802ebaa9a45a0b2a61073284e55df3a597069127539e15d8cfea7f6ea1a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-655c6976bf-p7qqq" Feb 13 15:39:41.183957 kubelet[2589]: E0213 15:39:41.183611 2589 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-655c6976bf-p7qqq_calico-apiserver(95d7909a-cd44-4f88-af35-6de766421d4b)\" with CreatePodSandboxError: \"Failed to create sandbox for 
pod \\\"calico-apiserver-655c6976bf-p7qqq_calico-apiserver(95d7909a-cd44-4f88-af35-6de766421d4b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1366802ebaa9a45a0b2a61073284e55df3a597069127539e15d8cfea7f6ea1a6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-655c6976bf-p7qqq" podUID="95d7909a-cd44-4f88-af35-6de766421d4b" Feb 13 15:39:41.184053 kubelet[2589]: E0213 15:39:41.183513 2589 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-655c6976bf-dltfc_calico-apiserver(043eaf10-8df2-4749-97a8-7923e4159aba)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-655c6976bf-dltfc_calico-apiserver(043eaf10-8df2-4749-97a8-7923e4159aba)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fc9238e716f505d3af42ce11bca5e656f52405b452391535ed40e201ac9642d7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-655c6976bf-dltfc" podUID="043eaf10-8df2-4749-97a8-7923e4159aba" Feb 13 15:39:41.184573 kubelet[2589]: E0213 15:39:41.184375 2589 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04e549bcbe5ef51d7da81536fa57c286356e230b0a4a8de09f3b8b24843dcce6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:41.184573 kubelet[2589]: E0213 15:39:41.184467 2589 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"04e549bcbe5ef51d7da81536fa57c286356e230b0a4a8de09f3b8b24843dcce6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-67d55cd4f9-c8fqd" Feb 13 15:39:41.184573 kubelet[2589]: E0213 15:39:41.184488 2589 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04e549bcbe5ef51d7da81536fa57c286356e230b0a4a8de09f3b8b24843dcce6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-67d55cd4f9-c8fqd" Feb 13 15:39:41.184685 kubelet[2589]: E0213 15:39:41.184548 2589 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-67d55cd4f9-c8fqd_calico-system(70d03d25-2cd5-469b-b092-195e4bf21efe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-67d55cd4f9-c8fqd_calico-system(70d03d25-2cd5-469b-b092-195e4bf21efe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"04e549bcbe5ef51d7da81536fa57c286356e230b0a4a8de09f3b8b24843dcce6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-67d55cd4f9-c8fqd" podUID="70d03d25-2cd5-469b-b092-195e4bf21efe" Feb 13 15:39:41.184752 containerd[1453]: time="2025-02-13T15:39:41.184685871Z" level=error msg="encountered an error cleaning up failed sandbox \"bbe0e585cf74fab1708cc4fe366a97dc7b448d8d7a1e597df3010cf40cbe9ec3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:41.185038 containerd[1453]: time="2025-02-13T15:39:41.184788877Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-m78g5,Uid:74996f45-87e3-49ee-bffd-dfcfa7bb4a84,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bbe0e585cf74fab1708cc4fe366a97dc7b448d8d7a1e597df3010cf40cbe9ec3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:41.185301 kubelet[2589]: E0213 15:39:41.185281 2589 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bbe0e585cf74fab1708cc4fe366a97dc7b448d8d7a1e597df3010cf40cbe9ec3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:41.185468 kubelet[2589]: E0213 15:39:41.185426 2589 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bbe0e585cf74fab1708cc4fe366a97dc7b448d8d7a1e597df3010cf40cbe9ec3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-m78g5" Feb 13 15:39:41.185587 kubelet[2589]: E0213 15:39:41.185552 2589 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bbe0e585cf74fab1708cc4fe366a97dc7b448d8d7a1e597df3010cf40cbe9ec3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-m78g5" Feb 13 15:39:41.185713 kubelet[2589]: E0213 15:39:41.185700 2589 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-m78g5_kube-system(74996f45-87e3-49ee-bffd-dfcfa7bb4a84)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-m78g5_kube-system(74996f45-87e3-49ee-bffd-dfcfa7bb4a84)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bbe0e585cf74fab1708cc4fe366a97dc7b448d8d7a1e597df3010cf40cbe9ec3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-m78g5" podUID="74996f45-87e3-49ee-bffd-dfcfa7bb4a84" Feb 13 15:39:41.189327 containerd[1453]: time="2025-02-13T15:39:41.189135258Z" level=error msg="Failed to destroy network for sandbox \"cda662325bbcccb4b3b73cb9c1a4ba575d4900e6a01847f7536483bed3456eb0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:41.189815 containerd[1453]: time="2025-02-13T15:39:41.189695371Z" level=error msg="encountered an error cleaning up failed sandbox \"cda662325bbcccb4b3b73cb9c1a4ba575d4900e6a01847f7536483bed3456eb0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:41.189815 containerd[1453]: time="2025-02-13T15:39:41.189768696Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-tzqqh,Uid:22169313-af53-4d8b-b855-dc02e6d1e640,Namespace:kube-system,Attempt:0,} 
failed, error" error="failed to setup network for sandbox \"cda662325bbcccb4b3b73cb9c1a4ba575d4900e6a01847f7536483bed3456eb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:41.190336 kubelet[2589]: E0213 15:39:41.190157 2589 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cda662325bbcccb4b3b73cb9c1a4ba575d4900e6a01847f7536483bed3456eb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:41.190336 kubelet[2589]: E0213 15:39:41.190206 2589 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cda662325bbcccb4b3b73cb9c1a4ba575d4900e6a01847f7536483bed3456eb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-tzqqh" Feb 13 15:39:41.190336 kubelet[2589]: E0213 15:39:41.190231 2589 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cda662325bbcccb4b3b73cb9c1a4ba575d4900e6a01847f7536483bed3456eb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-tzqqh" Feb 13 15:39:41.190473 kubelet[2589]: E0213 15:39:41.190310 2589 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-tzqqh_kube-system(22169313-af53-4d8b-b855-dc02e6d1e640)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-tzqqh_kube-system(22169313-af53-4d8b-b855-dc02e6d1e640)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cda662325bbcccb4b3b73cb9c1a4ba575d4900e6a01847f7536483bed3456eb0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-tzqqh" podUID="22169313-af53-4d8b-b855-dc02e6d1e640" Feb 13 15:39:41.273957 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cda662325bbcccb4b3b73cb9c1a4ba575d4900e6a01847f7536483bed3456eb0-shm.mount: Deactivated successfully. Feb 13 15:39:41.504878 systemd[1]: Created slice kubepods-besteffort-pod0c3e32e2_3a7c_428a_a18f_8761ef2b92d8.slice - libcontainer container kubepods-besteffort-pod0c3e32e2_3a7c_428a_a18f_8761ef2b92d8.slice. Feb 13 15:39:41.507292 containerd[1453]: time="2025-02-13T15:39:41.507239217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9jl5n,Uid:0c3e32e2-3a7c-428a-a18f-8761ef2b92d8,Namespace:calico-system,Attempt:0,}" Feb 13 15:39:41.558740 containerd[1453]: time="2025-02-13T15:39:41.558210347Z" level=error msg="Failed to destroy network for sandbox \"323440efec752a3eb15eea898bfaffb5e63aebd2a734fd29578ba11bb1b33be8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:41.558740 containerd[1453]: time="2025-02-13T15:39:41.558581930Z" level=error msg="encountered an error cleaning up failed sandbox \"323440efec752a3eb15eea898bfaffb5e63aebd2a734fd29578ba11bb1b33be8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Feb 13 15:39:41.558740 containerd[1453]: time="2025-02-13T15:39:41.558641733Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9jl5n,Uid:0c3e32e2-3a7c-428a-a18f-8761ef2b92d8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"323440efec752a3eb15eea898bfaffb5e63aebd2a734fd29578ba11bb1b33be8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:41.559226 kubelet[2589]: E0213 15:39:41.559063 2589 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"323440efec752a3eb15eea898bfaffb5e63aebd2a734fd29578ba11bb1b33be8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:41.559226 kubelet[2589]: E0213 15:39:41.559156 2589 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"323440efec752a3eb15eea898bfaffb5e63aebd2a734fd29578ba11bb1b33be8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9jl5n" Feb 13 15:39:41.559226 kubelet[2589]: E0213 15:39:41.559178 2589 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"323440efec752a3eb15eea898bfaffb5e63aebd2a734fd29578ba11bb1b33be8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/csi-node-driver-9jl5n" Feb 13 15:39:41.559414 kubelet[2589]: E0213 15:39:41.559357 2589 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9jl5n_calico-system(0c3e32e2-3a7c-428a-a18f-8761ef2b92d8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9jl5n_calico-system(0c3e32e2-3a7c-428a-a18f-8761ef2b92d8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"323440efec752a3eb15eea898bfaffb5e63aebd2a734fd29578ba11bb1b33be8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9jl5n" podUID="0c3e32e2-3a7c-428a-a18f-8761ef2b92d8" Feb 13 15:39:41.560555 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-323440efec752a3eb15eea898bfaffb5e63aebd2a734fd29578ba11bb1b33be8-shm.mount: Deactivated successfully. 
Feb 13 15:39:41.619126 kubelet[2589]: I0213 15:39:41.619092 2589 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="323440efec752a3eb15eea898bfaffb5e63aebd2a734fd29578ba11bb1b33be8" Feb 13 15:39:41.620816 containerd[1453]: time="2025-02-13T15:39:41.620281702Z" level=info msg="StopPodSandbox for \"323440efec752a3eb15eea898bfaffb5e63aebd2a734fd29578ba11bb1b33be8\"" Feb 13 15:39:41.620816 containerd[1453]: time="2025-02-13T15:39:41.620470754Z" level=info msg="Ensure that sandbox 323440efec752a3eb15eea898bfaffb5e63aebd2a734fd29578ba11bb1b33be8 in task-service has been cleanup successfully" Feb 13 15:39:41.621425 containerd[1453]: time="2025-02-13T15:39:41.621023187Z" level=info msg="TearDown network for sandbox \"323440efec752a3eb15eea898bfaffb5e63aebd2a734fd29578ba11bb1b33be8\" successfully" Feb 13 15:39:41.621425 containerd[1453]: time="2025-02-13T15:39:41.621060029Z" level=info msg="StopPodSandbox for \"323440efec752a3eb15eea898bfaffb5e63aebd2a734fd29578ba11bb1b33be8\" returns successfully" Feb 13 15:39:41.622357 systemd[1]: run-netns-cni\x2d45ee20a8\x2d333c\x2d295c\x2d07aa\x2d3ee61937abd7.mount: Deactivated successfully. 
Feb 13 15:39:41.622547 containerd[1453]: time="2025-02-13T15:39:41.622516996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9jl5n,Uid:0c3e32e2-3a7c-428a-a18f-8761ef2b92d8,Namespace:calico-system,Attempt:1,}" Feb 13 15:39:41.624466 kubelet[2589]: I0213 15:39:41.624081 2589 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1366802ebaa9a45a0b2a61073284e55df3a597069127539e15d8cfea7f6ea1a6" Feb 13 15:39:41.624672 containerd[1453]: time="2025-02-13T15:39:41.624647564Z" level=info msg="StopPodSandbox for \"1366802ebaa9a45a0b2a61073284e55df3a597069127539e15d8cfea7f6ea1a6\"" Feb 13 15:39:41.624820 containerd[1453]: time="2025-02-13T15:39:41.624801813Z" level=info msg="Ensure that sandbox 1366802ebaa9a45a0b2a61073284e55df3a597069127539e15d8cfea7f6ea1a6 in task-service has been cleanup successfully" Feb 13 15:39:41.625157 containerd[1453]: time="2025-02-13T15:39:41.625133553Z" level=info msg="TearDown network for sandbox \"1366802ebaa9a45a0b2a61073284e55df3a597069127539e15d8cfea7f6ea1a6\" successfully" Feb 13 15:39:41.625199 containerd[1453]: time="2025-02-13T15:39:41.625169035Z" level=info msg="StopPodSandbox for \"1366802ebaa9a45a0b2a61073284e55df3a597069127539e15d8cfea7f6ea1a6\" returns successfully" Feb 13 15:39:41.625591 containerd[1453]: time="2025-02-13T15:39:41.625561858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-655c6976bf-p7qqq,Uid:95d7909a-cd44-4f88-af35-6de766421d4b,Namespace:calico-apiserver,Attempt:1,}" Feb 13 15:39:41.625786 kubelet[2589]: I0213 15:39:41.625627 2589 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04e549bcbe5ef51d7da81536fa57c286356e230b0a4a8de09f3b8b24843dcce6" Feb 13 15:39:41.626300 containerd[1453]: time="2025-02-13T15:39:41.626274221Z" level=info msg="StopPodSandbox for \"04e549bcbe5ef51d7da81536fa57c286356e230b0a4a8de09f3b8b24843dcce6\"" Feb 13 15:39:41.626438 containerd[1453]: 
time="2025-02-13T15:39:41.626418110Z" level=info msg="Ensure that sandbox 04e549bcbe5ef51d7da81536fa57c286356e230b0a4a8de09f3b8b24843dcce6 in task-service has been cleanup successfully" Feb 13 15:39:41.626716 kubelet[2589]: I0213 15:39:41.626696 2589 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cda662325bbcccb4b3b73cb9c1a4ba575d4900e6a01847f7536483bed3456eb0" Feb 13 15:39:41.626947 containerd[1453]: time="2025-02-13T15:39:41.626778011Z" level=info msg="TearDown network for sandbox \"04e549bcbe5ef51d7da81536fa57c286356e230b0a4a8de09f3b8b24843dcce6\" successfully" Feb 13 15:39:41.626947 containerd[1453]: time="2025-02-13T15:39:41.626801773Z" level=info msg="StopPodSandbox for \"04e549bcbe5ef51d7da81536fa57c286356e230b0a4a8de09f3b8b24843dcce6\" returns successfully" Feb 13 15:39:41.627471 containerd[1453]: time="2025-02-13T15:39:41.627422250Z" level=info msg="StopPodSandbox for \"cda662325bbcccb4b3b73cb9c1a4ba575d4900e6a01847f7536483bed3456eb0\"" Feb 13 15:39:41.627748 containerd[1453]: time="2025-02-13T15:39:41.627718027Z" level=info msg="Ensure that sandbox cda662325bbcccb4b3b73cb9c1a4ba575d4900e6a01847f7536483bed3456eb0 in task-service has been cleanup successfully" Feb 13 15:39:41.628567 containerd[1453]: time="2025-02-13T15:39:41.628293422Z" level=info msg="TearDown network for sandbox \"cda662325bbcccb4b3b73cb9c1a4ba575d4900e6a01847f7536483bed3456eb0\" successfully" Feb 13 15:39:41.628567 containerd[1453]: time="2025-02-13T15:39:41.628320663Z" level=info msg="StopPodSandbox for \"cda662325bbcccb4b3b73cb9c1a4ba575d4900e6a01847f7536483bed3456eb0\" returns successfully" Feb 13 15:39:41.630105 kubelet[2589]: E0213 15:39:41.630080 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:39:41.630681 containerd[1453]: time="2025-02-13T15:39:41.630648003Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-76f75df574-tzqqh,Uid:22169313-af53-4d8b-b855-dc02e6d1e640,Namespace:kube-system,Attempt:1,}" Feb 13 15:39:41.631568 kubelet[2589]: I0213 15:39:41.631204 2589 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bbe0e585cf74fab1708cc4fe366a97dc7b448d8d7a1e597df3010cf40cbe9ec3" Feb 13 15:39:41.631861 containerd[1453]: time="2025-02-13T15:39:41.631799432Z" level=info msg="StopPodSandbox for \"bbe0e585cf74fab1708cc4fe366a97dc7b448d8d7a1e597df3010cf40cbe9ec3\"" Feb 13 15:39:41.632063 containerd[1453]: time="2025-02-13T15:39:41.632006804Z" level=info msg="Ensure that sandbox bbe0e585cf74fab1708cc4fe366a97dc7b448d8d7a1e597df3010cf40cbe9ec3 in task-service has been cleanup successfully" Feb 13 15:39:41.632274 containerd[1453]: time="2025-02-13T15:39:41.632226857Z" level=info msg="TearDown network for sandbox \"bbe0e585cf74fab1708cc4fe366a97dc7b448d8d7a1e597df3010cf40cbe9ec3\" successfully" Feb 13 15:39:41.632274 containerd[1453]: time="2025-02-13T15:39:41.632244618Z" level=info msg="StopPodSandbox for \"bbe0e585cf74fab1708cc4fe366a97dc7b448d8d7a1e597df3010cf40cbe9ec3\" returns successfully" Feb 13 15:39:41.632745 kubelet[2589]: E0213 15:39:41.632687 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:39:41.633548 containerd[1453]: time="2025-02-13T15:39:41.633479332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-m78g5,Uid:74996f45-87e3-49ee-bffd-dfcfa7bb4a84,Namespace:kube-system,Attempt:1,}" Feb 13 15:39:41.634493 kubelet[2589]: I0213 15:39:41.633766 2589 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc9238e716f505d3af42ce11bca5e656f52405b452391535ed40e201ac9642d7" Feb 13 15:39:41.634582 containerd[1453]: time="2025-02-13T15:39:41.634351144Z" level=info msg="StopPodSandbox for 
\"fc9238e716f505d3af42ce11bca5e656f52405b452391535ed40e201ac9642d7\"" Feb 13 15:39:41.634733 containerd[1453]: time="2025-02-13T15:39:41.634685084Z" level=info msg="Ensure that sandbox fc9238e716f505d3af42ce11bca5e656f52405b452391535ed40e201ac9642d7 in task-service has been cleanup successfully" Feb 13 15:39:41.635539 containerd[1453]: time="2025-02-13T15:39:41.635441010Z" level=info msg="TearDown network for sandbox \"fc9238e716f505d3af42ce11bca5e656f52405b452391535ed40e201ac9642d7\" successfully" Feb 13 15:39:41.635539 containerd[1453]: time="2025-02-13T15:39:41.635530495Z" level=info msg="StopPodSandbox for \"fc9238e716f505d3af42ce11bca5e656f52405b452391535ed40e201ac9642d7\" returns successfully" Feb 13 15:39:41.636211 containerd[1453]: time="2025-02-13T15:39:41.636174174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-655c6976bf-dltfc,Uid:043eaf10-8df2-4749-97a8-7923e4159aba,Namespace:calico-apiserver,Attempt:1,}" Feb 13 15:39:41.636951 containerd[1453]: time="2025-02-13T15:39:41.636879496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67d55cd4f9-c8fqd,Uid:70d03d25-2cd5-469b-b092-195e4bf21efe,Namespace:calico-system,Attempt:1,}" Feb 13 15:39:42.042227 containerd[1453]: time="2025-02-13T15:39:42.042165280Z" level=error msg="Failed to destroy network for sandbox \"50cf6045c8a1ddf321bab1e6e9a3f1fc9f3a297088fc52a6e468af940cbe1c10\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:42.042538 containerd[1453]: time="2025-02-13T15:39:42.042512380Z" level=error msg="encountered an error cleaning up failed sandbox \"50cf6045c8a1ddf321bab1e6e9a3f1fc9f3a297088fc52a6e468af940cbe1c10\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Feb 13 15:39:42.042711 containerd[1453]: time="2025-02-13T15:39:42.042685870Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9jl5n,Uid:0c3e32e2-3a7c-428a-a18f-8761ef2b92d8,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"50cf6045c8a1ddf321bab1e6e9a3f1fc9f3a297088fc52a6e468af940cbe1c10\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:42.042953 kubelet[2589]: E0213 15:39:42.042930 2589 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50cf6045c8a1ddf321bab1e6e9a3f1fc9f3a297088fc52a6e468af940cbe1c10\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:42.043383 kubelet[2589]: E0213 15:39:42.043059 2589 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50cf6045c8a1ddf321bab1e6e9a3f1fc9f3a297088fc52a6e468af940cbe1c10\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9jl5n" Feb 13 15:39:42.043383 kubelet[2589]: E0213 15:39:42.043095 2589 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50cf6045c8a1ddf321bab1e6e9a3f1fc9f3a297088fc52a6e468af940cbe1c10\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/csi-node-driver-9jl5n" Feb 13 15:39:42.043383 kubelet[2589]: E0213 15:39:42.043168 2589 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9jl5n_calico-system(0c3e32e2-3a7c-428a-a18f-8761ef2b92d8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9jl5n_calico-system(0c3e32e2-3a7c-428a-a18f-8761ef2b92d8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"50cf6045c8a1ddf321bab1e6e9a3f1fc9f3a297088fc52a6e468af940cbe1c10\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9jl5n" podUID="0c3e32e2-3a7c-428a-a18f-8761ef2b92d8" Feb 13 15:39:42.202343 containerd[1453]: time="2025-02-13T15:39:42.202288461Z" level=error msg="Failed to destroy network for sandbox \"5877426208bb0ec6f3b0bd6273d2a61f178cd37093bfdab85cd100b691fb5424\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:42.202830 containerd[1453]: time="2025-02-13T15:39:42.202688564Z" level=error msg="encountered an error cleaning up failed sandbox \"5877426208bb0ec6f3b0bd6273d2a61f178cd37093bfdab85cd100b691fb5424\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:42.202830 containerd[1453]: time="2025-02-13T15:39:42.202754488Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-tzqqh,Uid:22169313-af53-4d8b-b855-dc02e6d1e640,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox 
\"5877426208bb0ec6f3b0bd6273d2a61f178cd37093bfdab85cd100b691fb5424\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:42.203944 kubelet[2589]: E0213 15:39:42.203024 2589 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5877426208bb0ec6f3b0bd6273d2a61f178cd37093bfdab85cd100b691fb5424\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:42.203944 kubelet[2589]: E0213 15:39:42.203086 2589 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5877426208bb0ec6f3b0bd6273d2a61f178cd37093bfdab85cd100b691fb5424\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-tzqqh" Feb 13 15:39:42.203944 kubelet[2589]: E0213 15:39:42.203107 2589 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5877426208bb0ec6f3b0bd6273d2a61f178cd37093bfdab85cd100b691fb5424\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-tzqqh" Feb 13 15:39:42.204023 kubelet[2589]: E0213 15:39:42.203205 2589 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-tzqqh_kube-system(22169313-af53-4d8b-b855-dc02e6d1e640)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-76f75df574-tzqqh_kube-system(22169313-af53-4d8b-b855-dc02e6d1e640)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5877426208bb0ec6f3b0bd6273d2a61f178cd37093bfdab85cd100b691fb5424\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-tzqqh" podUID="22169313-af53-4d8b-b855-dc02e6d1e640" Feb 13 15:39:42.211217 containerd[1453]: time="2025-02-13T15:39:42.211171697Z" level=error msg="Failed to destroy network for sandbox \"c7b8464e6806d7515d6d160d433c08824e6e2de7760100550e98ff9d2b183f53\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:42.212698 containerd[1453]: time="2025-02-13T15:39:42.212611221Z" level=error msg="encountered an error cleaning up failed sandbox \"c7b8464e6806d7515d6d160d433c08824e6e2de7760100550e98ff9d2b183f53\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:42.212698 containerd[1453]: time="2025-02-13T15:39:42.212680665Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67d55cd4f9-c8fqd,Uid:70d03d25-2cd5-469b-b092-195e4bf21efe,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"c7b8464e6806d7515d6d160d433c08824e6e2de7760100550e98ff9d2b183f53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:42.212971 kubelet[2589]: E0213 15:39:42.212935 2589 remote_runtime.go:193] "RunPodSandbox 
from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7b8464e6806d7515d6d160d433c08824e6e2de7760100550e98ff9d2b183f53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:42.213046 kubelet[2589]: E0213 15:39:42.212991 2589 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7b8464e6806d7515d6d160d433c08824e6e2de7760100550e98ff9d2b183f53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-67d55cd4f9-c8fqd" Feb 13 15:39:42.213046 kubelet[2589]: E0213 15:39:42.213014 2589 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7b8464e6806d7515d6d160d433c08824e6e2de7760100550e98ff9d2b183f53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-67d55cd4f9-c8fqd" Feb 13 15:39:42.213179 kubelet[2589]: E0213 15:39:42.213089 2589 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-67d55cd4f9-c8fqd_calico-system(70d03d25-2cd5-469b-b092-195e4bf21efe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-67d55cd4f9-c8fqd_calico-system(70d03d25-2cd5-469b-b092-195e4bf21efe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c7b8464e6806d7515d6d160d433c08824e6e2de7760100550e98ff9d2b183f53\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-67d55cd4f9-c8fqd" podUID="70d03d25-2cd5-469b-b092-195e4bf21efe" Feb 13 15:39:42.214400 containerd[1453]: time="2025-02-13T15:39:42.214368363Z" level=error msg="Failed to destroy network for sandbox \"49e61bdc9174d5527e1c7974cc761bf4ead3dd292c49b47e6a610d38ca0b74ec\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:42.214896 containerd[1453]: time="2025-02-13T15:39:42.214753305Z" level=error msg="encountered an error cleaning up failed sandbox \"49e61bdc9174d5527e1c7974cc761bf4ead3dd292c49b47e6a610d38ca0b74ec\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:42.214896 containerd[1453]: time="2025-02-13T15:39:42.214810388Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-m78g5,Uid:74996f45-87e3-49ee-bffd-dfcfa7bb4a84,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"49e61bdc9174d5527e1c7974cc761bf4ead3dd292c49b47e6a610d38ca0b74ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:42.215671 kubelet[2589]: E0213 15:39:42.215641 2589 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49e61bdc9174d5527e1c7974cc761bf4ead3dd292c49b47e6a610d38ca0b74ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:42.215742 kubelet[2589]: E0213 15:39:42.215732 2589 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49e61bdc9174d5527e1c7974cc761bf4ead3dd292c49b47e6a610d38ca0b74ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-m78g5" Feb 13 15:39:42.215766 kubelet[2589]: E0213 15:39:42.215755 2589 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49e61bdc9174d5527e1c7974cc761bf4ead3dd292c49b47e6a610d38ca0b74ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-m78g5" Feb 13 15:39:42.215874 kubelet[2589]: E0213 15:39:42.215829 2589 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-m78g5_kube-system(74996f45-87e3-49ee-bffd-dfcfa7bb4a84)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-m78g5_kube-system(74996f45-87e3-49ee-bffd-dfcfa7bb4a84)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"49e61bdc9174d5527e1c7974cc761bf4ead3dd292c49b47e6a610d38ca0b74ec\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-m78g5" podUID="74996f45-87e3-49ee-bffd-dfcfa7bb4a84" Feb 13 15:39:42.234182 containerd[1453]: time="2025-02-13T15:39:42.234123030Z" level=error msg="Failed to destroy network for sandbox 
\"2782e5c6315792cdee1e8b9d3b9b6dea34d36948f0bc75d3eaf32e17a3b39779\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:42.235348 containerd[1453]: time="2025-02-13T15:39:42.235297898Z" level=error msg="encountered an error cleaning up failed sandbox \"2782e5c6315792cdee1e8b9d3b9b6dea34d36948f0bc75d3eaf32e17a3b39779\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:42.235394 containerd[1453]: time="2025-02-13T15:39:42.235368583Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-655c6976bf-dltfc,Uid:043eaf10-8df2-4749-97a8-7923e4159aba,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"2782e5c6315792cdee1e8b9d3b9b6dea34d36948f0bc75d3eaf32e17a3b39779\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:42.235961 kubelet[2589]: E0213 15:39:42.235605 2589 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2782e5c6315792cdee1e8b9d3b9b6dea34d36948f0bc75d3eaf32e17a3b39779\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:42.235961 kubelet[2589]: E0213 15:39:42.235655 2589 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2782e5c6315792cdee1e8b9d3b9b6dea34d36948f0bc75d3eaf32e17a3b39779\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-655c6976bf-dltfc" Feb 13 15:39:42.235961 kubelet[2589]: E0213 15:39:42.235675 2589 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2782e5c6315792cdee1e8b9d3b9b6dea34d36948f0bc75d3eaf32e17a3b39779\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-655c6976bf-dltfc" Feb 13 15:39:42.236067 kubelet[2589]: E0213 15:39:42.235730 2589 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-655c6976bf-dltfc_calico-apiserver(043eaf10-8df2-4749-97a8-7923e4159aba)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-655c6976bf-dltfc_calico-apiserver(043eaf10-8df2-4749-97a8-7923e4159aba)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2782e5c6315792cdee1e8b9d3b9b6dea34d36948f0bc75d3eaf32e17a3b39779\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-655c6976bf-dltfc" podUID="043eaf10-8df2-4749-97a8-7923e4159aba" Feb 13 15:39:42.244367 containerd[1453]: time="2025-02-13T15:39:42.244320423Z" level=error msg="Failed to destroy network for sandbox \"5c6df23e15af22b2b3475eb65ba94a3e2b7eca755b23c91b1d64f81c31949671\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:42.244668 
containerd[1453]: time="2025-02-13T15:39:42.244639201Z" level=error msg="encountered an error cleaning up failed sandbox \"5c6df23e15af22b2b3475eb65ba94a3e2b7eca755b23c91b1d64f81c31949671\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:42.244719 containerd[1453]: time="2025-02-13T15:39:42.244699725Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-655c6976bf-p7qqq,Uid:95d7909a-cd44-4f88-af35-6de766421d4b,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"5c6df23e15af22b2b3475eb65ba94a3e2b7eca755b23c91b1d64f81c31949671\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:42.244955 kubelet[2589]: E0213 15:39:42.244931 2589 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c6df23e15af22b2b3475eb65ba94a3e2b7eca755b23c91b1d64f81c31949671\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:42.245385 kubelet[2589]: E0213 15:39:42.245076 2589 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c6df23e15af22b2b3475eb65ba94a3e2b7eca755b23c91b1d64f81c31949671\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-655c6976bf-p7qqq" Feb 13 15:39:42.245385 kubelet[2589]: E0213 15:39:42.245113 
2589 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c6df23e15af22b2b3475eb65ba94a3e2b7eca755b23c91b1d64f81c31949671\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-655c6976bf-p7qqq" Feb 13 15:39:42.245385 kubelet[2589]: E0213 15:39:42.245185 2589 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-655c6976bf-p7qqq_calico-apiserver(95d7909a-cd44-4f88-af35-6de766421d4b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-655c6976bf-p7qqq_calico-apiserver(95d7909a-cd44-4f88-af35-6de766421d4b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5c6df23e15af22b2b3475eb65ba94a3e2b7eca755b23c91b1d64f81c31949671\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-655c6976bf-p7qqq" podUID="95d7909a-cd44-4f88-af35-6de766421d4b" Feb 13 15:39:42.274073 systemd[1]: run-netns-cni\x2d32a44e4c\x2d1a1a\x2dd2ff\x2d9426\x2d57d84a3de9cf.mount: Deactivated successfully. Feb 13 15:39:42.274184 systemd[1]: run-netns-cni\x2d1410e16c\x2d0875\x2d25b5\x2d7e9d\x2df3d075e49357.mount: Deactivated successfully. Feb 13 15:39:42.274234 systemd[1]: run-netns-cni\x2dfb503f6e\x2d0a52\x2d1dd9\x2dd1c5\x2d7763fbf2d1de.mount: Deactivated successfully. Feb 13 15:39:42.274279 systemd[1]: run-netns-cni\x2d2a508025\x2d2506\x2df0d3\x2d2c82\x2df71c3ffde564.mount: Deactivated successfully. Feb 13 15:39:42.274335 systemd[1]: run-netns-cni\x2d4fb0f492\x2d48a1\x2d97d1\x2d37af\x2d65ba292b6fa1.mount: Deactivated successfully. 
Feb 13 15:39:42.637124 kubelet[2589]: I0213 15:39:42.637060 2589 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5877426208bb0ec6f3b0bd6273d2a61f178cd37093bfdab85cd100b691fb5424" Feb 13 15:39:42.638050 containerd[1453]: time="2025-02-13T15:39:42.637761197Z" level=info msg="StopPodSandbox for \"5877426208bb0ec6f3b0bd6273d2a61f178cd37093bfdab85cd100b691fb5424\"" Feb 13 15:39:42.638508 containerd[1453]: time="2025-02-13T15:39:42.638247705Z" level=info msg="Ensure that sandbox 5877426208bb0ec6f3b0bd6273d2a61f178cd37093bfdab85cd100b691fb5424 in task-service has been cleanup successfully" Feb 13 15:39:42.639579 containerd[1453]: time="2025-02-13T15:39:42.639552101Z" level=info msg="TearDown network for sandbox \"5877426208bb0ec6f3b0bd6273d2a61f178cd37093bfdab85cd100b691fb5424\" successfully" Feb 13 15:39:42.639764 containerd[1453]: time="2025-02-13T15:39:42.639668548Z" level=info msg="StopPodSandbox for \"5877426208bb0ec6f3b0bd6273d2a61f178cd37093bfdab85cd100b691fb5424\" returns successfully" Feb 13 15:39:42.640670 systemd[1]: run-netns-cni\x2d24412adc\x2d00e5\x2d1f56\x2d7bc5\x2d7e722aef7c22.mount: Deactivated successfully. 
Feb 13 15:39:42.642148 containerd[1453]: time="2025-02-13T15:39:42.642105689Z" level=info msg="StopPodSandbox for \"cda662325bbcccb4b3b73cb9c1a4ba575d4900e6a01847f7536483bed3456eb0\"" Feb 13 15:39:42.642249 containerd[1453]: time="2025-02-13T15:39:42.642237737Z" level=info msg="TearDown network for sandbox \"cda662325bbcccb4b3b73cb9c1a4ba575d4900e6a01847f7536483bed3456eb0\" successfully" Feb 13 15:39:42.642502 containerd[1453]: time="2025-02-13T15:39:42.642249578Z" level=info msg="StopPodSandbox for \"cda662325bbcccb4b3b73cb9c1a4ba575d4900e6a01847f7536483bed3456eb0\" returns successfully" Feb 13 15:39:42.642888 kubelet[2589]: I0213 15:39:42.642594 2589 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49e61bdc9174d5527e1c7974cc761bf4ead3dd292c49b47e6a610d38ca0b74ec" Feb 13 15:39:42.644093 kubelet[2589]: E0213 15:39:42.642989 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:39:42.644222 containerd[1453]: time="2025-02-13T15:39:42.643392484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-tzqqh,Uid:22169313-af53-4d8b-b855-dc02e6d1e640,Namespace:kube-system,Attempt:2,}" Feb 13 15:39:42.644816 containerd[1453]: time="2025-02-13T15:39:42.644789725Z" level=info msg="StopPodSandbox for \"49e61bdc9174d5527e1c7974cc761bf4ead3dd292c49b47e6a610d38ca0b74ec\"" Feb 13 15:39:42.644986 containerd[1453]: time="2025-02-13T15:39:42.644968856Z" level=info msg="Ensure that sandbox 49e61bdc9174d5527e1c7974cc761bf4ead3dd292c49b47e6a610d38ca0b74ec in task-service has been cleanup successfully" Feb 13 15:39:42.646534 containerd[1453]: time="2025-02-13T15:39:42.646465623Z" level=info msg="TearDown network for sandbox \"49e61bdc9174d5527e1c7974cc761bf4ead3dd292c49b47e6a610d38ca0b74ec\" successfully" Feb 13 15:39:42.646632 containerd[1453]: time="2025-02-13T15:39:42.646553668Z" level=info 
msg="StopPodSandbox for \"49e61bdc9174d5527e1c7974cc761bf4ead3dd292c49b47e6a610d38ca0b74ec\" returns successfully" Feb 13 15:39:42.647440 kubelet[2589]: I0213 15:39:42.647406 2589 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2782e5c6315792cdee1e8b9d3b9b6dea34d36948f0bc75d3eaf32e17a3b39779" Feb 13 15:39:42.648061 containerd[1453]: time="2025-02-13T15:39:42.647703214Z" level=info msg="StopPodSandbox for \"bbe0e585cf74fab1708cc4fe366a97dc7b448d8d7a1e597df3010cf40cbe9ec3\"" Feb 13 15:39:42.648061 containerd[1453]: time="2025-02-13T15:39:42.647800300Z" level=info msg="TearDown network for sandbox \"bbe0e585cf74fab1708cc4fe366a97dc7b448d8d7a1e597df3010cf40cbe9ec3\" successfully" Feb 13 15:39:42.648061 containerd[1453]: time="2025-02-13T15:39:42.647811821Z" level=info msg="StopPodSandbox for \"bbe0e585cf74fab1708cc4fe366a97dc7b448d8d7a1e597df3010cf40cbe9ec3\" returns successfully" Feb 13 15:39:42.648399 systemd[1]: run-netns-cni\x2d62e17ce0\x2dab7e\x2df85e\x2db235\x2dab24357b3250.mount: Deactivated successfully. 
Feb 13 15:39:42.648878 kubelet[2589]: E0213 15:39:42.648709 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:39:42.649841 containerd[1453]: time="2025-02-13T15:39:42.649770015Z" level=info msg="StopPodSandbox for \"2782e5c6315792cdee1e8b9d3b9b6dea34d36948f0bc75d3eaf32e17a3b39779\"" Feb 13 15:39:42.649963 containerd[1453]: time="2025-02-13T15:39:42.649936344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-m78g5,Uid:74996f45-87e3-49ee-bffd-dfcfa7bb4a84,Namespace:kube-system,Attempt:2,}" Feb 13 15:39:42.650290 containerd[1453]: time="2025-02-13T15:39:42.649952745Z" level=info msg="Ensure that sandbox 2782e5c6315792cdee1e8b9d3b9b6dea34d36948f0bc75d3eaf32e17a3b39779 in task-service has been cleanup successfully" Feb 13 15:39:42.650647 containerd[1453]: time="2025-02-13T15:39:42.650621104Z" level=info msg="TearDown network for sandbox \"2782e5c6315792cdee1e8b9d3b9b6dea34d36948f0bc75d3eaf32e17a3b39779\" successfully" Feb 13 15:39:42.651165 containerd[1453]: time="2025-02-13T15:39:42.650824796Z" level=info msg="StopPodSandbox for \"2782e5c6315792cdee1e8b9d3b9b6dea34d36948f0bc75d3eaf32e17a3b39779\" returns successfully" Feb 13 15:39:42.652303 containerd[1453]: time="2025-02-13T15:39:42.652255759Z" level=info msg="StopPodSandbox for \"fc9238e716f505d3af42ce11bca5e656f52405b452391535ed40e201ac9642d7\"" Feb 13 15:39:42.652363 systemd[1]: run-netns-cni\x2d17cb8efc\x2d1af0\x2d5cee\x2d925f\x2de937285e52a2.mount: Deactivated successfully. 
Feb 13 15:39:42.654137 kubelet[2589]: I0213 15:39:42.653248 2589 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50cf6045c8a1ddf321bab1e6e9a3f1fc9f3a297088fc52a6e468af940cbe1c10"
Feb 13 15:39:42.654240 containerd[1453]: time="2025-02-13T15:39:42.654074665Z" level=info msg="StopPodSandbox for \"50cf6045c8a1ddf321bab1e6e9a3f1fc9f3a297088fc52a6e468af940cbe1c10\""
Feb 13 15:39:42.654455 containerd[1453]: time="2025-02-13T15:39:42.654283277Z" level=info msg="Ensure that sandbox 50cf6045c8a1ddf321bab1e6e9a3f1fc9f3a297088fc52a6e468af940cbe1c10 in task-service has been cleanup successfully"
Feb 13 15:39:42.656148 systemd[1]: run-netns-cni\x2d79503aab\x2da221\x2d3419\x2d05b0\x2d0e5eb39dee88.mount: Deactivated successfully.
Feb 13 15:39:42.657194 containerd[1453]: time="2025-02-13T15:39:42.657149003Z" level=info msg="TearDown network for sandbox \"50cf6045c8a1ddf321bab1e6e9a3f1fc9f3a297088fc52a6e468af940cbe1c10\" successfully"
Feb 13 15:39:42.657194 containerd[1453]: time="2025-02-13T15:39:42.657191486Z" level=info msg="StopPodSandbox for \"50cf6045c8a1ddf321bab1e6e9a3f1fc9f3a297088fc52a6e468af940cbe1c10\" returns successfully"
Feb 13 15:39:42.657563 containerd[1453]: time="2025-02-13T15:39:42.657430179Z" level=info msg="TearDown network for sandbox \"fc9238e716f505d3af42ce11bca5e656f52405b452391535ed40e201ac9642d7\" successfully"
Feb 13 15:39:42.657563 containerd[1453]: time="2025-02-13T15:39:42.657471142Z" level=info msg="StopPodSandbox for \"fc9238e716f505d3af42ce11bca5e656f52405b452391535ed40e201ac9642d7\" returns successfully"
Feb 13 15:39:42.658251 containerd[1453]: time="2025-02-13T15:39:42.658142661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-655c6976bf-dltfc,Uid:043eaf10-8df2-4749-97a8-7923e4159aba,Namespace:calico-apiserver,Attempt:2,}"
Feb 13 15:39:42.660476 kubelet[2589]: I0213 15:39:42.660430 2589 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c6df23e15af22b2b3475eb65ba94a3e2b7eca755b23c91b1d64f81c31949671"
Feb 13 15:39:42.661302 containerd[1453]: time="2025-02-13T15:39:42.661022628Z" level=info msg="StopPodSandbox for \"5c6df23e15af22b2b3475eb65ba94a3e2b7eca755b23c91b1d64f81c31949671\""
Feb 13 15:39:42.661302 containerd[1453]: time="2025-02-13T15:39:42.661201959Z" level=info msg="Ensure that sandbox 5c6df23e15af22b2b3475eb65ba94a3e2b7eca755b23c91b1d64f81c31949671 in task-service has been cleanup successfully"
Feb 13 15:39:42.662513 containerd[1453]: time="2025-02-13T15:39:42.662474312Z" level=info msg="TearDown network for sandbox \"5c6df23e15af22b2b3475eb65ba94a3e2b7eca755b23c91b1d64f81c31949671\" successfully"
Feb 13 15:39:42.662714 containerd[1453]: time="2025-02-13T15:39:42.662659963Z" level=info msg="StopPodSandbox for \"5c6df23e15af22b2b3475eb65ba94a3e2b7eca755b23c91b1d64f81c31949671\" returns successfully"
Feb 13 15:39:42.663503 containerd[1453]: time="2025-02-13T15:39:42.663194434Z" level=info msg="StopPodSandbox for \"1366802ebaa9a45a0b2a61073284e55df3a597069127539e15d8cfea7f6ea1a6\""
Feb 13 15:39:42.663689 containerd[1453]: time="2025-02-13T15:39:42.663581577Z" level=info msg="TearDown network for sandbox \"1366802ebaa9a45a0b2a61073284e55df3a597069127539e15d8cfea7f6ea1a6\" successfully"
Feb 13 15:39:42.663689 containerd[1453]: time="2025-02-13T15:39:42.663614859Z" level=info msg="StopPodSandbox for \"1366802ebaa9a45a0b2a61073284e55df3a597069127539e15d8cfea7f6ea1a6\" returns successfully"
Feb 13 15:39:42.664657 kubelet[2589]: I0213 15:39:42.664145 2589 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7b8464e6806d7515d6d160d433c08824e6e2de7760100550e98ff9d2b183f53"
Feb 13 15:39:42.665916 containerd[1453]: time="2025-02-13T15:39:42.665849949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-655c6976bf-p7qqq,Uid:95d7909a-cd44-4f88-af35-6de766421d4b,Namespace:calico-apiserver,Attempt:2,}"
Feb 13 15:39:42.666325 containerd[1453]: time="2025-02-13T15:39:42.666291734Z" level=info msg="StopPodSandbox for \"c7b8464e6806d7515d6d160d433c08824e6e2de7760100550e98ff9d2b183f53\""
Feb 13 15:39:42.666507 containerd[1453]: time="2025-02-13T15:39:42.666485265Z" level=info msg="Ensure that sandbox c7b8464e6806d7515d6d160d433c08824e6e2de7760100550e98ff9d2b183f53 in task-service has been cleanup successfully"
Feb 13 15:39:42.666733 containerd[1453]: time="2025-02-13T15:39:42.666711719Z" level=info msg="StopPodSandbox for \"323440efec752a3eb15eea898bfaffb5e63aebd2a734fd29578ba11bb1b33be8\""
Feb 13 15:39:42.666799 containerd[1453]: time="2025-02-13T15:39:42.666786643Z" level=info msg="TearDown network for sandbox \"323440efec752a3eb15eea898bfaffb5e63aebd2a734fd29578ba11bb1b33be8\" successfully"
Feb 13 15:39:42.666820 containerd[1453]: time="2025-02-13T15:39:42.666799724Z" level=info msg="StopPodSandbox for \"323440efec752a3eb15eea898bfaffb5e63aebd2a734fd29578ba11bb1b33be8\" returns successfully"
Feb 13 15:39:42.667679 containerd[1453]: time="2025-02-13T15:39:42.667269991Z" level=info msg="TearDown network for sandbox \"c7b8464e6806d7515d6d160d433c08824e6e2de7760100550e98ff9d2b183f53\" successfully"
Feb 13 15:39:42.667679 containerd[1453]: time="2025-02-13T15:39:42.667297113Z" level=info msg="StopPodSandbox for \"c7b8464e6806d7515d6d160d433c08824e6e2de7760100550e98ff9d2b183f53\" returns successfully"
Feb 13 15:39:42.667679 containerd[1453]: time="2025-02-13T15:39:42.667559048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9jl5n,Uid:0c3e32e2-3a7c-428a-a18f-8761ef2b92d8,Namespace:calico-system,Attempt:2,}"
Feb 13 15:39:42.669156 containerd[1453]: time="2025-02-13T15:39:42.668897846Z" level=info msg="StopPodSandbox for \"04e549bcbe5ef51d7da81536fa57c286356e230b0a4a8de09f3b8b24843dcce6\""
Feb 13 15:39:42.669237 containerd[1453]: time="2025-02-13T15:39:42.669218904Z" level=info msg="TearDown network for sandbox \"04e549bcbe5ef51d7da81536fa57c286356e230b0a4a8de09f3b8b24843dcce6\" successfully"
Feb 13 15:39:42.669237 containerd[1453]: time="2025-02-13T15:39:42.669235985Z" level=info msg="StopPodSandbox for \"04e549bcbe5ef51d7da81536fa57c286356e230b0a4a8de09f3b8b24843dcce6\" returns successfully"
Feb 13 15:39:42.670480 containerd[1453]: time="2025-02-13T15:39:42.670336009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67d55cd4f9-c8fqd,Uid:70d03d25-2cd5-469b-b092-195e4bf21efe,Namespace:calico-system,Attempt:2,}"
Feb 13 15:39:42.905840 containerd[1453]: time="2025-02-13T15:39:42.905681960Z" level=error msg="Failed to destroy network for sandbox \"396a2480d2093d2717a34e8bc7fb57dabb533a866511490272198d8e3429975d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:39:42.907229 containerd[1453]: time="2025-02-13T15:39:42.907178127Z" level=error msg="encountered an error cleaning up failed sandbox \"396a2480d2093d2717a34e8bc7fb57dabb533a866511490272198d8e3429975d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:39:42.907343 containerd[1453]: time="2025-02-13T15:39:42.907263292Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-m78g5,Uid:74996f45-87e3-49ee-bffd-dfcfa7bb4a84,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"396a2480d2093d2717a34e8bc7fb57dabb533a866511490272198d8e3429975d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:39:42.907966 kubelet[2589]: E0213 15:39:42.907574 2589 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"396a2480d2093d2717a34e8bc7fb57dabb533a866511490272198d8e3429975d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:39:42.907966 kubelet[2589]: E0213 15:39:42.907638 2589 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"396a2480d2093d2717a34e8bc7fb57dabb533a866511490272198d8e3429975d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-m78g5"
Feb 13 15:39:42.907966 kubelet[2589]: E0213 15:39:42.907659 2589 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"396a2480d2093d2717a34e8bc7fb57dabb533a866511490272198d8e3429975d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-m78g5"
Feb 13 15:39:42.908091 kubelet[2589]: E0213 15:39:42.907720 2589 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-m78g5_kube-system(74996f45-87e3-49ee-bffd-dfcfa7bb4a84)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-m78g5_kube-system(74996f45-87e3-49ee-bffd-dfcfa7bb4a84)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"396a2480d2093d2717a34e8bc7fb57dabb533a866511490272198d8e3429975d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-m78g5" podUID="74996f45-87e3-49ee-bffd-dfcfa7bb4a84"
Feb 13 15:39:42.937744 containerd[1453]: time="2025-02-13T15:39:42.937676259Z" level=error msg="Failed to destroy network for sandbox \"008c18afaab252b50ccfad53679086b78620acbae6366a596e3e004153702937\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:39:42.938477 containerd[1453]: time="2025-02-13T15:39:42.938347698Z" level=error msg="encountered an error cleaning up failed sandbox \"008c18afaab252b50ccfad53679086b78620acbae6366a596e3e004153702937\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:39:42.938477 containerd[1453]: time="2025-02-13T15:39:42.938413301Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-655c6976bf-dltfc,Uid:043eaf10-8df2-4749-97a8-7923e4159aba,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"008c18afaab252b50ccfad53679086b78620acbae6366a596e3e004153702937\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:39:42.939556 kubelet[2589]: E0213 15:39:42.939049 2589 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"008c18afaab252b50ccfad53679086b78620acbae6366a596e3e004153702937\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:39:42.939556 kubelet[2589]: E0213 15:39:42.939107 2589 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"008c18afaab252b50ccfad53679086b78620acbae6366a596e3e004153702937\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-655c6976bf-dltfc"
Feb 13 15:39:42.939556 kubelet[2589]: E0213 15:39:42.939131 2589 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"008c18afaab252b50ccfad53679086b78620acbae6366a596e3e004153702937\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-655c6976bf-dltfc"
Feb 13 15:39:42.939727 kubelet[2589]: E0213 15:39:42.939201 2589 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-655c6976bf-dltfc_calico-apiserver(043eaf10-8df2-4749-97a8-7923e4159aba)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-655c6976bf-dltfc_calico-apiserver(043eaf10-8df2-4749-97a8-7923e4159aba)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"008c18afaab252b50ccfad53679086b78620acbae6366a596e3e004153702937\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-655c6976bf-dltfc" podUID="043eaf10-8df2-4749-97a8-7923e4159aba"
Feb 13 15:39:42.946625 containerd[1453]: time="2025-02-13T15:39:42.946573975Z" level=error msg="Failed to destroy network for sandbox \"ad5a92ae00ae485f6dedd5b8be0955d3e9faa55341d054bfa14ae18b47ab5efd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:39:42.947080 containerd[1453]: time="2025-02-13T15:39:42.946996880Z" level=error msg="encountered an error cleaning up failed sandbox \"ad5a92ae00ae485f6dedd5b8be0955d3e9faa55341d054bfa14ae18b47ab5efd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:39:42.947080 containerd[1453]: time="2025-02-13T15:39:42.947058524Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9jl5n,Uid:0c3e32e2-3a7c-428a-a18f-8761ef2b92d8,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"ad5a92ae00ae485f6dedd5b8be0955d3e9faa55341d054bfa14ae18b47ab5efd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:39:42.947712 kubelet[2589]: E0213 15:39:42.947548 2589 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad5a92ae00ae485f6dedd5b8be0955d3e9faa55341d054bfa14ae18b47ab5efd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:39:42.947712 kubelet[2589]: E0213 15:39:42.947606 2589 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad5a92ae00ae485f6dedd5b8be0955d3e9faa55341d054bfa14ae18b47ab5efd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9jl5n"
Feb 13 15:39:42.947712 kubelet[2589]: E0213 15:39:42.947627 2589 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad5a92ae00ae485f6dedd5b8be0955d3e9faa55341d054bfa14ae18b47ab5efd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9jl5n"
Feb 13 15:39:42.947843 kubelet[2589]: E0213 15:39:42.947684 2589 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9jl5n_calico-system(0c3e32e2-3a7c-428a-a18f-8761ef2b92d8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9jl5n_calico-system(0c3e32e2-3a7c-428a-a18f-8761ef2b92d8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ad5a92ae00ae485f6dedd5b8be0955d3e9faa55341d054bfa14ae18b47ab5efd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9jl5n" podUID="0c3e32e2-3a7c-428a-a18f-8761ef2b92d8"
Feb 13 15:39:42.948503 containerd[1453]: time="2025-02-13T15:39:42.948385241Z" level=error msg="Failed to destroy network for sandbox \"7ea59ef2557339d5092fba62282ec76fa969b68ad495c317bb92c97891321274\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:39:42.948865 containerd[1453]: time="2025-02-13T15:39:42.948793944Z" level=error msg="encountered an error cleaning up failed sandbox \"7ea59ef2557339d5092fba62282ec76fa969b68ad495c317bb92c97891321274\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:39:42.948982 containerd[1453]: time="2025-02-13T15:39:42.948960114Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-655c6976bf-p7qqq,Uid:95d7909a-cd44-4f88-af35-6de766421d4b,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"7ea59ef2557339d5092fba62282ec76fa969b68ad495c317bb92c97891321274\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:39:42.949489 kubelet[2589]: E0213 15:39:42.949326 2589 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ea59ef2557339d5092fba62282ec76fa969b68ad495c317bb92c97891321274\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:39:42.949489 kubelet[2589]: E0213 15:39:42.949377 2589 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ea59ef2557339d5092fba62282ec76fa969b68ad495c317bb92c97891321274\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-655c6976bf-p7qqq"
Feb 13 15:39:42.949489 kubelet[2589]: E0213 15:39:42.949403 2589 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ea59ef2557339d5092fba62282ec76fa969b68ad495c317bb92c97891321274\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-655c6976bf-p7qqq"
Feb 13 15:39:42.949610 kubelet[2589]: E0213 15:39:42.949461 2589 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-655c6976bf-p7qqq_calico-apiserver(95d7909a-cd44-4f88-af35-6de766421d4b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-655c6976bf-p7qqq_calico-apiserver(95d7909a-cd44-4f88-af35-6de766421d4b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7ea59ef2557339d5092fba62282ec76fa969b68ad495c317bb92c97891321274\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-655c6976bf-p7qqq" podUID="95d7909a-cd44-4f88-af35-6de766421d4b"
Feb 13 15:39:42.955658 containerd[1453]: time="2025-02-13T15:39:42.955615861Z" level=error msg="Failed to destroy network for sandbox \"5b053525623b9497da7e1b2e770d86673655bd7d93820efc534aa58222203703\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:39:42.955958 containerd[1453]: time="2025-02-13T15:39:42.955931039Z" level=error msg="encountered an error cleaning up failed sandbox \"5b053525623b9497da7e1b2e770d86673655bd7d93820efc534aa58222203703\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:39:42.956016 containerd[1453]: time="2025-02-13T15:39:42.955998723Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-tzqqh,Uid:22169313-af53-4d8b-b855-dc02e6d1e640,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"5b053525623b9497da7e1b2e770d86673655bd7d93820efc534aa58222203703\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:39:42.956392 kubelet[2589]: E0213 15:39:42.956235 2589 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b053525623b9497da7e1b2e770d86673655bd7d93820efc534aa58222203703\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:39:42.956392 kubelet[2589]: E0213 15:39:42.956301 2589 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b053525623b9497da7e1b2e770d86673655bd7d93820efc534aa58222203703\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-tzqqh"
Feb 13 15:39:42.956392 kubelet[2589]: E0213 15:39:42.956319 2589 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b053525623b9497da7e1b2e770d86673655bd7d93820efc534aa58222203703\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-tzqqh"
Feb 13 15:39:42.956534 kubelet[2589]: E0213 15:39:42.956365 2589 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-tzqqh_kube-system(22169313-af53-4d8b-b855-dc02e6d1e640)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-tzqqh_kube-system(22169313-af53-4d8b-b855-dc02e6d1e640)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5b053525623b9497da7e1b2e770d86673655bd7d93820efc534aa58222203703\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-tzqqh" podUID="22169313-af53-4d8b-b855-dc02e6d1e640"
Feb 13 15:39:42.973841 containerd[1453]: time="2025-02-13T15:39:42.973782236Z" level=error msg="Failed to destroy network for sandbox \"ee6732b5b77113a6205089d42119dad4c5eee465a9929a1fc3fc3e114a8fca23\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:39:42.974175 containerd[1453]: time="2025-02-13T15:39:42.974149097Z" level=error msg="encountered an error cleaning up failed sandbox \"ee6732b5b77113a6205089d42119dad4c5eee465a9929a1fc3fc3e114a8fca23\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:39:42.974247 containerd[1453]: time="2025-02-13T15:39:42.974221621Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67d55cd4f9-c8fqd,Uid:70d03d25-2cd5-469b-b092-195e4bf21efe,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"ee6732b5b77113a6205089d42119dad4c5eee465a9929a1fc3fc3e114a8fca23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:39:42.974938 kubelet[2589]: E0213 15:39:42.974912 2589 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee6732b5b77113a6205089d42119dad4c5eee465a9929a1fc3fc3e114a8fca23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:39:42.974991 kubelet[2589]: E0213 15:39:42.974972 2589 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee6732b5b77113a6205089d42119dad4c5eee465a9929a1fc3fc3e114a8fca23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-67d55cd4f9-c8fqd"
Feb 13 15:39:42.975042 kubelet[2589]: E0213 15:39:42.974993 2589 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee6732b5b77113a6205089d42119dad4c5eee465a9929a1fc3fc3e114a8fca23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-67d55cd4f9-c8fqd"
Feb 13 15:39:42.975072 kubelet[2589]: E0213 15:39:42.975052 2589 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-67d55cd4f9-c8fqd_calico-system(70d03d25-2cd5-469b-b092-195e4bf21efe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-67d55cd4f9-c8fqd_calico-system(70d03d25-2cd5-469b-b092-195e4bf21efe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ee6732b5b77113a6205089d42119dad4c5eee465a9929a1fc3fc3e114a8fca23\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-67d55cd4f9-c8fqd" podUID="70d03d25-2cd5-469b-b092-195e4bf21efe"
Feb 13 15:39:43.275478 systemd[1]: run-netns-cni\x2d0a84d460\x2d4809\x2d204a\x2d4d87\x2dc6d15ce4a603.mount: Deactivated successfully.
Feb 13 15:39:43.275562 systemd[1]: run-netns-cni\x2d95e8af24\x2df7d9\x2d2aa9\x2d00a2\x2d80951a1cb6f5.mount: Deactivated successfully.
Feb 13 15:39:43.668031 kubelet[2589]: I0213 15:39:43.668000 2589 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="008c18afaab252b50ccfad53679086b78620acbae6366a596e3e004153702937"
Feb 13 15:39:43.670116 containerd[1453]: time="2025-02-13T15:39:43.668684819Z" level=info msg="StopPodSandbox for \"008c18afaab252b50ccfad53679086b78620acbae6366a596e3e004153702937\""
Feb 13 15:39:43.670116 containerd[1453]: time="2025-02-13T15:39:43.669919489Z" level=info msg="Ensure that sandbox 008c18afaab252b50ccfad53679086b78620acbae6366a596e3e004153702937 in task-service has been cleanup successfully"
Feb 13 15:39:43.672759 kubelet[2589]: I0213 15:39:43.670744 2589 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad5a92ae00ae485f6dedd5b8be0955d3e9faa55341d054bfa14ae18b47ab5efd"
Feb 13 15:39:43.672848 containerd[1453]: time="2025-02-13T15:39:43.671280645Z" level=info msg="StopPodSandbox for \"ad5a92ae00ae485f6dedd5b8be0955d3e9faa55341d054bfa14ae18b47ab5efd\""
Feb 13 15:39:43.672848 containerd[1453]: time="2025-02-13T15:39:43.671422093Z" level=info msg="Ensure that sandbox ad5a92ae00ae485f6dedd5b8be0955d3e9faa55341d054bfa14ae18b47ab5efd in task-service has been cleanup successfully"
Feb 13 15:39:43.672848 containerd[1453]: time="2025-02-13T15:39:43.671436254Z" level=info msg="TearDown network for sandbox \"008c18afaab252b50ccfad53679086b78620acbae6366a596e3e004153702937\" successfully"
Feb 13 15:39:43.672848 containerd[1453]: time="2025-02-13T15:39:43.672712566Z" level=info msg="StopPodSandbox for \"008c18afaab252b50ccfad53679086b78620acbae6366a596e3e004153702937\" returns successfully"
Feb 13 15:39:43.672848 containerd[1453]: time="2025-02-13T15:39:43.672005086Z" level=info msg="TearDown network for sandbox \"ad5a92ae00ae485f6dedd5b8be0955d3e9faa55341d054bfa14ae18b47ab5efd\" successfully"
Feb 13 15:39:43.672848 containerd[1453]: time="2025-02-13T15:39:43.672772610Z" level=info msg="StopPodSandbox for \"ad5a92ae00ae485f6dedd5b8be0955d3e9faa55341d054bfa14ae18b47ab5efd\" returns successfully"
Feb 13 15:39:43.674308 containerd[1453]: time="2025-02-13T15:39:43.673763426Z" level=info msg="StopPodSandbox for \"2782e5c6315792cdee1e8b9d3b9b6dea34d36948f0bc75d3eaf32e17a3b39779\""
Feb 13 15:39:43.674308 containerd[1453]: time="2025-02-13T15:39:43.673846390Z" level=info msg="TearDown network for sandbox \"2782e5c6315792cdee1e8b9d3b9b6dea34d36948f0bc75d3eaf32e17a3b39779\" successfully"
Feb 13 15:39:43.674308 containerd[1453]: time="2025-02-13T15:39:43.673856111Z" level=info msg="StopPodSandbox for \"2782e5c6315792cdee1e8b9d3b9b6dea34d36948f0bc75d3eaf32e17a3b39779\" returns successfully"
Feb 13 15:39:43.674308 containerd[1453]: time="2025-02-13T15:39:43.673992799Z" level=info msg="StopPodSandbox for \"7ea59ef2557339d5092fba62282ec76fa969b68ad495c317bb92c97891321274\""
Feb 13 15:39:43.674308 containerd[1453]: time="2025-02-13T15:39:43.674111405Z" level=info msg="Ensure that sandbox 7ea59ef2557339d5092fba62282ec76fa969b68ad495c317bb92c97891321274 in task-service has been cleanup successfully"
Feb 13 15:39:43.675374 kubelet[2589]: I0213 15:39:43.673124 2589 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ea59ef2557339d5092fba62282ec76fa969b68ad495c317bb92c97891321274"
Feb 13 15:39:43.675499 containerd[1453]: time="2025-02-13T15:39:43.674632595Z" level=info msg="StopPodSandbox for \"fc9238e716f505d3af42ce11bca5e656f52405b452391535ed40e201ac9642d7\""
Feb 13 15:39:43.675499 containerd[1453]: time="2025-02-13T15:39:43.674658756Z" level=info msg="StopPodSandbox for \"50cf6045c8a1ddf321bab1e6e9a3f1fc9f3a297088fc52a6e468af940cbe1c10\""
Feb 13 15:39:43.675499 containerd[1453]: time="2025-02-13T15:39:43.674711679Z" level=info msg="TearDown network for sandbox \"fc9238e716f505d3af42ce11bca5e656f52405b452391535ed40e201ac9642d7\" successfully"
Feb 13 15:39:43.675499 containerd[1453]: time="2025-02-13T15:39:43.674722560Z" level=info msg="StopPodSandbox for \"fc9238e716f505d3af42ce11bca5e656f52405b452391535ed40e201ac9642d7\" returns successfully"
Feb 13 15:39:43.675499 containerd[1453]: time="2025-02-13T15:39:43.674754962Z" level=info msg="TearDown network for sandbox \"50cf6045c8a1ddf321bab1e6e9a3f1fc9f3a297088fc52a6e468af940cbe1c10\" successfully"
Feb 13 15:39:43.675499 containerd[1453]: time="2025-02-13T15:39:43.674769042Z" level=info msg="StopPodSandbox for \"50cf6045c8a1ddf321bab1e6e9a3f1fc9f3a297088fc52a6e468af940cbe1c10\" returns successfully"
Feb 13 15:39:43.675499 containerd[1453]: time="2025-02-13T15:39:43.675241189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-655c6976bf-dltfc,Uid:043eaf10-8df2-4749-97a8-7923e4159aba,Namespace:calico-apiserver,Attempt:3,}"
Feb 13 15:39:43.677330 containerd[1453]: time="2025-02-13T15:39:43.676206123Z" level=info msg="TearDown network for sandbox \"7ea59ef2557339d5092fba62282ec76fa969b68ad495c317bb92c97891321274\" successfully"
Feb 13 15:39:43.677330 containerd[1453]: time="2025-02-13T15:39:43.676237765Z" level=info msg="StopPodSandbox for \"7ea59ef2557339d5092fba62282ec76fa969b68ad495c317bb92c97891321274\" returns successfully"
Feb 13 15:39:43.675988 systemd[1]: run-netns-cni\x2db304e006\x2ddf51\x2da819\x2df52a\x2da57c8ecda853.mount: Deactivated successfully.
Feb 13 15:39:43.676076 systemd[1]: run-netns-cni\x2ddde26a13\x2d8a04\x2d3093\x2d3a21\x2d44e7b046893a.mount: Deactivated successfully.
Feb 13 15:39:43.677894 containerd[1453]: time="2025-02-13T15:39:43.677855456Z" level=info msg="StopPodSandbox for \"323440efec752a3eb15eea898bfaffb5e63aebd2a734fd29578ba11bb1b33be8\""
Feb 13 15:39:43.677977 containerd[1453]: time="2025-02-13T15:39:43.677961182Z" level=info msg="TearDown network for sandbox \"323440efec752a3eb15eea898bfaffb5e63aebd2a734fd29578ba11bb1b33be8\" successfully"
Feb 13 15:39:43.678015 containerd[1453]: time="2025-02-13T15:39:43.677975023Z" level=info msg="StopPodSandbox for \"323440efec752a3eb15eea898bfaffb5e63aebd2a734fd29578ba11bb1b33be8\" returns successfully"
Feb 13 15:39:43.678903 containerd[1453]: time="2025-02-13T15:39:43.678873714Z" level=info msg="StopPodSandbox for \"5c6df23e15af22b2b3475eb65ba94a3e2b7eca755b23c91b1d64f81c31949671\""
Feb 13 15:39:43.678984 containerd[1453]: time="2025-02-13T15:39:43.678968239Z" level=info msg="TearDown network for sandbox \"5c6df23e15af22b2b3475eb65ba94a3e2b7eca755b23c91b1d64f81c31949671\" successfully"
Feb 13 15:39:43.679009 containerd[1453]: time="2025-02-13T15:39:43.678983120Z" level=info msg="StopPodSandbox for \"5c6df23e15af22b2b3475eb65ba94a3e2b7eca755b23c91b1d64f81c31949671\" returns successfully"
Feb 13 15:39:43.679141 containerd[1453]: time="2025-02-13T15:39:43.679122648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9jl5n,Uid:0c3e32e2-3a7c-428a-a18f-8761ef2b92d8,Namespace:calico-system,Attempt:3,}"
Feb 13 15:39:43.679899 kubelet[2589]: I0213 15:39:43.679877 2589 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee6732b5b77113a6205089d42119dad4c5eee465a9929a1fc3fc3e114a8fca23"
Feb 13 15:39:43.680298 containerd[1453]: time="2025-02-13T15:39:43.680270113Z" level=info msg="StopPodSandbox for \"1366802ebaa9a45a0b2a61073284e55df3a597069127539e15d8cfea7f6ea1a6\""
Feb 13 15:39:43.680370 containerd[1453]: time="2025-02-13T15:39:43.680356438Z" level=info msg="TearDown network for sandbox \"1366802ebaa9a45a0b2a61073284e55df3a597069127539e15d8cfea7f6ea1a6\" successfully"
Feb 13 15:39:43.680395 containerd[1453]: time="2025-02-13T15:39:43.680370238Z" level=info msg="StopPodSandbox for \"1366802ebaa9a45a0b2a61073284e55df3a597069127539e15d8cfea7f6ea1a6\" returns successfully"
Feb 13 15:39:43.681214 systemd[1]: run-netns-cni\x2df1728c40\x2dad8a\x2dfc89\x2d9bcb\x2d10761bacebd6.mount: Deactivated successfully.
Feb 13 15:39:43.682684 containerd[1453]: time="2025-02-13T15:39:43.682655367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-655c6976bf-p7qqq,Uid:95d7909a-cd44-4f88-af35-6de766421d4b,Namespace:calico-apiserver,Attempt:3,}"
Feb 13 15:39:43.682962 containerd[1453]: time="2025-02-13T15:39:43.682819417Z" level=info msg="StopPodSandbox for \"ee6732b5b77113a6205089d42119dad4c5eee465a9929a1fc3fc3e114a8fca23\""
Feb 13 15:39:43.683023 containerd[1453]: time="2025-02-13T15:39:43.682977786Z" level=info msg="Ensure that sandbox ee6732b5b77113a6205089d42119dad4c5eee465a9929a1fc3fc3e114a8fca23 in task-service has been cleanup successfully"
Feb 13 15:39:43.683588 containerd[1453]: time="2025-02-13T15:39:43.683552058Z" level=info msg="TearDown network for sandbox \"ee6732b5b77113a6205089d42119dad4c5eee465a9929a1fc3fc3e114a8fca23\" successfully"
Feb 13 15:39:43.683588 containerd[1453]: time="2025-02-13T15:39:43.683577459Z" level=info msg="StopPodSandbox for \"ee6732b5b77113a6205089d42119dad4c5eee465a9929a1fc3fc3e114a8fca23\" returns successfully"
Feb 13 15:39:43.686240 containerd[1453]: time="2025-02-13T15:39:43.686191007Z" level=info msg="StopPodSandbox for \"c7b8464e6806d7515d6d160d433c08824e6e2de7760100550e98ff9d2b183f53\""
Feb 13 15:39:43.686468 containerd[1453]: time="2025-02-13T15:39:43.686374017Z" level=info msg="TearDown network for sandbox \"c7b8464e6806d7515d6d160d433c08824e6e2de7760100550e98ff9d2b183f53\" successfully"
Feb 13 15:39:43.686468 containerd[1453]: time="2025-02-13T15:39:43.686394178Z" level=info msg="StopPodSandbox for \"c7b8464e6806d7515d6d160d433c08824e6e2de7760100550e98ff9d2b183f53\" returns successfully"
Feb 13 15:39:43.686951 containerd[1453]: time="2025-02-13T15:39:43.686919648Z" level=info msg="StopPodSandbox for \"04e549bcbe5ef51d7da81536fa57c286356e230b0a4a8de09f3b8b24843dcce6\""
Feb 13 15:39:43.686996 systemd[1]: run-netns-cni\x2d7589b9e4\x2d1985\x2d120f\x2d49f6\x2d53d153885415.mount: Deactivated successfully.
Feb 13 15:39:43.687583 containerd[1453]: time="2025-02-13T15:39:43.687090978Z" level=info msg="TearDown network for sandbox \"04e549bcbe5ef51d7da81536fa57c286356e230b0a4a8de09f3b8b24843dcce6\" successfully"
Feb 13 15:39:43.687583 containerd[1453]: time="2025-02-13T15:39:43.687107779Z" level=info msg="StopPodSandbox for \"04e549bcbe5ef51d7da81536fa57c286356e230b0a4a8de09f3b8b24843dcce6\" returns successfully"
Feb 13 15:39:43.688076 kubelet[2589]: I0213 15:39:43.687505 2589 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b053525623b9497da7e1b2e770d86673655bd7d93820efc534aa58222203703"
Feb 13 15:39:43.689749 containerd[1453]: time="2025-02-13T15:39:43.688505338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67d55cd4f9-c8fqd,Uid:70d03d25-2cd5-469b-b092-195e4bf21efe,Namespace:calico-system,Attempt:3,}"
Feb 13 15:39:43.689749 containerd[1453]: time="2025-02-13T15:39:43.688647866Z" level=info msg="StopPodSandbox for \"5b053525623b9497da7e1b2e770d86673655bd7d93820efc534aa58222203703\""
Feb 13 15:39:43.689749 containerd[1453]: time="2025-02-13T15:39:43.688781033Z" level=info msg="Ensure that sandbox 5b053525623b9497da7e1b2e770d86673655bd7d93820efc534aa58222203703 in task-service has been cleanup successfully"
Feb 13 15:39:43.689962 containerd[1453]: time="2025-02-13T15:39:43.689937338Z" level=info msg="TearDown network for sandbox \"5b053525623b9497da7e1b2e770d86673655bd7d93820efc534aa58222203703\" successfully"
Feb 13 15:39:43.690022 containerd[1453]: time="2025-02-13T15:39:43.690008942Z" level=info msg="StopPodSandbox for \"5b053525623b9497da7e1b2e770d86673655bd7d93820efc534aa58222203703\" returns successfully"
Feb 13 15:39:43.690832 containerd[1453]: time="2025-02-13T15:39:43.690803467Z" level=info msg="StopPodSandbox for \"5877426208bb0ec6f3b0bd6273d2a61f178cd37093bfdab85cd100b691fb5424\""
Feb 13 15:39:43.691042 containerd[1453]: time="2025-02-13T15:39:43.691025000Z" level=info msg="TearDown network for sandbox \"5877426208bb0ec6f3b0bd6273d2a61f178cd37093bfdab85cd100b691fb5424\" successfully"
Feb 13 15:39:43.691106 kubelet[2589]: I0213 15:39:43.691079 2589 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="396a2480d2093d2717a34e8bc7fb57dabb533a866511490272198d8e3429975d"
Feb 13 15:39:43.691166 containerd[1453]: time="2025-02-13T15:39:43.691088643Z" level=info msg="StopPodSandbox for \"5877426208bb0ec6f3b0bd6273d2a61f178cd37093bfdab85cd100b691fb5424\" returns successfully"
Feb 13 15:39:43.691468 systemd[1]: run-netns-cni\x2d297b18d3\x2dd62c\x2d8069\x2da10c\x2d6b0283947d12.mount: Deactivated successfully.
Feb 13 15:39:43.691868 containerd[1453]: time="2025-02-13T15:39:43.691841246Z" level=info msg="StopPodSandbox for \"396a2480d2093d2717a34e8bc7fb57dabb533a866511490272198d8e3429975d\""
Feb 13 15:39:43.692143 containerd[1453]: time="2025-02-13T15:39:43.692121542Z" level=info msg="Ensure that sandbox 396a2480d2093d2717a34e8bc7fb57dabb533a866511490272198d8e3429975d in task-service has been cleanup successfully"
Feb 13 15:39:43.692492 containerd[1453]: time="2025-02-13T15:39:43.692462121Z" level=info msg="StopPodSandbox for \"cda662325bbcccb4b3b73cb9c1a4ba575d4900e6a01847f7536483bed3456eb0\""
Feb 13 15:39:43.692567 containerd[1453]: time="2025-02-13T15:39:43.692549406Z" level=info msg="TearDown network for sandbox \"cda662325bbcccb4b3b73cb9c1a4ba575d4900e6a01847f7536483bed3456eb0\" successfully"
Feb 13 15:39:43.692567 containerd[1453]: time="2025-02-13T15:39:43.692565527Z" level=info msg="StopPodSandbox for \"cda662325bbcccb4b3b73cb9c1a4ba575d4900e6a01847f7536483bed3456eb0\" returns successfully"
Feb 13 15:39:43.693267 containerd[1453]: time="2025-02-13T15:39:43.693038633Z" level=info msg="TearDown network for sandbox \"396a2480d2093d2717a34e8bc7fb57dabb533a866511490272198d8e3429975d\" successfully"
Feb 13 15:39:43.693267 containerd[1453]: time="2025-02-13T15:39:43.693063675Z" level=info msg="StopPodSandbox for \"396a2480d2093d2717a34e8bc7fb57dabb533a866511490272198d8e3429975d\" returns successfully"
Feb 13 15:39:43.693675 kubelet[2589]: E0213 15:39:43.693637 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:39:43.693875 containerd[1453]: time="2025-02-13T15:39:43.693850679Z" level=info msg="StopPodSandbox for \"49e61bdc9174d5527e1c7974cc761bf4ead3dd292c49b47e6a610d38ca0b74ec\""
Feb 13 15:39:43.694018 containerd[1453]: time="2025-02-13T15:39:43.694002688Z" level=info msg="TearDown network for sandbox \"49e61bdc9174d5527e1c7974cc761bf4ead3dd292c49b47e6a610d38ca0b74ec\" successfully"
Feb 13 15:39:43.694114 containerd[1453]: time="2025-02-13T15:39:43.694097933Z" level=info msg="StopPodSandbox for \"49e61bdc9174d5527e1c7974cc761bf4ead3dd292c49b47e6a610d38ca0b74ec\" returns successfully"
Feb 13 15:39:43.694776 containerd[1453]: time="2025-02-13T15:39:43.694667605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-tzqqh,Uid:22169313-af53-4d8b-b855-dc02e6d1e640,Namespace:kube-system,Attempt:3,}"
Feb 13 15:39:43.694968 containerd[1453]: time="2025-02-13T15:39:43.694942021Z" level=info msg="StopPodSandbox for \"bbe0e585cf74fab1708cc4fe366a97dc7b448d8d7a1e597df3010cf40cbe9ec3\""
Feb 13 15:39:43.695119 containerd[1453]: time="2025-02-13T15:39:43.695101110Z" level=info msg="TearDown network for sandbox \"bbe0e585cf74fab1708cc4fe366a97dc7b448d8d7a1e597df3010cf40cbe9ec3\" successfully"
Feb 13 15:39:43.695179 containerd[1453]: time="2025-02-13T15:39:43.695166153Z" level=info msg="StopPodSandbox for \"bbe0e585cf74fab1708cc4fe366a97dc7b448d8d7a1e597df3010cf40cbe9ec3\" returns successfully"
Feb 13 15:39:43.695783 kubelet[2589]: E0213 15:39:43.695753 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:39:43.696044 containerd[1453]: time="2025-02-13T15:39:43.696015841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-m78g5,Uid:74996f45-87e3-49ee-bffd-dfcfa7bb4a84,Namespace:kube-system,Attempt:3,}"
Feb 13 15:39:43.835910 containerd[1453]: time="2025-02-13T15:39:43.835780969Z" level=error msg="Failed to destroy network for sandbox \"c0109edb01872ea6ddbb1270e554366cbd4b5a05c3fb263f4739b76a9f131107\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:39:43.836918 containerd[1453]: time="2025-02-13T15:39:43.836884791Z" level=error msg="encountered an error cleaning up failed sandbox \"c0109edb01872ea6ddbb1270e554366cbd4b5a05c3fb263f4739b76a9f131107\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:39:43.837459 containerd[1453]: time="2025-02-13T15:39:43.837375179Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9jl5n,Uid:0c3e32e2-3a7c-428a-a18f-8761ef2b92d8,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"c0109edb01872ea6ddbb1270e554366cbd4b5a05c3fb263f4739b76a9f131107\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:39:43.838404 kubelet[2589]: E0213 15:39:43.838373 2589 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0109edb01872ea6ddbb1270e554366cbd4b5a05c3fb263f4739b76a9f131107\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:39:43.838539 kubelet[2589]: E0213 15:39:43.838488 2589 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0109edb01872ea6ddbb1270e554366cbd4b5a05c3fb263f4739b76a9f131107\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9jl5n"
Feb 13 15:39:43.838539 kubelet[2589]: E0213 15:39:43.838511 2589 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0109edb01872ea6ddbb1270e554366cbd4b5a05c3fb263f4739b76a9f131107\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9jl5n"
Feb 13 15:39:43.838608 kubelet[2589]: E0213 15:39:43.838591 2589 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9jl5n_calico-system(0c3e32e2-3a7c-428a-a18f-8761ef2b92d8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9jl5n_calico-system(0c3e32e2-3a7c-428a-a18f-8761ef2b92d8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c0109edb01872ea6ddbb1270e554366cbd4b5a05c3fb263f4739b76a9f131107\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9jl5n" podUID="0c3e32e2-3a7c-428a-a18f-8761ef2b92d8"
Feb 13 15:39:43.842952 containerd[1453]: time="2025-02-13T15:39:43.842573032Z" level=error msg="Failed to destroy network for sandbox \"d86444dd64a16082496dff8bcc4382e23ca50d04c358f741bc26b8a391d6c550\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:39:43.843857 containerd[1453]: time="2025-02-13T15:39:43.843408640Z" level=error msg="encountered an error cleaning up failed sandbox \"d86444dd64a16082496dff8bcc4382e23ca50d04c358f741bc26b8a391d6c550\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:39:43.843857 containerd[1453]: time="2025-02-13T15:39:43.843515766Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-655c6976bf-dltfc,Uid:043eaf10-8df2-4749-97a8-7923e4159aba,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"d86444dd64a16082496dff8bcc4382e23ca50d04c358f741bc26b8a391d6c550\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:39:43.844251 kubelet[2589]: E0213 15:39:43.844139 2589 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d86444dd64a16082496dff8bcc4382e23ca50d04c358f741bc26b8a391d6c550\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:39:43.844251 kubelet[2589]: E0213 15:39:43.844192 2589 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d86444dd64a16082496dff8bcc4382e23ca50d04c358f741bc26b8a391d6c550\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-655c6976bf-dltfc"
Feb 13 15:39:43.844251 kubelet[2589]: E0213 15:39:43.844212 2589 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d86444dd64a16082496dff8bcc4382e23ca50d04c358f741bc26b8a391d6c550\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-655c6976bf-dltfc"
Feb 13 15:39:43.844373 kubelet[2589]: E0213 15:39:43.844269 2589 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-655c6976bf-dltfc_calico-apiserver(043eaf10-8df2-4749-97a8-7923e4159aba)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-655c6976bf-dltfc_calico-apiserver(043eaf10-8df2-4749-97a8-7923e4159aba)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d86444dd64a16082496dff8bcc4382e23ca50d04c358f741bc26b8a391d6c550\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-655c6976bf-dltfc" podUID="043eaf10-8df2-4749-97a8-7923e4159aba"
Feb 13 15:39:43.847726 containerd[1453]: time="2025-02-13T15:39:43.847678681Z" level=error msg="Failed to destroy network for sandbox \"83168c2b6aaee4a8905d06ddc03e782efd545d713d6d0b790b340a1e7610b3b8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:39:43.848082 containerd[1453]: time="2025-02-13T15:39:43.848051182Z" level=error msg="encountered an error cleaning up failed sandbox \"83168c2b6aaee4a8905d06ddc03e782efd545d713d6d0b790b340a1e7610b3b8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:39:43.848138 containerd[1453]: time="2025-02-13T15:39:43.848121186Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67d55cd4f9-c8fqd,Uid:70d03d25-2cd5-469b-b092-195e4bf21efe,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"83168c2b6aaee4a8905d06ddc03e782efd545d713d6d0b790b340a1e7610b3b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:39:43.849438 kubelet[2589]: E0213 15:39:43.849407 2589 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83168c2b6aaee4a8905d06ddc03e782efd545d713d6d0b790b340a1e7610b3b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:39:43.849576 kubelet[2589]: E0213 15:39:43.849494 2589 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83168c2b6aaee4a8905d06ddc03e782efd545d713d6d0b790b340a1e7610b3b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-67d55cd4f9-c8fqd"
Feb 13 15:39:43.849576 kubelet[2589]: E0213 15:39:43.849519 2589 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83168c2b6aaee4a8905d06ddc03e782efd545d713d6d0b790b340a1e7610b3b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-67d55cd4f9-c8fqd"
Feb 13 15:39:43.849576 kubelet[2589]: E0213 15:39:43.849569 2589 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-67d55cd4f9-c8fqd_calico-system(70d03d25-2cd5-469b-b092-195e4bf21efe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-67d55cd4f9-c8fqd_calico-system(70d03d25-2cd5-469b-b092-195e4bf21efe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"83168c2b6aaee4a8905d06ddc03e782efd545d713d6d0b790b340a1e7610b3b8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-67d55cd4f9-c8fqd" podUID="70d03d25-2cd5-469b-b092-195e4bf21efe"
Feb 13 15:39:43.896718 containerd[1453]: time="2025-02-13T15:39:43.896674886Z" level=error msg="Failed to destroy network for sandbox \"d707feadb9268fa011363e35e4a444418fcd0f37bc28decb570859962b6aafb6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:39:43.897085 containerd[1453]: time="2025-02-13T15:39:43.897032986Z" level=error msg="encountered an error cleaning up failed sandbox \"d707feadb9268fa011363e35e4a444418fcd0f37bc28decb570859962b6aafb6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:39:43.897166 containerd[1453]: time="2025-02-13T15:39:43.897104230Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-655c6976bf-p7qqq,Uid:95d7909a-cd44-4f88-af35-6de766421d4b,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"d707feadb9268fa011363e35e4a444418fcd0f37bc28decb570859962b6aafb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:39:43.897484 kubelet[2589]: E0213 15:39:43.897313 2589 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d707feadb9268fa011363e35e4a444418fcd0f37bc28decb570859962b6aafb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:39:43.897574 kubelet[2589]: E0213 15:39:43.897503 2589 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d707feadb9268fa011363e35e4a444418fcd0f37bc28decb570859962b6aafb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-655c6976bf-p7qqq"
Feb 13 15:39:43.897574 kubelet[2589]: E0213 15:39:43.897526 2589 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d707feadb9268fa011363e35e4a444418fcd0f37bc28decb570859962b6aafb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-655c6976bf-p7qqq"
Feb 13 15:39:43.897691 kubelet[2589]: E0213 15:39:43.897585 2589 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-655c6976bf-p7qqq_calico-apiserver(95d7909a-cd44-4f88-af35-6de766421d4b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-655c6976bf-p7qqq_calico-apiserver(95d7909a-cd44-4f88-af35-6de766421d4b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d707feadb9268fa011363e35e4a444418fcd0f37bc28decb570859962b6aafb6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-655c6976bf-p7qqq" podUID="95d7909a-cd44-4f88-af35-6de766421d4b"
Feb 13 15:39:43.920295 containerd[1453]: time="2025-02-13T15:39:43.920086927Z" level=error msg="Failed to destroy network for sandbox \"757cc0d2ff1044664c04e58b47b1a5006b422f31425ed9c06bacf1bc76f6ad3e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:39:43.921340 containerd[1453]: time="2025-02-13T15:39:43.921299435Z" level=error msg="encountered an error cleaning up failed sandbox \"757cc0d2ff1044664c04e58b47b1a5006b422f31425ed9c06bacf1bc76f6ad3e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:39:43.921903 containerd[1453]: time="2025-02-13T15:39:43.921741100Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-m78g5,Uid:74996f45-87e3-49ee-bffd-dfcfa7bb4a84,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"757cc0d2ff1044664c04e58b47b1a5006b422f31425ed9c06bacf1bc76f6ad3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:39:43.922645 kubelet[2589]: E0213 15:39:43.922609 2589 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"757cc0d2ff1044664c04e58b47b1a5006b422f31425ed9c06bacf1bc76f6ad3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:39:43.922729 kubelet[2589]: E0213 15:39:43.922668 2589 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"757cc0d2ff1044664c04e58b47b1a5006b422f31425ed9c06bacf1bc76f6ad3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-m78g5"
Feb 13 15:39:43.922729 kubelet[2589]: E0213 15:39:43.922690 2589 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"757cc0d2ff1044664c04e58b47b1a5006b422f31425ed9c06bacf1bc76f6ad3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-m78g5"
Feb 13 15:39:43.922782 kubelet[2589]: E0213 15:39:43.922742 2589 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-m78g5_kube-system(74996f45-87e3-49ee-bffd-dfcfa7bb4a84)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-m78g5_kube-system(74996f45-87e3-49ee-bffd-dfcfa7bb4a84)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"757cc0d2ff1044664c04e58b47b1a5006b422f31425ed9c06bacf1bc76f6ad3e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-m78g5" podUID="74996f45-87e3-49ee-bffd-dfcfa7bb4a84"
Feb 13 15:39:43.924626 containerd[1453]: time="2025-02-13T15:39:43.924592621Z" level=error msg="Failed to destroy network for sandbox \"71de9d911f60af43c2548ea926c32ea6d591d73f40411c67079cd659cd718877\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:39:43.925206 containerd[1453]: time="2025-02-13T15:39:43.925175254Z" level=error msg="encountered an error cleaning up failed sandbox \"71de9d911f60af43c2548ea926c32ea6d591d73f40411c67079cd659cd718877\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:39:43.925365 containerd[1453]: time="2025-02-13T15:39:43.925303861Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-tzqqh,Uid:22169313-af53-4d8b-b855-dc02e6d1e640,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"71de9d911f60af43c2548ea926c32ea6d591d73f40411c67079cd659cd718877\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:39:43.925562 kubelet[2589]: E0213 15:39:43.925536 2589 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71de9d911f60af43c2548ea926c32ea6d591d73f40411c67079cd659cd718877\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:39:43.925615 kubelet[2589]: E0213 15:39:43.925585 2589 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71de9d911f60af43c2548ea926c32ea6d591d73f40411c67079cd659cd718877\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-tzqqh"
Feb 13 15:39:43.925615 kubelet[2589]: E0213 15:39:43.925604 2589 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71de9d911f60af43c2548ea926c32ea6d591d73f40411c67079cd659cd718877\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-tzqqh"
Feb 13 15:39:43.925687 kubelet[2589]: E0213 15:39:43.925661 2589 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-tzqqh_kube-system(22169313-af53-4d8b-b855-dc02e6d1e640)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-tzqqh_kube-system(22169313-af53-4d8b-b855-dc02e6d1e640)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"71de9d911f60af43c2548ea926c32ea6d591d73f40411c67079cd659cd718877\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-tzqqh" podUID="22169313-af53-4d8b-b855-dc02e6d1e640"
Feb 13 15:39:44.274497 systemd[1]: run-netns-cni\x2deddfd85d\x2de8ae\x2dbd58\x2d9f17\x2daa9d5ce7220f.mount: Deactivated successfully.
Feb 13 15:39:44.695362 kubelet[2589]: I0213 15:39:44.695334 2589 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d86444dd64a16082496dff8bcc4382e23ca50d04c358f741bc26b8a391d6c550"
Feb 13 15:39:44.700627 containerd[1453]: time="2025-02-13T15:39:44.696644755Z" level=info msg="StopPodSandbox for \"d86444dd64a16082496dff8bcc4382e23ca50d04c358f741bc26b8a391d6c550\""
Feb 13 15:39:44.700627 containerd[1453]: time="2025-02-13T15:39:44.696838286Z" level=info msg="Ensure that sandbox d86444dd64a16082496dff8bcc4382e23ca50d04c358f741bc26b8a391d6c550 in task-service has been cleanup successfully"
Feb 13 15:39:44.700627 containerd[1453]: time="2025-02-13T15:39:44.697123822Z" level=info msg="TearDown network for sandbox \"d86444dd64a16082496dff8bcc4382e23ca50d04c358f741bc26b8a391d6c550\" successfully"
Feb 13 15:39:44.700627 containerd[1453]: time="2025-02-13T15:39:44.697140703Z" level=info msg="StopPodSandbox for \"d86444dd64a16082496dff8bcc4382e23ca50d04c358f741bc26b8a391d6c550\" returns successfully"
Feb 13 15:39:44.700627 containerd[1453]: time="2025-02-13T15:39:44.697377796Z" level=info msg="StopPodSandbox for \"008c18afaab252b50ccfad53679086b78620acbae6366a596e3e004153702937\""
Feb 13 15:39:44.700627 containerd[1453]: time="2025-02-13T15:39:44.697457760Z" level=info msg="TearDown network for sandbox \"008c18afaab252b50ccfad53679086b78620acbae6366a596e3e004153702937\" successfully"
Feb 13 15:39:44.700627 containerd[1453]: time="2025-02-13T15:39:44.697469761Z" level=info msg="StopPodSandbox for \"008c18afaab252b50ccfad53679086b78620acbae6366a596e3e004153702937\" returns successfully"
Feb 13 15:39:44.700627 containerd[1453]: time="2025-02-13T15:39:44.697660691Z" level=info msg="StopPodSandbox for \"2782e5c6315792cdee1e8b9d3b9b6dea34d36948f0bc75d3eaf32e17a3b39779\""
Feb 13 15:39:44.700627 containerd[1453]: time="2025-02-13T15:39:44.697718214Z" level=info msg="TearDown network for sandbox \"2782e5c6315792cdee1e8b9d3b9b6dea34d36948f0bc75d3eaf32e17a3b39779\" successfully"
Feb 13 15:39:44.700627 containerd[1453]: time="2025-02-13T15:39:44.697729215Z" level=info msg="StopPodSandbox for \"2782e5c6315792cdee1e8b9d3b9b6dea34d36948f0bc75d3eaf32e17a3b39779\" returns successfully"
Feb 13 15:39:44.700627 containerd[1453]: time="2025-02-13T15:39:44.700354799Z" level=info msg="StopPodSandbox for \"fc9238e716f505d3af42ce11bca5e656f52405b452391535ed40e201ac9642d7\""
Feb 13 15:39:44.700160 systemd[1]: run-netns-cni\x2d465c393b\x2d9892\x2d202a\x2d4a4f\x2decce5bfa1dc9.mount: Deactivated successfully.
Feb 13 15:39:44.701314 containerd[1453]: time="2025-02-13T15:39:44.700889308Z" level=info msg="TearDown network for sandbox \"fc9238e716f505d3af42ce11bca5e656f52405b452391535ed40e201ac9642d7\" successfully"
Feb 13 15:39:44.701314 containerd[1453]: time="2025-02-13T15:39:44.700906909Z" level=info msg="StopPodSandbox for \"fc9238e716f505d3af42ce11bca5e656f52405b452391535ed40e201ac9642d7\" returns successfully"
Feb 13 15:39:44.702376 containerd[1453]: time="2025-02-13T15:39:44.702331988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-655c6976bf-dltfc,Uid:043eaf10-8df2-4749-97a8-7923e4159aba,Namespace:calico-apiserver,Attempt:4,}"
Feb 13 15:39:44.704293 kubelet[2589]: I0213 15:39:44.704242 2589 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d707feadb9268fa011363e35e4a444418fcd0f37bc28decb570859962b6aafb6"
Feb 13 15:39:44.706835 containerd[1453]: time="2025-02-13T15:39:44.706807713Z" level=info msg="StopPodSandbox for \"d707feadb9268fa011363e35e4a444418fcd0f37bc28decb570859962b6aafb6\""
Feb 13 15:39:44.706980 containerd[1453]: time="2025-02-13T15:39:44.706961282Z" level=info msg="Ensure that sandbox d707feadb9268fa011363e35e4a444418fcd0f37bc28decb570859962b6aafb6 in task-service has been cleanup successfully"
Feb 13 15:39:44.707398 kubelet[2589]: I0213 15:39:44.707380 2589 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83168c2b6aaee4a8905d06ddc03e782efd545d713d6d0b790b340a1e7610b3b8"
Feb 13 15:39:44.707885 containerd[1453]: time="2025-02-13T15:39:44.707838930Z" level=info msg="StopPodSandbox for \"83168c2b6aaee4a8905d06ddc03e782efd545d713d6d0b790b340a1e7610b3b8\""
Feb 13 15:39:44.708052 containerd[1453]: time="2025-02-13T15:39:44.708028860Z" level=info msg="Ensure that sandbox 83168c2b6aaee4a8905d06ddc03e782efd545d713d6d0b790b340a1e7610b3b8 in task-service has been cleanup successfully"
Feb 13 15:39:44.708288 containerd[1453]: time="2025-02-13T15:39:44.708255113Z" level=info msg="TearDown network for sandbox \"83168c2b6aaee4a8905d06ddc03e782efd545d713d6d0b790b340a1e7610b3b8\" successfully"
Feb 13 15:39:44.708288 containerd[1453]: time="2025-02-13T15:39:44.708273994Z" level=info msg="StopPodSandbox for \"83168c2b6aaee4a8905d06ddc03e782efd545d713d6d0b790b340a1e7610b3b8\" returns successfully"
Feb 13 15:39:44.708880 containerd[1453]: time="2025-02-13T15:39:44.708838705Z" level=info msg="StopPodSandbox for \"ee6732b5b77113a6205089d42119dad4c5eee465a9929a1fc3fc3e114a8fca23\""
Feb 13 15:39:44.708939 containerd[1453]: time="2025-02-13T15:39:44.708916109Z" level=info msg="TearDown network for sandbox \"ee6732b5b77113a6205089d42119dad4c5eee465a9929a1fc3fc3e114a8fca23\" successfully"
Feb 13 15:39:44.708939 containerd[1453]: time="2025-02-13T15:39:44.708929630Z" level=info msg="StopPodSandbox for \"ee6732b5b77113a6205089d42119dad4c5eee465a9929a1fc3fc3e114a8fca23\" returns successfully"
Feb 13 15:39:44.709300 containerd[1453]: time="2025-02-13T15:39:44.709260888Z" level=info msg="TearDown network for sandbox \"d707feadb9268fa011363e35e4a444418fcd0f37bc28decb570859962b6aafb6\" successfully"
Feb 13 15:39:44.709300 containerd[1453]: time="2025-02-13T15:39:44.709291890Z" level=info msg="StopPodSandbox for \"d707feadb9268fa011363e35e4a444418fcd0f37bc28decb570859962b6aafb6\" returns successfully"
Feb 13 15:39:44.709371 containerd[1453]: time="2025-02-13T15:39:44.709357773Z" level=info msg="StopPodSandbox for \"c7b8464e6806d7515d6d160d433c08824e6e2de7760100550e98ff9d2b183f53\""
Feb 13 15:39:44.709367 systemd[1]: run-netns-cni\x2d3382f11b\x2d2995\x2dbc37\x2d0af1\x2d07156491f4b1.mount: Deactivated successfully.
Feb 13 15:39:44.709479 containerd[1453]: time="2025-02-13T15:39:44.709420377Z" level=info msg="TearDown network for sandbox \"c7b8464e6806d7515d6d160d433c08824e6e2de7760100550e98ff9d2b183f53\" successfully"
Feb 13 15:39:44.709479 containerd[1453]: time="2025-02-13T15:39:44.709429337Z" level=info msg="StopPodSandbox for \"c7b8464e6806d7515d6d160d433c08824e6e2de7760100550e98ff9d2b183f53\" returns successfully"
Feb 13 15:39:44.710139 containerd[1453]: time="2025-02-13T15:39:44.710054891Z" level=info msg="StopPodSandbox for \"7ea59ef2557339d5092fba62282ec76fa969b68ad495c317bb92c97891321274\""
Feb 13 15:39:44.710139 containerd[1453]: time="2025-02-13T15:39:44.710082493Z" level=info msg="StopPodSandbox for \"04e549bcbe5ef51d7da81536fa57c286356e230b0a4a8de09f3b8b24843dcce6\""
Feb 13 15:39:44.710139 containerd[1453]: time="2025-02-13T15:39:44.710132736Z" level=info msg="TearDown network for sandbox \"7ea59ef2557339d5092fba62282ec76fa969b68ad495c317bb92c97891321274\" successfully"
Feb 13 15:39:44.710139 containerd[1453]: time="2025-02-13T15:39:44.710142136Z" level=info msg="StopPodSandbox for \"7ea59ef2557339d5092fba62282ec76fa969b68ad495c317bb92c97891321274\" returns successfully"
Feb 13 15:39:44.711571 containerd[1453]: time="2025-02-13T15:39:44.710159937Z" level=info msg="TearDown network for sandbox \"04e549bcbe5ef51d7da81536fa57c286356e230b0a4a8de09f3b8b24843dcce6\" successfully"
Feb 13 15:39:44.711571 containerd[1453]: time="2025-02-13T15:39:44.710171098Z" level=info msg="StopPodSandbox for \"04e549bcbe5ef51d7da81536fa57c286356e230b0a4a8de09f3b8b24843dcce6\" returns successfully"
Feb 13 15:39:44.711571 containerd[1453]: time="2025-02-13T15:39:44.710944580Z" level=info msg="RunPodSandbox for
&PodSandboxMetadata{Name:calico-kube-controllers-67d55cd4f9-c8fqd,Uid:70d03d25-2cd5-469b-b092-195e4bf21efe,Namespace:calico-system,Attempt:4,}" Feb 13 15:39:44.711571 containerd[1453]: time="2025-02-13T15:39:44.711050146Z" level=info msg="StopPodSandbox for \"5c6df23e15af22b2b3475eb65ba94a3e2b7eca755b23c91b1d64f81c31949671\"" Feb 13 15:39:44.711571 containerd[1453]: time="2025-02-13T15:39:44.711124670Z" level=info msg="TearDown network for sandbox \"5c6df23e15af22b2b3475eb65ba94a3e2b7eca755b23c91b1d64f81c31949671\" successfully" Feb 13 15:39:44.711571 containerd[1453]: time="2025-02-13T15:39:44.711134151Z" level=info msg="StopPodSandbox for \"5c6df23e15af22b2b3475eb65ba94a3e2b7eca755b23c91b1d64f81c31949671\" returns successfully" Feb 13 15:39:44.711732 kubelet[2589]: I0213 15:39:44.711354 2589 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71de9d911f60af43c2548ea926c32ea6d591d73f40411c67079cd659cd718877" Feb 13 15:39:44.712140 containerd[1453]: time="2025-02-13T15:39:44.712099364Z" level=info msg="StopPodSandbox for \"71de9d911f60af43c2548ea926c32ea6d591d73f40411c67079cd659cd718877\"" Feb 13 15:39:44.712248 containerd[1453]: time="2025-02-13T15:39:44.712221930Z" level=info msg="StopPodSandbox for \"1366802ebaa9a45a0b2a61073284e55df3a597069127539e15d8cfea7f6ea1a6\"" Feb 13 15:39:44.712305 containerd[1453]: time="2025-02-13T15:39:44.712249172Z" level=info msg="Ensure that sandbox 71de9d911f60af43c2548ea926c32ea6d591d73f40411c67079cd659cd718877 in task-service has been cleanup successfully" Feb 13 15:39:44.712336 containerd[1453]: time="2025-02-13T15:39:44.712317736Z" level=info msg="TearDown network for sandbox \"1366802ebaa9a45a0b2a61073284e55df3a597069127539e15d8cfea7f6ea1a6\" successfully" Feb 13 15:39:44.712336 containerd[1453]: time="2025-02-13T15:39:44.712329296Z" level=info msg="StopPodSandbox for \"1366802ebaa9a45a0b2a61073284e55df3a597069127539e15d8cfea7f6ea1a6\" returns successfully" Feb 13 15:39:44.712882 
containerd[1453]: time="2025-02-13T15:39:44.712853525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-655c6976bf-p7qqq,Uid:95d7909a-cd44-4f88-af35-6de766421d4b,Namespace:calico-apiserver,Attempt:4,}" Feb 13 15:39:44.713119 containerd[1453]: time="2025-02-13T15:39:44.713076457Z" level=info msg="TearDown network for sandbox \"71de9d911f60af43c2548ea926c32ea6d591d73f40411c67079cd659cd718877\" successfully" Feb 13 15:39:44.713119 containerd[1453]: time="2025-02-13T15:39:44.713100539Z" level=info msg="StopPodSandbox for \"71de9d911f60af43c2548ea926c32ea6d591d73f40411c67079cd659cd718877\" returns successfully" Feb 13 15:39:44.713304 systemd[1]: run-netns-cni\x2d8c181ef5\x2ddf6b\x2db6f7\x2d448c\x2daf9280cad5b1.mount: Deactivated successfully. Feb 13 15:39:44.713613 containerd[1453]: time="2025-02-13T15:39:44.713381354Z" level=info msg="StopPodSandbox for \"5b053525623b9497da7e1b2e770d86673655bd7d93820efc534aa58222203703\"" Feb 13 15:39:44.713613 containerd[1453]: time="2025-02-13T15:39:44.713476679Z" level=info msg="TearDown network for sandbox \"5b053525623b9497da7e1b2e770d86673655bd7d93820efc534aa58222203703\" successfully" Feb 13 15:39:44.713613 containerd[1453]: time="2025-02-13T15:39:44.713488680Z" level=info msg="StopPodSandbox for \"5b053525623b9497da7e1b2e770d86673655bd7d93820efc534aa58222203703\" returns successfully" Feb 13 15:39:44.714156 containerd[1453]: time="2025-02-13T15:39:44.713907503Z" level=info msg="StopPodSandbox for \"5877426208bb0ec6f3b0bd6273d2a61f178cd37093bfdab85cd100b691fb5424\"" Feb 13 15:39:44.714156 containerd[1453]: time="2025-02-13T15:39:44.713997948Z" level=info msg="TearDown network for sandbox \"5877426208bb0ec6f3b0bd6273d2a61f178cd37093bfdab85cd100b691fb5424\" successfully" Feb 13 15:39:44.714156 containerd[1453]: time="2025-02-13T15:39:44.714009828Z" level=info msg="StopPodSandbox for \"5877426208bb0ec6f3b0bd6273d2a61f178cd37093bfdab85cd100b691fb5424\" returns successfully" Feb 13 15:39:44.714523 
containerd[1453]: time="2025-02-13T15:39:44.714479734Z" level=info msg="StopPodSandbox for \"cda662325bbcccb4b3b73cb9c1a4ba575d4900e6a01847f7536483bed3456eb0\"" Feb 13 15:39:44.715014 containerd[1453]: time="2025-02-13T15:39:44.714571779Z" level=info msg="TearDown network for sandbox \"cda662325bbcccb4b3b73cb9c1a4ba575d4900e6a01847f7536483bed3456eb0\" successfully" Feb 13 15:39:44.715014 containerd[1453]: time="2025-02-13T15:39:44.714589820Z" level=info msg="StopPodSandbox for \"cda662325bbcccb4b3b73cb9c1a4ba575d4900e6a01847f7536483bed3456eb0\" returns successfully" Feb 13 15:39:44.715259 kubelet[2589]: E0213 15:39:44.715240 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:39:44.715577 containerd[1453]: time="2025-02-13T15:39:44.715544153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-tzqqh,Uid:22169313-af53-4d8b-b855-dc02e6d1e640,Namespace:kube-system,Attempt:4,}" Feb 13 15:39:44.715790 kubelet[2589]: I0213 15:39:44.715769 2589 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0109edb01872ea6ddbb1270e554366cbd4b5a05c3fb263f4739b76a9f131107" Feb 13 15:39:44.717057 systemd[1]: run-netns-cni\x2d997e3a55\x2d7293\x2d20b4\x2dd5aa\x2deefb1636b4de.mount: Deactivated successfully. 
Feb 13 15:39:44.717671 containerd[1453]: time="2025-02-13T15:39:44.717570984Z" level=info msg="StopPodSandbox for \"c0109edb01872ea6ddbb1270e554366cbd4b5a05c3fb263f4739b76a9f131107\"" Feb 13 15:39:44.718329 containerd[1453]: time="2025-02-13T15:39:44.717754434Z" level=info msg="Ensure that sandbox c0109edb01872ea6ddbb1270e554366cbd4b5a05c3fb263f4739b76a9f131107 in task-service has been cleanup successfully" Feb 13 15:39:44.718329 containerd[1453]: time="2025-02-13T15:39:44.718234740Z" level=info msg="TearDown network for sandbox \"c0109edb01872ea6ddbb1270e554366cbd4b5a05c3fb263f4739b76a9f131107\" successfully" Feb 13 15:39:44.718329 containerd[1453]: time="2025-02-13T15:39:44.718250021Z" level=info msg="StopPodSandbox for \"c0109edb01872ea6ddbb1270e554366cbd4b5a05c3fb263f4739b76a9f131107\" returns successfully" Feb 13 15:39:44.718588 containerd[1453]: time="2025-02-13T15:39:44.718556478Z" level=info msg="StopPodSandbox for \"ad5a92ae00ae485f6dedd5b8be0955d3e9faa55341d054bfa14ae18b47ab5efd\"" Feb 13 15:39:44.718679 containerd[1453]: time="2025-02-13T15:39:44.718654643Z" level=info msg="TearDown network for sandbox \"ad5a92ae00ae485f6dedd5b8be0955d3e9faa55341d054bfa14ae18b47ab5efd\" successfully" Feb 13 15:39:44.718679 containerd[1453]: time="2025-02-13T15:39:44.718669324Z" level=info msg="StopPodSandbox for \"ad5a92ae00ae485f6dedd5b8be0955d3e9faa55341d054bfa14ae18b47ab5efd\" returns successfully" Feb 13 15:39:44.719195 kubelet[2589]: I0213 15:39:44.718890 2589 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="757cc0d2ff1044664c04e58b47b1a5006b422f31425ed9c06bacf1bc76f6ad3e" Feb 13 15:39:44.719389 containerd[1453]: time="2025-02-13T15:39:44.719330761Z" level=info msg="StopPodSandbox for \"757cc0d2ff1044664c04e58b47b1a5006b422f31425ed9c06bacf1bc76f6ad3e\"" Feb 13 15:39:44.719641 containerd[1453]: time="2025-02-13T15:39:44.719485249Z" level=info msg="Ensure that sandbox 757cc0d2ff1044664c04e58b47b1a5006b422f31425ed9c06bacf1bc76f6ad3e 
in task-service has been cleanup successfully" Feb 13 15:39:44.719693 containerd[1453]: time="2025-02-13T15:39:44.719648618Z" level=info msg="TearDown network for sandbox \"757cc0d2ff1044664c04e58b47b1a5006b422f31425ed9c06bacf1bc76f6ad3e\" successfully" Feb 13 15:39:44.719693 containerd[1453]: time="2025-02-13T15:39:44.719662979Z" level=info msg="StopPodSandbox for \"757cc0d2ff1044664c04e58b47b1a5006b422f31425ed9c06bacf1bc76f6ad3e\" returns successfully" Feb 13 15:39:44.719740 containerd[1453]: time="2025-02-13T15:39:44.719707221Z" level=info msg="StopPodSandbox for \"50cf6045c8a1ddf321bab1e6e9a3f1fc9f3a297088fc52a6e468af940cbe1c10\"" Feb 13 15:39:44.720062 containerd[1453]: time="2025-02-13T15:39:44.719786386Z" level=info msg="TearDown network for sandbox \"50cf6045c8a1ddf321bab1e6e9a3f1fc9f3a297088fc52a6e468af940cbe1c10\" successfully" Feb 13 15:39:44.720062 containerd[1453]: time="2025-02-13T15:39:44.719801346Z" level=info msg="StopPodSandbox for \"50cf6045c8a1ddf321bab1e6e9a3f1fc9f3a297088fc52a6e468af940cbe1c10\" returns successfully" Feb 13 15:39:44.720118 containerd[1453]: time="2025-02-13T15:39:44.720086122Z" level=info msg="StopPodSandbox for \"396a2480d2093d2717a34e8bc7fb57dabb533a866511490272198d8e3429975d\"" Feb 13 15:39:44.722468 containerd[1453]: time="2025-02-13T15:39:44.720171487Z" level=info msg="TearDown network for sandbox \"396a2480d2093d2717a34e8bc7fb57dabb533a866511490272198d8e3429975d\" successfully" Feb 13 15:39:44.722468 containerd[1453]: time="2025-02-13T15:39:44.720193288Z" level=info msg="StopPodSandbox for \"396a2480d2093d2717a34e8bc7fb57dabb533a866511490272198d8e3429975d\" returns successfully" Feb 13 15:39:44.722468 containerd[1453]: time="2025-02-13T15:39:44.720677034Z" level=info msg="StopPodSandbox for \"49e61bdc9174d5527e1c7974cc761bf4ead3dd292c49b47e6a610d38ca0b74ec\"" Feb 13 15:39:44.722468 containerd[1453]: time="2025-02-13T15:39:44.720713916Z" level=info msg="StopPodSandbox for 
\"323440efec752a3eb15eea898bfaffb5e63aebd2a734fd29578ba11bb1b33be8\"" Feb 13 15:39:44.722468 containerd[1453]: time="2025-02-13T15:39:44.720749478Z" level=info msg="TearDown network for sandbox \"49e61bdc9174d5527e1c7974cc761bf4ead3dd292c49b47e6a610d38ca0b74ec\" successfully" Feb 13 15:39:44.722468 containerd[1453]: time="2025-02-13T15:39:44.720760479Z" level=info msg="StopPodSandbox for \"49e61bdc9174d5527e1c7974cc761bf4ead3dd292c49b47e6a610d38ca0b74ec\" returns successfully" Feb 13 15:39:44.722468 containerd[1453]: time="2025-02-13T15:39:44.720786880Z" level=info msg="TearDown network for sandbox \"323440efec752a3eb15eea898bfaffb5e63aebd2a734fd29578ba11bb1b33be8\" successfully" Feb 13 15:39:44.722468 containerd[1453]: time="2025-02-13T15:39:44.721119139Z" level=info msg="StopPodSandbox for \"bbe0e585cf74fab1708cc4fe366a97dc7b448d8d7a1e597df3010cf40cbe9ec3\"" Feb 13 15:39:44.722468 containerd[1453]: time="2025-02-13T15:39:44.721271227Z" level=info msg="TearDown network for sandbox \"bbe0e585cf74fab1708cc4fe366a97dc7b448d8d7a1e597df3010cf40cbe9ec3\" successfully" Feb 13 15:39:44.722468 containerd[1453]: time="2025-02-13T15:39:44.721293108Z" level=info msg="StopPodSandbox for \"bbe0e585cf74fab1708cc4fe366a97dc7b448d8d7a1e597df3010cf40cbe9ec3\" returns successfully" Feb 13 15:39:44.722468 containerd[1453]: time="2025-02-13T15:39:44.721469078Z" level=info msg="StopPodSandbox for \"323440efec752a3eb15eea898bfaffb5e63aebd2a734fd29578ba11bb1b33be8\" returns successfully" Feb 13 15:39:44.722468 containerd[1453]: time="2025-02-13T15:39:44.721834298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-m78g5,Uid:74996f45-87e3-49ee-bffd-dfcfa7bb4a84,Namespace:kube-system,Attempt:4,}" Feb 13 15:39:44.722468 containerd[1453]: time="2025-02-13T15:39:44.722111553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9jl5n,Uid:0c3e32e2-3a7c-428a-a18f-8761ef2b92d8,Namespace:calico-system,Attempt:4,}" Feb 13 15:39:44.722785 
kubelet[2589]: E0213 15:39:44.721573 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:39:45.079362 containerd[1453]: time="2025-02-13T15:39:45.079288604Z" level=error msg="Failed to destroy network for sandbox \"993517d1fc77735746720ed6d1052b7f574a391e2aa3dd3c4e1ac176cb243e90\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:45.080665 containerd[1453]: time="2025-02-13T15:39:45.080408904Z" level=error msg="encountered an error cleaning up failed sandbox \"993517d1fc77735746720ed6d1052b7f574a391e2aa3dd3c4e1ac176cb243e90\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:45.080665 containerd[1453]: time="2025-02-13T15:39:45.080509549Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-655c6976bf-dltfc,Uid:043eaf10-8df2-4749-97a8-7923e4159aba,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"993517d1fc77735746720ed6d1052b7f574a391e2aa3dd3c4e1ac176cb243e90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:45.080817 kubelet[2589]: E0213 15:39:45.080756 2589 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"993517d1fc77735746720ed6d1052b7f574a391e2aa3dd3c4e1ac176cb243e90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:45.080906 kubelet[2589]: E0213 15:39:45.080821 2589 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"993517d1fc77735746720ed6d1052b7f574a391e2aa3dd3c4e1ac176cb243e90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-655c6976bf-dltfc" Feb 13 15:39:45.080906 kubelet[2589]: E0213 15:39:45.080842 2589 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"993517d1fc77735746720ed6d1052b7f574a391e2aa3dd3c4e1ac176cb243e90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-655c6976bf-dltfc" Feb 13 15:39:45.080906 kubelet[2589]: E0213 15:39:45.080896 2589 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-655c6976bf-dltfc_calico-apiserver(043eaf10-8df2-4749-97a8-7923e4159aba)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-655c6976bf-dltfc_calico-apiserver(043eaf10-8df2-4749-97a8-7923e4159aba)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"993517d1fc77735746720ed6d1052b7f574a391e2aa3dd3c4e1ac176cb243e90\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-655c6976bf-dltfc" podUID="043eaf10-8df2-4749-97a8-7923e4159aba" Feb 13 15:39:45.088258 containerd[1453]: time="2025-02-13T15:39:45.087923946Z" 
level=error msg="Failed to destroy network for sandbox \"eafde1813a5db1480e193a3196ba6983d3672af483e22d08b7ea3d97fbf2a9c2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:45.089849 containerd[1453]: time="2025-02-13T15:39:45.089795126Z" level=error msg="encountered an error cleaning up failed sandbox \"eafde1813a5db1480e193a3196ba6983d3672af483e22d08b7ea3d97fbf2a9c2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:45.089956 containerd[1453]: time="2025-02-13T15:39:45.089878010Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-tzqqh,Uid:22169313-af53-4d8b-b855-dc02e6d1e640,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"eafde1813a5db1480e193a3196ba6983d3672af483e22d08b7ea3d97fbf2a9c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:45.090142 kubelet[2589]: E0213 15:39:45.090114 2589 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eafde1813a5db1480e193a3196ba6983d3672af483e22d08b7ea3d97fbf2a9c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:45.090223 kubelet[2589]: E0213 15:39:45.090170 2589 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"eafde1813a5db1480e193a3196ba6983d3672af483e22d08b7ea3d97fbf2a9c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-tzqqh" Feb 13 15:39:45.090223 kubelet[2589]: E0213 15:39:45.090191 2589 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eafde1813a5db1480e193a3196ba6983d3672af483e22d08b7ea3d97fbf2a9c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-tzqqh" Feb 13 15:39:45.090293 kubelet[2589]: E0213 15:39:45.090240 2589 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-tzqqh_kube-system(22169313-af53-4d8b-b855-dc02e6d1e640)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-tzqqh_kube-system(22169313-af53-4d8b-b855-dc02e6d1e640)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eafde1813a5db1480e193a3196ba6983d3672af483e22d08b7ea3d97fbf2a9c2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-tzqqh" podUID="22169313-af53-4d8b-b855-dc02e6d1e640" Feb 13 15:39:45.103398 containerd[1453]: time="2025-02-13T15:39:45.103324849Z" level=error msg="Failed to destroy network for sandbox \"39a88ace6d7473b18090855153397f0fb6353c83591be1df80145ca5ac8b441d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:45.104466 
containerd[1453]: time="2025-02-13T15:39:45.103746031Z" level=error msg="encountered an error cleaning up failed sandbox \"39a88ace6d7473b18090855153397f0fb6353c83591be1df80145ca5ac8b441d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:45.104466 containerd[1453]: time="2025-02-13T15:39:45.103813635Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67d55cd4f9-c8fqd,Uid:70d03d25-2cd5-469b-b092-195e4bf21efe,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"39a88ace6d7473b18090855153397f0fb6353c83591be1df80145ca5ac8b441d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:45.104642 kubelet[2589]: E0213 15:39:45.104601 2589 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39a88ace6d7473b18090855153397f0fb6353c83591be1df80145ca5ac8b441d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:45.104716 kubelet[2589]: E0213 15:39:45.104674 2589 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39a88ace6d7473b18090855153397f0fb6353c83591be1df80145ca5ac8b441d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-67d55cd4f9-c8fqd" Feb 13 15:39:45.104716 kubelet[2589]: E0213 
15:39:45.104703 2589 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39a88ace6d7473b18090855153397f0fb6353c83591be1df80145ca5ac8b441d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-67d55cd4f9-c8fqd" Feb 13 15:39:45.104909 kubelet[2589]: E0213 15:39:45.104893 2589 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-67d55cd4f9-c8fqd_calico-system(70d03d25-2cd5-469b-b092-195e4bf21efe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-67d55cd4f9-c8fqd_calico-system(70d03d25-2cd5-469b-b092-195e4bf21efe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"39a88ace6d7473b18090855153397f0fb6353c83591be1df80145ca5ac8b441d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-67d55cd4f9-c8fqd" podUID="70d03d25-2cd5-469b-b092-195e4bf21efe" Feb 13 15:39:45.109742 containerd[1453]: time="2025-02-13T15:39:45.109575783Z" level=error msg="Failed to destroy network for sandbox \"30a389c08acdff94511aaf20ccbee0dfab1cf917dd7515609cef1dd2a3cb4d98\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:45.110700 containerd[1453]: time="2025-02-13T15:39:45.110572276Z" level=error msg="encountered an error cleaning up failed sandbox \"30a389c08acdff94511aaf20ccbee0dfab1cf917dd7515609cef1dd2a3cb4d98\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:45.110965 containerd[1453]: time="2025-02-13T15:39:45.110897053Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-m78g5,Uid:74996f45-87e3-49ee-bffd-dfcfa7bb4a84,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"30a389c08acdff94511aaf20ccbee0dfab1cf917dd7515609cef1dd2a3cb4d98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:45.111492 kubelet[2589]: E0213 15:39:45.111440 2589 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30a389c08acdff94511aaf20ccbee0dfab1cf917dd7515609cef1dd2a3cb4d98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:45.111492 kubelet[2589]: E0213 15:39:45.111521 2589 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30a389c08acdff94511aaf20ccbee0dfab1cf917dd7515609cef1dd2a3cb4d98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-m78g5" Feb 13 15:39:45.111672 kubelet[2589]: E0213 15:39:45.111544 2589 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30a389c08acdff94511aaf20ccbee0dfab1cf917dd7515609cef1dd2a3cb4d98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-m78g5" Feb 13 15:39:45.111672 kubelet[2589]: E0213 15:39:45.111593 2589 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-m78g5_kube-system(74996f45-87e3-49ee-bffd-dfcfa7bb4a84)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-m78g5_kube-system(74996f45-87e3-49ee-bffd-dfcfa7bb4a84)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"30a389c08acdff94511aaf20ccbee0dfab1cf917dd7515609cef1dd2a3cb4d98\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-m78g5" podUID="74996f45-87e3-49ee-bffd-dfcfa7bb4a84" Feb 13 15:39:45.121903 containerd[1453]: time="2025-02-13T15:39:45.121854679Z" level=error msg="Failed to destroy network for sandbox \"3a525c6ec978a5a69c0212d35cd2b17b293d11491a62157f6c3ed38426d316ae\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:45.122310 containerd[1453]: time="2025-02-13T15:39:45.122273261Z" level=error msg="encountered an error cleaning up failed sandbox \"3a525c6ec978a5a69c0212d35cd2b17b293d11491a62157f6c3ed38426d316ae\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:45.122370 containerd[1453]: time="2025-02-13T15:39:45.122346585Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-655c6976bf-p7qqq,Uid:95d7909a-cd44-4f88-af35-6de766421d4b,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"3a525c6ec978a5a69c0212d35cd2b17b293d11491a62157f6c3ed38426d316ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:45.122617 kubelet[2589]: E0213 15:39:45.122592 2589 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a525c6ec978a5a69c0212d35cd2b17b293d11491a62157f6c3ed38426d316ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:45.122688 kubelet[2589]: E0213 15:39:45.122648 2589 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a525c6ec978a5a69c0212d35cd2b17b293d11491a62157f6c3ed38426d316ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-655c6976bf-p7qqq" Feb 13 15:39:45.122688 kubelet[2589]: E0213 15:39:45.122668 2589 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a525c6ec978a5a69c0212d35cd2b17b293d11491a62157f6c3ed38426d316ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-655c6976bf-p7qqq" Feb 13 15:39:45.122805 kubelet[2589]: E0213 15:39:45.122787 2589 pod_workers.go:1298] 
"Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-655c6976bf-p7qqq_calico-apiserver(95d7909a-cd44-4f88-af35-6de766421d4b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-655c6976bf-p7qqq_calico-apiserver(95d7909a-cd44-4f88-af35-6de766421d4b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3a525c6ec978a5a69c0212d35cd2b17b293d11491a62157f6c3ed38426d316ae\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-655c6976bf-p7qqq" podUID="95d7909a-cd44-4f88-af35-6de766421d4b" Feb 13 15:39:45.125148 containerd[1453]: time="2025-02-13T15:39:45.125040929Z" level=error msg="Failed to destroy network for sandbox \"6eb389d8019cf13e9ce3bc3cf2e15a9d423f6abd94bc80f011fe56a3be87a4b4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:45.125696 containerd[1453]: time="2025-02-13T15:39:45.125543676Z" level=error msg="encountered an error cleaning up failed sandbox \"6eb389d8019cf13e9ce3bc3cf2e15a9d423f6abd94bc80f011fe56a3be87a4b4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:45.125696 containerd[1453]: time="2025-02-13T15:39:45.125606279Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9jl5n,Uid:0c3e32e2-3a7c-428a-a18f-8761ef2b92d8,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"6eb389d8019cf13e9ce3bc3cf2e15a9d423f6abd94bc80f011fe56a3be87a4b4\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:45.125873 kubelet[2589]: E0213 15:39:45.125847 2589 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6eb389d8019cf13e9ce3bc3cf2e15a9d423f6abd94bc80f011fe56a3be87a4b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:45.125985 kubelet[2589]: E0213 15:39:45.125932 2589 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6eb389d8019cf13e9ce3bc3cf2e15a9d423f6abd94bc80f011fe56a3be87a4b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9jl5n" Feb 13 15:39:45.125985 kubelet[2589]: E0213 15:39:45.125977 2589 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6eb389d8019cf13e9ce3bc3cf2e15a9d423f6abd94bc80f011fe56a3be87a4b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9jl5n" Feb 13 15:39:45.126196 kubelet[2589]: E0213 15:39:45.126040 2589 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9jl5n_calico-system(0c3e32e2-3a7c-428a-a18f-8761ef2b92d8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9jl5n_calico-system(0c3e32e2-3a7c-428a-a18f-8761ef2b92d8)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"6eb389d8019cf13e9ce3bc3cf2e15a9d423f6abd94bc80f011fe56a3be87a4b4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9jl5n" podUID="0c3e32e2-3a7c-428a-a18f-8761ef2b92d8" Feb 13 15:39:45.274293 systemd[1]: run-netns-cni\x2df5526c5b\x2ddd0d\x2dec64\x2da3fc\x2da25d7f95c35c.mount: Deactivated successfully. Feb 13 15:39:45.274611 systemd[1]: run-netns-cni\x2df5d86bb8\x2d6eb0\x2dee58\x2dc292\x2da459eadd61b1.mount: Deactivated successfully. Feb 13 15:39:45.360129 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount152127925.mount: Deactivated successfully. Feb 13 15:39:45.388082 containerd[1453]: time="2025-02-13T15:39:45.388020501Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Feb 13 15:39:45.392474 containerd[1453]: time="2025-02-13T15:39:45.391896148Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 4.775200714s" Feb 13 15:39:45.392474 containerd[1453]: time="2025-02-13T15:39:45.391936510Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Feb 13 15:39:45.403488 containerd[1453]: time="2025-02-13T15:39:45.403427364Z" level=info msg="CreateContainer within sandbox \"7d42b31540996f15e1b192650239e666ab57fe20e14c2fc34a523a35cc8b7fc4\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 15:39:45.410920 containerd[1453]: time="2025-02-13T15:39:45.410858721Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:45.411713 containerd[1453]: time="2025-02-13T15:39:45.411668204Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:45.412226 containerd[1453]: time="2025-02-13T15:39:45.412191272Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:45.714667 containerd[1453]: time="2025-02-13T15:39:45.714470704Z" level=info msg="CreateContainer within sandbox \"7d42b31540996f15e1b192650239e666ab57fe20e14c2fc34a523a35cc8b7fc4\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"db08ac59712247809f17eed47cc5d497b5d5902b602a7b1246df04d618c92e1c\"" Feb 13 15:39:45.715402 containerd[1453]: time="2025-02-13T15:39:45.715251586Z" level=info msg="StartContainer for \"db08ac59712247809f17eed47cc5d497b5d5902b602a7b1246df04d618c92e1c\"" Feb 13 15:39:45.723779 kubelet[2589]: I0213 15:39:45.722934 2589 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39a88ace6d7473b18090855153397f0fb6353c83591be1df80145ca5ac8b441d" Feb 13 15:39:45.724109 containerd[1453]: time="2025-02-13T15:39:45.723435783Z" level=info msg="StopPodSandbox for \"39a88ace6d7473b18090855153397f0fb6353c83591be1df80145ca5ac8b441d\"" Feb 13 15:39:45.724109 containerd[1453]: time="2025-02-13T15:39:45.723608993Z" level=info msg="Ensure that sandbox 39a88ace6d7473b18090855153397f0fb6353c83591be1df80145ca5ac8b441d in task-service has been cleanup successfully" Feb 13 15:39:45.724288 containerd[1453]: time="2025-02-13T15:39:45.724261387Z" level=info msg="TearDown network for sandbox 
\"39a88ace6d7473b18090855153397f0fb6353c83591be1df80145ca5ac8b441d\" successfully" Feb 13 15:39:45.724560 containerd[1453]: time="2025-02-13T15:39:45.724541482Z" level=info msg="StopPodSandbox for \"39a88ace6d7473b18090855153397f0fb6353c83591be1df80145ca5ac8b441d\" returns successfully" Feb 13 15:39:45.725123 containerd[1453]: time="2025-02-13T15:39:45.725097952Z" level=info msg="StopPodSandbox for \"83168c2b6aaee4a8905d06ddc03e782efd545d713d6d0b790b340a1e7610b3b8\"" Feb 13 15:39:45.725567 containerd[1453]: time="2025-02-13T15:39:45.725510654Z" level=info msg="TearDown network for sandbox \"83168c2b6aaee4a8905d06ddc03e782efd545d713d6d0b790b340a1e7610b3b8\" successfully" Feb 13 15:39:45.725567 containerd[1453]: time="2025-02-13T15:39:45.725544856Z" level=info msg="StopPodSandbox for \"83168c2b6aaee4a8905d06ddc03e782efd545d713d6d0b790b340a1e7610b3b8\" returns successfully" Feb 13 15:39:45.726002 containerd[1453]: time="2025-02-13T15:39:45.725943397Z" level=info msg="StopPodSandbox for \"ee6732b5b77113a6205089d42119dad4c5eee465a9929a1fc3fc3e114a8fca23\"" Feb 13 15:39:45.726366 kubelet[2589]: I0213 15:39:45.726344 2589 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eafde1813a5db1480e193a3196ba6983d3672af483e22d08b7ea3d97fbf2a9c2" Feb 13 15:39:45.726825 containerd[1453]: time="2025-02-13T15:39:45.726720279Z" level=info msg="TearDown network for sandbox \"ee6732b5b77113a6205089d42119dad4c5eee465a9929a1fc3fc3e114a8fca23\" successfully" Feb 13 15:39:45.726825 containerd[1453]: time="2025-02-13T15:39:45.726744520Z" level=info msg="StopPodSandbox for \"ee6732b5b77113a6205089d42119dad4c5eee465a9929a1fc3fc3e114a8fca23\" returns successfully" Feb 13 15:39:45.726903 containerd[1453]: time="2025-02-13T15:39:45.726823124Z" level=info msg="StopPodSandbox for \"eafde1813a5db1480e193a3196ba6983d3672af483e22d08b7ea3d97fbf2a9c2\"" Feb 13 15:39:45.726993 containerd[1453]: time="2025-02-13T15:39:45.726968412Z" level=info msg="Ensure that sandbox 
eafde1813a5db1480e193a3196ba6983d3672af483e22d08b7ea3d97fbf2a9c2 in task-service has been cleanup successfully" Feb 13 15:39:45.727126 containerd[1453]: time="2025-02-13T15:39:45.727102819Z" level=info msg="StopPodSandbox for \"c7b8464e6806d7515d6d160d433c08824e6e2de7760100550e98ff9d2b183f53\"" Feb 13 15:39:45.727317 containerd[1453]: time="2025-02-13T15:39:45.727141021Z" level=info msg="TearDown network for sandbox \"eafde1813a5db1480e193a3196ba6983d3672af483e22d08b7ea3d97fbf2a9c2\" successfully" Feb 13 15:39:45.727317 containerd[1453]: time="2025-02-13T15:39:45.727234906Z" level=info msg="StopPodSandbox for \"eafde1813a5db1480e193a3196ba6983d3672af483e22d08b7ea3d97fbf2a9c2\" returns successfully" Feb 13 15:39:45.727317 containerd[1453]: time="2025-02-13T15:39:45.727254907Z" level=info msg="TearDown network for sandbox \"c7b8464e6806d7515d6d160d433c08824e6e2de7760100550e98ff9d2b183f53\" successfully" Feb 13 15:39:45.727317 containerd[1453]: time="2025-02-13T15:39:45.727268828Z" level=info msg="StopPodSandbox for \"c7b8464e6806d7515d6d160d433c08824e6e2de7760100550e98ff9d2b183f53\" returns successfully" Feb 13 15:39:45.727719 containerd[1453]: time="2025-02-13T15:39:45.727597046Z" level=info msg="StopPodSandbox for \"04e549bcbe5ef51d7da81536fa57c286356e230b0a4a8de09f3b8b24843dcce6\"" Feb 13 15:39:45.727719 containerd[1453]: time="2025-02-13T15:39:45.727646808Z" level=info msg="StopPodSandbox for \"71de9d911f60af43c2548ea926c32ea6d591d73f40411c67079cd659cd718877\"" Feb 13 15:39:45.727719 containerd[1453]: time="2025-02-13T15:39:45.727675810Z" level=info msg="TearDown network for sandbox \"04e549bcbe5ef51d7da81536fa57c286356e230b0a4a8de09f3b8b24843dcce6\" successfully" Feb 13 15:39:45.727719 containerd[1453]: time="2025-02-13T15:39:45.727686050Z" level=info msg="StopPodSandbox for \"04e549bcbe5ef51d7da81536fa57c286356e230b0a4a8de09f3b8b24843dcce6\" returns successfully" Feb 13 15:39:45.727847 containerd[1453]: time="2025-02-13T15:39:45.727729613Z" level=info 
msg="TearDown network for sandbox \"71de9d911f60af43c2548ea926c32ea6d591d73f40411c67079cd659cd718877\" successfully" Feb 13 15:39:45.727847 containerd[1453]: time="2025-02-13T15:39:45.727740013Z" level=info msg="StopPodSandbox for \"71de9d911f60af43c2548ea926c32ea6d591d73f40411c67079cd659cd718877\" returns successfully" Feb 13 15:39:45.728396 containerd[1453]: time="2025-02-13T15:39:45.728088152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67d55cd4f9-c8fqd,Uid:70d03d25-2cd5-469b-b092-195e4bf21efe,Namespace:calico-system,Attempt:5,}" Feb 13 15:39:45.728674 containerd[1453]: time="2025-02-13T15:39:45.728651422Z" level=info msg="StopPodSandbox for \"5b053525623b9497da7e1b2e770d86673655bd7d93820efc534aa58222203703\"" Feb 13 15:39:45.728765 containerd[1453]: time="2025-02-13T15:39:45.728723746Z" level=info msg="TearDown network for sandbox \"5b053525623b9497da7e1b2e770d86673655bd7d93820efc534aa58222203703\" successfully" Feb 13 15:39:45.728765 containerd[1453]: time="2025-02-13T15:39:45.728733066Z" level=info msg="StopPodSandbox for \"5b053525623b9497da7e1b2e770d86673655bd7d93820efc534aa58222203703\" returns successfully" Feb 13 15:39:45.729221 containerd[1453]: time="2025-02-13T15:39:45.729197891Z" level=info msg="StopPodSandbox for \"5877426208bb0ec6f3b0bd6273d2a61f178cd37093bfdab85cd100b691fb5424\"" Feb 13 15:39:45.729291 containerd[1453]: time="2025-02-13T15:39:45.729278816Z" level=info msg="TearDown network for sandbox \"5877426208bb0ec6f3b0bd6273d2a61f178cd37093bfdab85cd100b691fb5424\" successfully" Feb 13 15:39:45.729315 containerd[1453]: time="2025-02-13T15:39:45.729291776Z" level=info msg="StopPodSandbox for \"5877426208bb0ec6f3b0bd6273d2a61f178cd37093bfdab85cd100b691fb5424\" returns successfully" Feb 13 15:39:45.729738 containerd[1453]: time="2025-02-13T15:39:45.729715719Z" level=info msg="StopPodSandbox for \"cda662325bbcccb4b3b73cb9c1a4ba575d4900e6a01847f7536483bed3456eb0\"" Feb 13 15:39:45.729819 containerd[1453]: 
time="2025-02-13T15:39:45.729804444Z" level=info msg="TearDown network for sandbox \"cda662325bbcccb4b3b73cb9c1a4ba575d4900e6a01847f7536483bed3456eb0\" successfully" Feb 13 15:39:45.729879 containerd[1453]: time="2025-02-13T15:39:45.729817924Z" level=info msg="StopPodSandbox for \"cda662325bbcccb4b3b73cb9c1a4ba575d4900e6a01847f7536483bed3456eb0\" returns successfully" Feb 13 15:39:45.730120 kubelet[2589]: E0213 15:39:45.730102 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:39:45.730653 kubelet[2589]: I0213 15:39:45.730634 2589 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30a389c08acdff94511aaf20ccbee0dfab1cf917dd7515609cef1dd2a3cb4d98" Feb 13 15:39:45.730817 containerd[1453]: time="2025-02-13T15:39:45.730503521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-tzqqh,Uid:22169313-af53-4d8b-b855-dc02e6d1e640,Namespace:kube-system,Attempt:5,}" Feb 13 15:39:45.731534 containerd[1453]: time="2025-02-13T15:39:45.731505495Z" level=info msg="StopPodSandbox for \"30a389c08acdff94511aaf20ccbee0dfab1cf917dd7515609cef1dd2a3cb4d98\"" Feb 13 15:39:45.731718 containerd[1453]: time="2025-02-13T15:39:45.731691784Z" level=info msg="Ensure that sandbox 30a389c08acdff94511aaf20ccbee0dfab1cf917dd7515609cef1dd2a3cb4d98 in task-service has been cleanup successfully" Feb 13 15:39:45.732042 containerd[1453]: time="2025-02-13T15:39:45.731968279Z" level=info msg="TearDown network for sandbox \"30a389c08acdff94511aaf20ccbee0dfab1cf917dd7515609cef1dd2a3cb4d98\" successfully" Feb 13 15:39:45.732069 containerd[1453]: time="2025-02-13T15:39:45.732037883Z" level=info msg="StopPodSandbox for \"30a389c08acdff94511aaf20ccbee0dfab1cf917dd7515609cef1dd2a3cb4d98\" returns successfully" Feb 13 15:39:45.732859 containerd[1453]: time="2025-02-13T15:39:45.732327298Z" level=info msg="StopPodSandbox for 
\"757cc0d2ff1044664c04e58b47b1a5006b422f31425ed9c06bacf1bc76f6ad3e\"" Feb 13 15:39:45.732964 containerd[1453]: time="2025-02-13T15:39:45.732945891Z" level=info msg="TearDown network for sandbox \"757cc0d2ff1044664c04e58b47b1a5006b422f31425ed9c06bacf1bc76f6ad3e\" successfully" Feb 13 15:39:45.732991 containerd[1453]: time="2025-02-13T15:39:45.732965413Z" level=info msg="StopPodSandbox for \"757cc0d2ff1044664c04e58b47b1a5006b422f31425ed9c06bacf1bc76f6ad3e\" returns successfully" Feb 13 15:39:45.736097 containerd[1453]: time="2025-02-13T15:39:45.736062538Z" level=info msg="StopPodSandbox for \"396a2480d2093d2717a34e8bc7fb57dabb533a866511490272198d8e3429975d\"" Feb 13 15:39:45.736296 containerd[1453]: time="2025-02-13T15:39:45.736267029Z" level=info msg="TearDown network for sandbox \"396a2480d2093d2717a34e8bc7fb57dabb533a866511490272198d8e3429975d\" successfully" Feb 13 15:39:45.736296 containerd[1453]: time="2025-02-13T15:39:45.736288950Z" level=info msg="StopPodSandbox for \"396a2480d2093d2717a34e8bc7fb57dabb533a866511490272198d8e3429975d\" returns successfully" Feb 13 15:39:45.737227 kubelet[2589]: I0213 15:39:45.737200 2589 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="993517d1fc77735746720ed6d1052b7f574a391e2aa3dd3c4e1ac176cb243e90" Feb 13 15:39:45.737557 containerd[1453]: time="2025-02-13T15:39:45.737529736Z" level=info msg="StopPodSandbox for \"49e61bdc9174d5527e1c7974cc761bf4ead3dd292c49b47e6a610d38ca0b74ec\"" Feb 13 15:39:45.737674 containerd[1453]: time="2025-02-13T15:39:45.737633022Z" level=info msg="TearDown network for sandbox \"49e61bdc9174d5527e1c7974cc761bf4ead3dd292c49b47e6a610d38ca0b74ec\" successfully" Feb 13 15:39:45.737674 containerd[1453]: time="2025-02-13T15:39:45.737651263Z" level=info msg="StopPodSandbox for \"49e61bdc9174d5527e1c7974cc761bf4ead3dd292c49b47e6a610d38ca0b74ec\" returns successfully" Feb 13 15:39:45.738308 containerd[1453]: time="2025-02-13T15:39:45.738131169Z" level=info msg="StopPodSandbox 
for \"bbe0e585cf74fab1708cc4fe366a97dc7b448d8d7a1e597df3010cf40cbe9ec3\"" Feb 13 15:39:45.738308 containerd[1453]: time="2025-02-13T15:39:45.738227934Z" level=info msg="TearDown network for sandbox \"bbe0e585cf74fab1708cc4fe366a97dc7b448d8d7a1e597df3010cf40cbe9ec3\" successfully" Feb 13 15:39:45.738308 containerd[1453]: time="2025-02-13T15:39:45.738240494Z" level=info msg="StopPodSandbox for \"bbe0e585cf74fab1708cc4fe366a97dc7b448d8d7a1e597df3010cf40cbe9ec3\" returns successfully" Feb 13 15:39:45.738308 containerd[1453]: time="2025-02-13T15:39:45.738140329Z" level=info msg="StopPodSandbox for \"993517d1fc77735746720ed6d1052b7f574a391e2aa3dd3c4e1ac176cb243e90\"" Feb 13 15:39:45.738430 kubelet[2589]: E0213 15:39:45.738407 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:39:45.738934 containerd[1453]: time="2025-02-13T15:39:45.738750722Z" level=info msg="Ensure that sandbox 993517d1fc77735746720ed6d1052b7f574a391e2aa3dd3c4e1ac176cb243e90 in task-service has been cleanup successfully" Feb 13 15:39:45.738934 containerd[1453]: time="2025-02-13T15:39:45.738796604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-m78g5,Uid:74996f45-87e3-49ee-bffd-dfcfa7bb4a84,Namespace:kube-system,Attempt:5,}" Feb 13 15:39:45.739073 containerd[1453]: time="2025-02-13T15:39:45.739047818Z" level=info msg="TearDown network for sandbox \"993517d1fc77735746720ed6d1052b7f574a391e2aa3dd3c4e1ac176cb243e90\" successfully" Feb 13 15:39:45.739110 containerd[1453]: time="2025-02-13T15:39:45.739087500Z" level=info msg="StopPodSandbox for \"993517d1fc77735746720ed6d1052b7f574a391e2aa3dd3c4e1ac176cb243e90\" returns successfully" Feb 13 15:39:45.739492 containerd[1453]: time="2025-02-13T15:39:45.739469560Z" level=info msg="StopPodSandbox for \"d86444dd64a16082496dff8bcc4382e23ca50d04c358f741bc26b8a391d6c550\"" Feb 13 15:39:45.739576 
containerd[1453]: time="2025-02-13T15:39:45.739560685Z" level=info msg="TearDown network for sandbox \"d86444dd64a16082496dff8bcc4382e23ca50d04c358f741bc26b8a391d6c550\" successfully" Feb 13 15:39:45.739605 containerd[1453]: time="2025-02-13T15:39:45.739574406Z" level=info msg="StopPodSandbox for \"d86444dd64a16082496dff8bcc4382e23ca50d04c358f741bc26b8a391d6c550\" returns successfully" Feb 13 15:39:45.740016 containerd[1453]: time="2025-02-13T15:39:45.739843620Z" level=info msg="StopPodSandbox for \"008c18afaab252b50ccfad53679086b78620acbae6366a596e3e004153702937\"" Feb 13 15:39:45.740016 containerd[1453]: time="2025-02-13T15:39:45.739930305Z" level=info msg="TearDown network for sandbox \"008c18afaab252b50ccfad53679086b78620acbae6366a596e3e004153702937\" successfully" Feb 13 15:39:45.740016 containerd[1453]: time="2025-02-13T15:39:45.739947666Z" level=info msg="StopPodSandbox for \"008c18afaab252b50ccfad53679086b78620acbae6366a596e3e004153702937\" returns successfully" Feb 13 15:39:45.740318 containerd[1453]: time="2025-02-13T15:39:45.740282323Z" level=info msg="StopPodSandbox for \"2782e5c6315792cdee1e8b9d3b9b6dea34d36948f0bc75d3eaf32e17a3b39779\"" Feb 13 15:39:45.740405 containerd[1453]: time="2025-02-13T15:39:45.740379169Z" level=info msg="TearDown network for sandbox \"2782e5c6315792cdee1e8b9d3b9b6dea34d36948f0bc75d3eaf32e17a3b39779\" successfully" Feb 13 15:39:45.740405 containerd[1453]: time="2025-02-13T15:39:45.740401530Z" level=info msg="StopPodSandbox for \"2782e5c6315792cdee1e8b9d3b9b6dea34d36948f0bc75d3eaf32e17a3b39779\" returns successfully" Feb 13 15:39:45.740777 containerd[1453]: time="2025-02-13T15:39:45.740758789Z" level=info msg="StopPodSandbox for \"fc9238e716f505d3af42ce11bca5e656f52405b452391535ed40e201ac9642d7\"" Feb 13 15:39:45.740862 containerd[1453]: time="2025-02-13T15:39:45.740839953Z" level=info msg="TearDown network for sandbox \"fc9238e716f505d3af42ce11bca5e656f52405b452391535ed40e201ac9642d7\" successfully" Feb 13 15:39:45.740893 
containerd[1453]: time="2025-02-13T15:39:45.740860514Z" level=info msg="StopPodSandbox for \"fc9238e716f505d3af42ce11bca5e656f52405b452391535ed40e201ac9642d7\" returns successfully" Feb 13 15:39:45.741713 containerd[1453]: time="2025-02-13T15:39:45.741519870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-655c6976bf-dltfc,Uid:043eaf10-8df2-4749-97a8-7923e4159aba,Namespace:calico-apiserver,Attempt:5,}" Feb 13 15:39:45.744898 kubelet[2589]: I0213 15:39:45.744873 2589 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6eb389d8019cf13e9ce3bc3cf2e15a9d423f6abd94bc80f011fe56a3be87a4b4" Feb 13 15:39:45.745684 containerd[1453]: time="2025-02-13T15:39:45.745588127Z" level=info msg="StopPodSandbox for \"6eb389d8019cf13e9ce3bc3cf2e15a9d423f6abd94bc80f011fe56a3be87a4b4\"" Feb 13 15:39:45.746011 containerd[1453]: time="2025-02-13T15:39:45.745783057Z" level=info msg="Ensure that sandbox 6eb389d8019cf13e9ce3bc3cf2e15a9d423f6abd94bc80f011fe56a3be87a4b4 in task-service has been cleanup successfully" Feb 13 15:39:45.747313 containerd[1453]: time="2025-02-13T15:39:45.747246616Z" level=info msg="TearDown network for sandbox \"6eb389d8019cf13e9ce3bc3cf2e15a9d423f6abd94bc80f011fe56a3be87a4b4\" successfully" Feb 13 15:39:45.747313 containerd[1453]: time="2025-02-13T15:39:45.747309419Z" level=info msg="StopPodSandbox for \"6eb389d8019cf13e9ce3bc3cf2e15a9d423f6abd94bc80f011fe56a3be87a4b4\" returns successfully" Feb 13 15:39:45.747692 containerd[1453]: time="2025-02-13T15:39:45.747667478Z" level=info msg="StopPodSandbox for \"c0109edb01872ea6ddbb1270e554366cbd4b5a05c3fb263f4739b76a9f131107\"" Feb 13 15:39:45.747747 kubelet[2589]: I0213 15:39:45.747692 2589 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a525c6ec978a5a69c0212d35cd2b17b293d11491a62157f6c3ed38426d316ae" Feb 13 15:39:45.747825 containerd[1453]: time="2025-02-13T15:39:45.747778524Z" level=info msg="TearDown network for sandbox 
\"c0109edb01872ea6ddbb1270e554366cbd4b5a05c3fb263f4739b76a9f131107\" successfully" Feb 13 15:39:45.747825 containerd[1453]: time="2025-02-13T15:39:45.747803085Z" level=info msg="StopPodSandbox for \"c0109edb01872ea6ddbb1270e554366cbd4b5a05c3fb263f4739b76a9f131107\" returns successfully" Feb 13 15:39:45.749499 containerd[1453]: time="2025-02-13T15:39:45.749431892Z" level=info msg="StopPodSandbox for \"3a525c6ec978a5a69c0212d35cd2b17b293d11491a62157f6c3ed38426d316ae\"" Feb 13 15:39:45.749783 containerd[1453]: time="2025-02-13T15:39:45.749597341Z" level=info msg="StopPodSandbox for \"ad5a92ae00ae485f6dedd5b8be0955d3e9faa55341d054bfa14ae18b47ab5efd\"" Feb 13 15:39:45.749783 containerd[1453]: time="2025-02-13T15:39:45.749631863Z" level=info msg="Ensure that sandbox 3a525c6ec978a5a69c0212d35cd2b17b293d11491a62157f6c3ed38426d316ae in task-service has been cleanup successfully" Feb 13 15:39:45.749783 containerd[1453]: time="2025-02-13T15:39:45.749686466Z" level=info msg="TearDown network for sandbox \"ad5a92ae00ae485f6dedd5b8be0955d3e9faa55341d054bfa14ae18b47ab5efd\" successfully" Feb 13 15:39:45.749783 containerd[1453]: time="2025-02-13T15:39:45.749705907Z" level=info msg="StopPodSandbox for \"ad5a92ae00ae485f6dedd5b8be0955d3e9faa55341d054bfa14ae18b47ab5efd\" returns successfully" Feb 13 15:39:45.749905 containerd[1453]: time="2025-02-13T15:39:45.749794272Z" level=info msg="TearDown network for sandbox \"3a525c6ec978a5a69c0212d35cd2b17b293d11491a62157f6c3ed38426d316ae\" successfully" Feb 13 15:39:45.749905 containerd[1453]: time="2025-02-13T15:39:45.749818033Z" level=info msg="StopPodSandbox for \"3a525c6ec978a5a69c0212d35cd2b17b293d11491a62157f6c3ed38426d316ae\" returns successfully" Feb 13 15:39:45.750114 containerd[1453]: time="2025-02-13T15:39:45.750091048Z" level=info msg="StopPodSandbox for \"d707feadb9268fa011363e35e4a444418fcd0f37bc28decb570859962b6aafb6\"" Feb 13 15:39:45.750257 containerd[1453]: time="2025-02-13T15:39:45.750233295Z" level=info 
msg="StopPodSandbox for \"50cf6045c8a1ddf321bab1e6e9a3f1fc9f3a297088fc52a6e468af940cbe1c10\"" Feb 13 15:39:45.750435 containerd[1453]: time="2025-02-13T15:39:45.750341141Z" level=info msg="TearDown network for sandbox \"50cf6045c8a1ddf321bab1e6e9a3f1fc9f3a297088fc52a6e468af940cbe1c10\" successfully" Feb 13 15:39:45.750435 containerd[1453]: time="2025-02-13T15:39:45.750359062Z" level=info msg="StopPodSandbox for \"50cf6045c8a1ddf321bab1e6e9a3f1fc9f3a297088fc52a6e468af940cbe1c10\" returns successfully" Feb 13 15:39:45.750854 containerd[1453]: time="2025-02-13T15:39:45.750415945Z" level=info msg="TearDown network for sandbox \"d707feadb9268fa011363e35e4a444418fcd0f37bc28decb570859962b6aafb6\" successfully" Feb 13 15:39:45.750854 containerd[1453]: time="2025-02-13T15:39:45.750657998Z" level=info msg="StopPodSandbox for \"d707feadb9268fa011363e35e4a444418fcd0f37bc28decb570859962b6aafb6\" returns successfully" Feb 13 15:39:45.750854 containerd[1453]: time="2025-02-13T15:39:45.750606355Z" level=info msg="StopPodSandbox for \"323440efec752a3eb15eea898bfaffb5e63aebd2a734fd29578ba11bb1b33be8\"" Feb 13 15:39:45.750854 containerd[1453]: time="2025-02-13T15:39:45.750786725Z" level=info msg="TearDown network for sandbox \"323440efec752a3eb15eea898bfaffb5e63aebd2a734fd29578ba11bb1b33be8\" successfully" Feb 13 15:39:45.750854 containerd[1453]: time="2025-02-13T15:39:45.750796365Z" level=info msg="StopPodSandbox for \"323440efec752a3eb15eea898bfaffb5e63aebd2a734fd29578ba11bb1b33be8\" returns successfully" Feb 13 15:39:45.751226 containerd[1453]: time="2025-02-13T15:39:45.751184466Z" level=info msg="StopPodSandbox for \"7ea59ef2557339d5092fba62282ec76fa969b68ad495c317bb92c97891321274\"" Feb 13 15:39:45.751302 containerd[1453]: time="2025-02-13T15:39:45.751282391Z" level=info msg="TearDown network for sandbox \"7ea59ef2557339d5092fba62282ec76fa969b68ad495c317bb92c97891321274\" successfully" Feb 13 15:39:45.751302 containerd[1453]: time="2025-02-13T15:39:45.751300832Z" level=info 
msg="StopPodSandbox for \"7ea59ef2557339d5092fba62282ec76fa969b68ad495c317bb92c97891321274\" returns successfully" Feb 13 15:39:45.751393 containerd[1453]: time="2025-02-13T15:39:45.751193066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9jl5n,Uid:0c3e32e2-3a7c-428a-a18f-8761ef2b92d8,Namespace:calico-system,Attempt:5,}" Feb 13 15:39:45.751842 containerd[1453]: time="2025-02-13T15:39:45.751688453Z" level=info msg="StopPodSandbox for \"5c6df23e15af22b2b3475eb65ba94a3e2b7eca755b23c91b1d64f81c31949671\"" Feb 13 15:39:45.764462 containerd[1453]: time="2025-02-13T15:39:45.764387331Z" level=info msg="TearDown network for sandbox \"5c6df23e15af22b2b3475eb65ba94a3e2b7eca755b23c91b1d64f81c31949671\" successfully" Feb 13 15:39:45.764462 containerd[1453]: time="2025-02-13T15:39:45.764418053Z" level=info msg="StopPodSandbox for \"5c6df23e15af22b2b3475eb65ba94a3e2b7eca755b23c91b1d64f81c31949671\" returns successfully" Feb 13 15:39:45.765470 containerd[1453]: time="2025-02-13T15:39:45.764919680Z" level=info msg="StopPodSandbox for \"1366802ebaa9a45a0b2a61073284e55df3a597069127539e15d8cfea7f6ea1a6\"" Feb 13 15:39:45.765470 containerd[1453]: time="2025-02-13T15:39:45.765003084Z" level=info msg="TearDown network for sandbox \"1366802ebaa9a45a0b2a61073284e55df3a597069127539e15d8cfea7f6ea1a6\" successfully" Feb 13 15:39:45.765470 containerd[1453]: time="2025-02-13T15:39:45.765014085Z" level=info msg="StopPodSandbox for \"1366802ebaa9a45a0b2a61073284e55df3a597069127539e15d8cfea7f6ea1a6\" returns successfully" Feb 13 15:39:45.765470 containerd[1453]: time="2025-02-13T15:39:45.765461629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-655c6976bf-p7qqq,Uid:95d7909a-cd44-4f88-af35-6de766421d4b,Namespace:calico-apiserver,Attempt:5,}" Feb 13 15:39:45.791623 systemd[1]: Started cri-containerd-db08ac59712247809f17eed47cc5d497b5d5902b602a7b1246df04d618c92e1c.scope - libcontainer container 
db08ac59712247809f17eed47cc5d497b5d5902b602a7b1246df04d618c92e1c. Feb 13 15:39:45.876228 containerd[1453]: time="2025-02-13T15:39:45.876155584Z" level=info msg="StartContainer for \"db08ac59712247809f17eed47cc5d497b5d5902b602a7b1246df04d618c92e1c\" returns successfully" Feb 13 15:39:46.014497 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 13 15:39:46.014618 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Feb 13 15:39:46.070175 containerd[1453]: time="2025-02-13T15:39:46.070119934Z" level=error msg="Failed to destroy network for sandbox \"b01e65f44a177baaf5f7f69dc8b5d4d3eb04ec8d0624506da181fde0fb48605a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:46.071376 containerd[1453]: time="2025-02-13T15:39:46.070798730Z" level=error msg="encountered an error cleaning up failed sandbox \"b01e65f44a177baaf5f7f69dc8b5d4d3eb04ec8d0624506da181fde0fb48605a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:46.071376 containerd[1453]: time="2025-02-13T15:39:46.070883854Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-m78g5,Uid:74996f45-87e3-49ee-bffd-dfcfa7bb4a84,Namespace:kube-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"b01e65f44a177baaf5f7f69dc8b5d4d3eb04ec8d0624506da181fde0fb48605a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:46.071723 kubelet[2589]: E0213 15:39:46.071576 2589 remote_runtime.go:193] "RunPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b01e65f44a177baaf5f7f69dc8b5d4d3eb04ec8d0624506da181fde0fb48605a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:46.071723 kubelet[2589]: E0213 15:39:46.071654 2589 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b01e65f44a177baaf5f7f69dc8b5d4d3eb04ec8d0624506da181fde0fb48605a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-m78g5" Feb 13 15:39:46.071723 kubelet[2589]: E0213 15:39:46.071687 2589 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b01e65f44a177baaf5f7f69dc8b5d4d3eb04ec8d0624506da181fde0fb48605a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-m78g5" Feb 13 15:39:46.072185 kubelet[2589]: E0213 15:39:46.071743 2589 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-m78g5_kube-system(74996f45-87e3-49ee-bffd-dfcfa7bb4a84)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-m78g5_kube-system(74996f45-87e3-49ee-bffd-dfcfa7bb4a84)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b01e65f44a177baaf5f7f69dc8b5d4d3eb04ec8d0624506da181fde0fb48605a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="kube-system/coredns-76f75df574-m78g5" podUID="74996f45-87e3-49ee-bffd-dfcfa7bb4a84" Feb 13 15:39:46.092003 containerd[1453]: time="2025-02-13T15:39:46.090822932Z" level=error msg="Failed to destroy network for sandbox \"101df1b5a2a631f5f92c5c2b39a4a8b148fad8cca3a6df2bdc90e67be4cca1a7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:46.099298 containerd[1453]: time="2025-02-13T15:39:46.095797031Z" level=error msg="encountered an error cleaning up failed sandbox \"101df1b5a2a631f5f92c5c2b39a4a8b148fad8cca3a6df2bdc90e67be4cca1a7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:46.099298 containerd[1453]: time="2025-02-13T15:39:46.095877236Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-tzqqh,Uid:22169313-af53-4d8b-b855-dc02e6d1e640,Namespace:kube-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"101df1b5a2a631f5f92c5c2b39a4a8b148fad8cca3a6df2bdc90e67be4cca1a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:46.099538 kubelet[2589]: E0213 15:39:46.096249 2589 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"101df1b5a2a631f5f92c5c2b39a4a8b148fad8cca3a6df2bdc90e67be4cca1a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:46.099538 kubelet[2589]: E0213 15:39:46.096395 
2589 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"101df1b5a2a631f5f92c5c2b39a4a8b148fad8cca3a6df2bdc90e67be4cca1a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-tzqqh" Feb 13 15:39:46.099538 kubelet[2589]: E0213 15:39:46.096419 2589 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"101df1b5a2a631f5f92c5c2b39a4a8b148fad8cca3a6df2bdc90e67be4cca1a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-tzqqh" Feb 13 15:39:46.099619 kubelet[2589]: E0213 15:39:46.096495 2589 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-tzqqh_kube-system(22169313-af53-4d8b-b855-dc02e6d1e640)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-tzqqh_kube-system(22169313-af53-4d8b-b855-dc02e6d1e640)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"101df1b5a2a631f5f92c5c2b39a4a8b148fad8cca3a6df2bdc90e67be4cca1a7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-tzqqh" podUID="22169313-af53-4d8b-b855-dc02e6d1e640" Feb 13 15:39:46.104151 systemd[1]: Started sshd@9-10.0.0.113:22-10.0.0.1:58692.service - OpenSSH per-connection server daemon (10.0.0.1:58692). 
Feb 13 15:39:46.108343 containerd[1453]: time="2025-02-13T15:39:46.108208078Z" level=error msg="Failed to destroy network for sandbox \"3bf6fab8387c3ed056be1d602ef5aaa086404c97feb59ddfe547152a64bb840b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:46.109135 containerd[1453]: time="2025-02-13T15:39:46.109087883Z" level=error msg="encountered an error cleaning up failed sandbox \"3bf6fab8387c3ed056be1d602ef5aaa086404c97feb59ddfe547152a64bb840b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:46.109423 containerd[1453]: time="2025-02-13T15:39:46.109396420Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67d55cd4f9-c8fqd,Uid:70d03d25-2cd5-469b-b092-195e4bf21efe,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"3bf6fab8387c3ed056be1d602ef5aaa086404c97feb59ddfe547152a64bb840b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:46.111314 kubelet[2589]: E0213 15:39:46.110804 2589 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3bf6fab8387c3ed056be1d602ef5aaa086404c97feb59ddfe547152a64bb840b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:46.111314 kubelet[2589]: E0213 15:39:46.110862 2589 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"3bf6fab8387c3ed056be1d602ef5aaa086404c97feb59ddfe547152a64bb840b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-67d55cd4f9-c8fqd" Feb 13 15:39:46.111314 kubelet[2589]: E0213 15:39:46.110880 2589 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3bf6fab8387c3ed056be1d602ef5aaa086404c97feb59ddfe547152a64bb840b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-67d55cd4f9-c8fqd" Feb 13 15:39:46.112642 kubelet[2589]: E0213 15:39:46.110931 2589 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-67d55cd4f9-c8fqd_calico-system(70d03d25-2cd5-469b-b092-195e4bf21efe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-67d55cd4f9-c8fqd_calico-system(70d03d25-2cd5-469b-b092-195e4bf21efe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3bf6fab8387c3ed056be1d602ef5aaa086404c97feb59ddfe547152a64bb840b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-67d55cd4f9-c8fqd" podUID="70d03d25-2cd5-469b-b092-195e4bf21efe" Feb 13 15:39:46.128010 containerd[1453]: time="2025-02-13T15:39:46.127899103Z" level=error msg="Failed to destroy network for sandbox \"ba7317d179593a0699866447e61ca8280457f65066e5eed9890fb1749b2f40b0\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:46.128467 containerd[1453]: time="2025-02-13T15:39:46.128418930Z" level=error msg="encountered an error cleaning up failed sandbox \"ba7317d179593a0699866447e61ca8280457f65066e5eed9890fb1749b2f40b0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:46.128608 containerd[1453]: time="2025-02-13T15:39:46.128583979Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-655c6976bf-p7qqq,Uid:95d7909a-cd44-4f88-af35-6de766421d4b,Namespace:calico-apiserver,Attempt:5,} failed, error" error="failed to setup network for sandbox \"ba7317d179593a0699866447e61ca8280457f65066e5eed9890fb1749b2f40b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:46.129804 kubelet[2589]: E0213 15:39:46.129389 2589 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba7317d179593a0699866447e61ca8280457f65066e5eed9890fb1749b2f40b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:46.129804 kubelet[2589]: E0213 15:39:46.129483 2589 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba7317d179593a0699866447e61ca8280457f65066e5eed9890fb1749b2f40b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-655c6976bf-p7qqq" Feb 13 15:39:46.129804 kubelet[2589]: E0213 15:39:46.129506 2589 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba7317d179593a0699866447e61ca8280457f65066e5eed9890fb1749b2f40b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-655c6976bf-p7qqq" Feb 13 15:39:46.129960 kubelet[2589]: E0213 15:39:46.129566 2589 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-655c6976bf-p7qqq_calico-apiserver(95d7909a-cd44-4f88-af35-6de766421d4b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-655c6976bf-p7qqq_calico-apiserver(95d7909a-cd44-4f88-af35-6de766421d4b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ba7317d179593a0699866447e61ca8280457f65066e5eed9890fb1749b2f40b0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-655c6976bf-p7qqq" podUID="95d7909a-cd44-4f88-af35-6de766421d4b" Feb 13 15:39:46.133510 containerd[1453]: time="2025-02-13T15:39:46.132427019Z" level=error msg="Failed to destroy network for sandbox \"b40c7454fd2776f6e99fb15ae912eedbe2585ed2954a98ab18eca3781785c734\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:46.134283 containerd[1453]: time="2025-02-13T15:39:46.134235633Z" level=error msg="encountered an error cleaning up failed sandbox 
\"b40c7454fd2776f6e99fb15ae912eedbe2585ed2954a98ab18eca3781785c734\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:46.134380 containerd[1453]: time="2025-02-13T15:39:46.134313717Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-655c6976bf-dltfc,Uid:043eaf10-8df2-4749-97a8-7923e4159aba,Namespace:calico-apiserver,Attempt:5,} failed, error" error="failed to setup network for sandbox \"b40c7454fd2776f6e99fb15ae912eedbe2585ed2954a98ab18eca3781785c734\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:46.135486 kubelet[2589]: E0213 15:39:46.134540 2589 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b40c7454fd2776f6e99fb15ae912eedbe2585ed2954a98ab18eca3781785c734\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:46.135486 kubelet[2589]: E0213 15:39:46.134595 2589 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b40c7454fd2776f6e99fb15ae912eedbe2585ed2954a98ab18eca3781785c734\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-655c6976bf-dltfc" Feb 13 15:39:46.135486 kubelet[2589]: E0213 15:39:46.134620 2589 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"b40c7454fd2776f6e99fb15ae912eedbe2585ed2954a98ab18eca3781785c734\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-655c6976bf-dltfc" Feb 13 15:39:46.135658 kubelet[2589]: E0213 15:39:46.134668 2589 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-655c6976bf-dltfc_calico-apiserver(043eaf10-8df2-4749-97a8-7923e4159aba)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-655c6976bf-dltfc_calico-apiserver(043eaf10-8df2-4749-97a8-7923e4159aba)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b40c7454fd2776f6e99fb15ae912eedbe2585ed2954a98ab18eca3781785c734\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-655c6976bf-dltfc" podUID="043eaf10-8df2-4749-97a8-7923e4159aba" Feb 13 15:39:46.174920 sshd[4785]: Accepted publickey for core from 10.0.0.1 port 58692 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY Feb 13 15:39:46.176948 sshd-session[4785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:39:46.183646 systemd-logind[1429]: New session 10 of user core. Feb 13 15:39:46.191641 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 15:39:46.278407 systemd[1]: run-netns-cni\x2d6580b254\x2d6f1c\x2dc632\x2db7d8\x2d2b1abca0f571.mount: Deactivated successfully. Feb 13 15:39:46.280013 systemd[1]: run-netns-cni\x2d5fb8362b\x2d9bd8\x2d67af\x2d12f6\x2de7b8fe2e4303.mount: Deactivated successfully. Feb 13 15:39:46.280080 systemd[1]: run-netns-cni\x2dca5a19cb\x2db377\x2dccaf\x2d74e7\x2d47c03ab53df0.mount: Deactivated successfully. 
Feb 13 15:39:46.280130 systemd[1]: run-netns-cni\x2d2f522c9e\x2da277\x2df6e3\x2dea6a\x2d5aaa43269fca.mount: Deactivated successfully. Feb 13 15:39:46.280173 systemd[1]: run-netns-cni\x2d18b15266\x2d5ac5\x2d863b\x2d7127\x2df0c076137f6f.mount: Deactivated successfully. Feb 13 15:39:46.280220 systemd[1]: run-netns-cni\x2d13b59a44\x2da18f\x2d272d\x2deec0\x2d24fd6a87763d.mount: Deactivated successfully. Feb 13 15:39:46.363635 sshd[4818]: Connection closed by 10.0.0.1 port 58692 Feb 13 15:39:46.365636 containerd[1453]: 2025-02-13 15:39:46.217 [INFO][4796] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="60cf06410fd9c9ca5c673d26685d4894a1eb222be4019b1ca5b2eecaadc7d1fc" Feb 13 15:39:46.365636 containerd[1453]: 2025-02-13 15:39:46.218 [INFO][4796] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="60cf06410fd9c9ca5c673d26685d4894a1eb222be4019b1ca5b2eecaadc7d1fc" iface="eth0" netns="/var/run/netns/cni-64163747-8418-90f1-e837-4edfb36221cb" Feb 13 15:39:46.365636 containerd[1453]: 2025-02-13 15:39:46.218 [INFO][4796] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="60cf06410fd9c9ca5c673d26685d4894a1eb222be4019b1ca5b2eecaadc7d1fc" iface="eth0" netns="/var/run/netns/cni-64163747-8418-90f1-e837-4edfb36221cb" Feb 13 15:39:46.365636 containerd[1453]: 2025-02-13 15:39:46.224 [INFO][4796] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="60cf06410fd9c9ca5c673d26685d4894a1eb222be4019b1ca5b2eecaadc7d1fc" iface="eth0" netns="/var/run/netns/cni-64163747-8418-90f1-e837-4edfb36221cb" Feb 13 15:39:46.365636 containerd[1453]: 2025-02-13 15:39:46.224 [INFO][4796] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="60cf06410fd9c9ca5c673d26685d4894a1eb222be4019b1ca5b2eecaadc7d1fc" Feb 13 15:39:46.365636 containerd[1453]: 2025-02-13 15:39:46.224 [INFO][4796] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="60cf06410fd9c9ca5c673d26685d4894a1eb222be4019b1ca5b2eecaadc7d1fc" Feb 13 15:39:46.365636 containerd[1453]: 2025-02-13 15:39:46.342 [INFO][4819] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="60cf06410fd9c9ca5c673d26685d4894a1eb222be4019b1ca5b2eecaadc7d1fc" HandleID="k8s-pod-network.60cf06410fd9c9ca5c673d26685d4894a1eb222be4019b1ca5b2eecaadc7d1fc" Workload="localhost-k8s-csi--node--driver--9jl5n-eth0" Feb 13 15:39:46.365636 containerd[1453]: 2025-02-13 15:39:46.342 [INFO][4819] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:39:46.365636 containerd[1453]: 2025-02-13 15:39:46.342 [INFO][4819] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 15:39:46.365636 containerd[1453]: 2025-02-13 15:39:46.354 [WARNING][4819] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="60cf06410fd9c9ca5c673d26685d4894a1eb222be4019b1ca5b2eecaadc7d1fc" HandleID="k8s-pod-network.60cf06410fd9c9ca5c673d26685d4894a1eb222be4019b1ca5b2eecaadc7d1fc" Workload="localhost-k8s-csi--node--driver--9jl5n-eth0" Feb 13 15:39:46.365636 containerd[1453]: 2025-02-13 15:39:46.354 [INFO][4819] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="60cf06410fd9c9ca5c673d26685d4894a1eb222be4019b1ca5b2eecaadc7d1fc" HandleID="k8s-pod-network.60cf06410fd9c9ca5c673d26685d4894a1eb222be4019b1ca5b2eecaadc7d1fc" Workload="localhost-k8s-csi--node--driver--9jl5n-eth0" Feb 13 15:39:46.365636 containerd[1453]: 2025-02-13 15:39:46.359 [INFO][4819] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 15:39:46.365636 containerd[1453]: 2025-02-13 15:39:46.362 [INFO][4796] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="60cf06410fd9c9ca5c673d26685d4894a1eb222be4019b1ca5b2eecaadc7d1fc" Feb 13 15:39:46.365533 sshd-session[4785]: pam_unix(sshd:session): session closed for user core Feb 13 15:39:46.372719 systemd[1]: run-netns-cni\x2d64163747\x2d8418\x2d90f1\x2de837\x2d4edfb36221cb.mount: Deactivated successfully. Feb 13 15:39:46.372819 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-60cf06410fd9c9ca5c673d26685d4894a1eb222be4019b1ca5b2eecaadc7d1fc-shm.mount: Deactivated successfully. Feb 13 15:39:46.373936 systemd[1]: sshd@9-10.0.0.113:22-10.0.0.1:58692.service: Deactivated successfully. Feb 13 15:39:46.375604 systemd[1]: session-10.scope: Deactivated successfully. 
Feb 13 15:39:46.376138 containerd[1453]: time="2025-02-13T15:39:46.376024663Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9jl5n,Uid:0c3e32e2-3a7c-428a-a18f-8761ef2b92d8,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"60cf06410fd9c9ca5c673d26685d4894a1eb222be4019b1ca5b2eecaadc7d1fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:46.376380 kubelet[2589]: E0213 15:39:46.376340 2589 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60cf06410fd9c9ca5c673d26685d4894a1eb222be4019b1ca5b2eecaadc7d1fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:39:46.376431 kubelet[2589]: E0213 15:39:46.376414 2589 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60cf06410fd9c9ca5c673d26685d4894a1eb222be4019b1ca5b2eecaadc7d1fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9jl5n" Feb 13 15:39:46.376507 kubelet[2589]: E0213 15:39:46.376436 2589 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60cf06410fd9c9ca5c673d26685d4894a1eb222be4019b1ca5b2eecaadc7d1fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9jl5n" Feb 13 15:39:46.376554 
kubelet[2589]: E0213 15:39:46.376536 2589 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9jl5n_calico-system(0c3e32e2-3a7c-428a-a18f-8761ef2b92d8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9jl5n_calico-system(0c3e32e2-3a7c-428a-a18f-8761ef2b92d8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"60cf06410fd9c9ca5c673d26685d4894a1eb222be4019b1ca5b2eecaadc7d1fc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9jl5n" podUID="0c3e32e2-3a7c-428a-a18f-8761ef2b92d8" Feb 13 15:39:46.378056 systemd-logind[1429]: Session 10 logged out. Waiting for processes to exit. Feb 13 15:39:46.385783 systemd[1]: Started sshd@10-10.0.0.113:22-10.0.0.1:58696.service - OpenSSH per-connection server daemon (10.0.0.1:58696). Feb 13 15:39:46.386696 systemd-logind[1429]: Removed session 10. Feb 13 15:39:46.422592 sshd[4851]: Accepted publickey for core from 10.0.0.1 port 58696 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY Feb 13 15:39:46.423914 sshd-session[4851]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:39:46.428143 systemd-logind[1429]: New session 11 of user core. Feb 13 15:39:46.439650 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 15:39:46.581665 sshd[4853]: Connection closed by 10.0.0.1 port 58696 Feb 13 15:39:46.582112 sshd-session[4851]: pam_unix(sshd:session): session closed for user core Feb 13 15:39:46.590045 systemd[1]: sshd@10-10.0.0.113:22-10.0.0.1:58696.service: Deactivated successfully. Feb 13 15:39:46.591988 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 15:39:46.594314 systemd-logind[1429]: Session 11 logged out. Waiting for processes to exit. 
Feb 13 15:39:46.600954 systemd[1]: Started sshd@11-10.0.0.113:22-10.0.0.1:58704.service - OpenSSH per-connection server daemon (10.0.0.1:58704). Feb 13 15:39:46.604638 systemd-logind[1429]: Removed session 11. Feb 13 15:39:46.641939 sshd[4864]: Accepted publickey for core from 10.0.0.1 port 58704 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY Feb 13 15:39:46.643503 sshd-session[4864]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:39:46.648217 systemd-logind[1429]: New session 12 of user core. Feb 13 15:39:46.655630 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 15:39:46.754909 kubelet[2589]: I0213 15:39:46.754878 2589 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba7317d179593a0699866447e61ca8280457f65066e5eed9890fb1749b2f40b0" Feb 13 15:39:46.755967 containerd[1453]: time="2025-02-13T15:39:46.755919445Z" level=info msg="StopPodSandbox for \"ba7317d179593a0699866447e61ca8280457f65066e5eed9890fb1749b2f40b0\"" Feb 13 15:39:46.756203 containerd[1453]: time="2025-02-13T15:39:46.756124736Z" level=info msg="Ensure that sandbox ba7317d179593a0699866447e61ca8280457f65066e5eed9890fb1749b2f40b0 in task-service has been cleanup successfully" Feb 13 15:39:46.757878 containerd[1453]: time="2025-02-13T15:39:46.756495795Z" level=info msg="TearDown network for sandbox \"ba7317d179593a0699866447e61ca8280457f65066e5eed9890fb1749b2f40b0\" successfully" Feb 13 15:39:46.757878 containerd[1453]: time="2025-02-13T15:39:46.756741328Z" level=info msg="StopPodSandbox for \"ba7317d179593a0699866447e61ca8280457f65066e5eed9890fb1749b2f40b0\" returns successfully" Feb 13 15:39:46.757878 containerd[1453]: time="2025-02-13T15:39:46.757676857Z" level=info msg="StopPodSandbox for \"3a525c6ec978a5a69c0212d35cd2b17b293d11491a62157f6c3ed38426d316ae\"" Feb 13 15:39:46.757878 containerd[1453]: time="2025-02-13T15:39:46.757754301Z" level=info msg="TearDown network for sandbox 
\"3a525c6ec978a5a69c0212d35cd2b17b293d11491a62157f6c3ed38426d316ae\" successfully" Feb 13 15:39:46.757878 containerd[1453]: time="2025-02-13T15:39:46.757764541Z" level=info msg="StopPodSandbox for \"3a525c6ec978a5a69c0212d35cd2b17b293d11491a62157f6c3ed38426d316ae\" returns successfully" Feb 13 15:39:46.758218 systemd[1]: run-netns-cni\x2dd06a08f8\x2df1fe\x2db301\x2d9334\x2dd65a6ddf280d.mount: Deactivated successfully. Feb 13 15:39:46.760103 containerd[1453]: time="2025-02-13T15:39:46.759741364Z" level=info msg="StopPodSandbox for \"d707feadb9268fa011363e35e4a444418fcd0f37bc28decb570859962b6aafb6\"" Feb 13 15:39:46.760103 containerd[1453]: time="2025-02-13T15:39:46.759853250Z" level=info msg="TearDown network for sandbox \"d707feadb9268fa011363e35e4a444418fcd0f37bc28decb570859962b6aafb6\" successfully" Feb 13 15:39:46.760103 containerd[1453]: time="2025-02-13T15:39:46.759863851Z" level=info msg="StopPodSandbox for \"d707feadb9268fa011363e35e4a444418fcd0f37bc28decb570859962b6aafb6\" returns successfully" Feb 13 15:39:46.760854 containerd[1453]: time="2025-02-13T15:39:46.760794619Z" level=info msg="StopPodSandbox for \"7ea59ef2557339d5092fba62282ec76fa969b68ad495c317bb92c97891321274\"" Feb 13 15:39:46.760948 containerd[1453]: time="2025-02-13T15:39:46.760929346Z" level=info msg="TearDown network for sandbox \"7ea59ef2557339d5092fba62282ec76fa969b68ad495c317bb92c97891321274\" successfully" Feb 13 15:39:46.760948 containerd[1453]: time="2025-02-13T15:39:46.760945747Z" level=info msg="StopPodSandbox for \"7ea59ef2557339d5092fba62282ec76fa969b68ad495c317bb92c97891321274\" returns successfully" Feb 13 15:39:46.761799 containerd[1453]: time="2025-02-13T15:39:46.761729908Z" level=info msg="StopPodSandbox for \"5c6df23e15af22b2b3475eb65ba94a3e2b7eca755b23c91b1d64f81c31949671\"" Feb 13 15:39:46.761921 containerd[1453]: time="2025-02-13T15:39:46.761818752Z" level=info msg="TearDown network for sandbox \"5c6df23e15af22b2b3475eb65ba94a3e2b7eca755b23c91b1d64f81c31949671\" 
successfully" Feb 13 15:39:46.761969 containerd[1453]: time="2025-02-13T15:39:46.761922238Z" level=info msg="StopPodSandbox for \"5c6df23e15af22b2b3475eb65ba94a3e2b7eca755b23c91b1d64f81c31949671\" returns successfully" Feb 13 15:39:46.762725 containerd[1453]: time="2025-02-13T15:39:46.762303098Z" level=info msg="StopPodSandbox for \"1366802ebaa9a45a0b2a61073284e55df3a597069127539e15d8cfea7f6ea1a6\"" Feb 13 15:39:46.762725 containerd[1453]: time="2025-02-13T15:39:46.762415303Z" level=info msg="TearDown network for sandbox \"1366802ebaa9a45a0b2a61073284e55df3a597069127539e15d8cfea7f6ea1a6\" successfully" Feb 13 15:39:46.762725 containerd[1453]: time="2025-02-13T15:39:46.762426744Z" level=info msg="StopPodSandbox for \"1366802ebaa9a45a0b2a61073284e55df3a597069127539e15d8cfea7f6ea1a6\" returns successfully" Feb 13 15:39:46.763760 containerd[1453]: time="2025-02-13T15:39:46.763364553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-655c6976bf-p7qqq,Uid:95d7909a-cd44-4f88-af35-6de766421d4b,Namespace:calico-apiserver,Attempt:6,}" Feb 13 15:39:46.768142 kubelet[2589]: I0213 15:39:46.768114 2589 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3bf6fab8387c3ed056be1d602ef5aaa086404c97feb59ddfe547152a64bb840b" Feb 13 15:39:46.768805 containerd[1453]: time="2025-02-13T15:39:46.768643508Z" level=info msg="StopPodSandbox for \"3bf6fab8387c3ed056be1d602ef5aaa086404c97feb59ddfe547152a64bb840b\"" Feb 13 15:39:46.769344 containerd[1453]: time="2025-02-13T15:39:46.769096851Z" level=info msg="Ensure that sandbox 3bf6fab8387c3ed056be1d602ef5aaa086404c97feb59ddfe547152a64bb840b in task-service has been cleanup successfully" Feb 13 15:39:46.769594 containerd[1453]: time="2025-02-13T15:39:46.769523274Z" level=info msg="TearDown network for sandbox \"3bf6fab8387c3ed056be1d602ef5aaa086404c97feb59ddfe547152a64bb840b\" successfully" Feb 13 15:39:46.769594 containerd[1453]: time="2025-02-13T15:39:46.769542154Z" level=info 
msg="StopPodSandbox for \"3bf6fab8387c3ed056be1d602ef5aaa086404c97feb59ddfe547152a64bb840b\" returns successfully" Feb 13 15:39:46.770316 containerd[1453]: time="2025-02-13T15:39:46.770278273Z" level=info msg="StopPodSandbox for \"39a88ace6d7473b18090855153397f0fb6353c83591be1df80145ca5ac8b441d\"" Feb 13 15:39:46.770484 containerd[1453]: time="2025-02-13T15:39:46.770374878Z" level=info msg="TearDown network for sandbox \"39a88ace6d7473b18090855153397f0fb6353c83591be1df80145ca5ac8b441d\" successfully" Feb 13 15:39:46.770484 containerd[1453]: time="2025-02-13T15:39:46.770399359Z" level=info msg="StopPodSandbox for \"39a88ace6d7473b18090855153397f0fb6353c83591be1df80145ca5ac8b441d\" returns successfully" Feb 13 15:39:46.771186 containerd[1453]: time="2025-02-13T15:39:46.771061514Z" level=info msg="StopPodSandbox for \"83168c2b6aaee4a8905d06ddc03e782efd545d713d6d0b790b340a1e7610b3b8\"" Feb 13 15:39:46.771186 containerd[1453]: time="2025-02-13T15:39:46.771184400Z" level=info msg="TearDown network for sandbox \"83168c2b6aaee4a8905d06ddc03e782efd545d713d6d0b790b340a1e7610b3b8\" successfully" Feb 13 15:39:46.771254 containerd[1453]: time="2025-02-13T15:39:46.771196481Z" level=info msg="StopPodSandbox for \"83168c2b6aaee4a8905d06ddc03e782efd545d713d6d0b790b340a1e7610b3b8\" returns successfully" Feb 13 15:39:46.771814 containerd[1453]: time="2025-02-13T15:39:46.771682106Z" level=info msg="StopPodSandbox for \"ee6732b5b77113a6205089d42119dad4c5eee465a9929a1fc3fc3e114a8fca23\"" Feb 13 15:39:46.771814 containerd[1453]: time="2025-02-13T15:39:46.771788711Z" level=info msg="TearDown network for sandbox \"ee6732b5b77113a6205089d42119dad4c5eee465a9929a1fc3fc3e114a8fca23\" successfully" Feb 13 15:39:46.771814 containerd[1453]: time="2025-02-13T15:39:46.771801472Z" level=info msg="StopPodSandbox for \"ee6732b5b77113a6205089d42119dad4c5eee465a9929a1fc3fc3e114a8fca23\" returns successfully" Feb 13 15:39:46.771744 systemd[1]: 
run-netns-cni\x2d6152aab8\x2d1691\x2d3970\x2dfa74\x2d14598ec95a1d.mount: Deactivated successfully. Feb 13 15:39:46.773246 containerd[1453]: time="2025-02-13T15:39:46.772996534Z" level=info msg="StopPodSandbox for \"c7b8464e6806d7515d6d160d433c08824e6e2de7760100550e98ff9d2b183f53\"" Feb 13 15:39:46.774509 kubelet[2589]: I0213 15:39:46.773596 2589 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="101df1b5a2a631f5f92c5c2b39a4a8b148fad8cca3a6df2bdc90e67be4cca1a7" Feb 13 15:39:46.774581 containerd[1453]: time="2025-02-13T15:39:46.774175996Z" level=info msg="StopPodSandbox for \"101df1b5a2a631f5f92c5c2b39a4a8b148fad8cca3a6df2bdc90e67be4cca1a7\"" Feb 13 15:39:46.774581 containerd[1453]: time="2025-02-13T15:39:46.774332564Z" level=info msg="Ensure that sandbox 101df1b5a2a631f5f92c5c2b39a4a8b148fad8cca3a6df2bdc90e67be4cca1a7 in task-service has been cleanup successfully" Feb 13 15:39:46.775583 containerd[1453]: time="2025-02-13T15:39:46.775550027Z" level=info msg="TearDown network for sandbox \"c7b8464e6806d7515d6d160d433c08824e6e2de7760100550e98ff9d2b183f53\" successfully" Feb 13 15:39:46.775583 containerd[1453]: time="2025-02-13T15:39:46.775581189Z" level=info msg="StopPodSandbox for \"c7b8464e6806d7515d6d160d433c08824e6e2de7760100550e98ff9d2b183f53\" returns successfully" Feb 13 15:39:46.776327 containerd[1453]: time="2025-02-13T15:39:46.776262344Z" level=info msg="StopPodSandbox for \"04e549bcbe5ef51d7da81536fa57c286356e230b0a4a8de09f3b8b24843dcce6\"" Feb 13 15:39:46.776395 containerd[1453]: time="2025-02-13T15:39:46.776346589Z" level=info msg="TearDown network for sandbox \"04e549bcbe5ef51d7da81536fa57c286356e230b0a4a8de09f3b8b24843dcce6\" successfully" Feb 13 15:39:46.776395 containerd[1453]: time="2025-02-13T15:39:46.776357389Z" level=info msg="StopPodSandbox for \"04e549bcbe5ef51d7da81536fa57c286356e230b0a4a8de09f3b8b24843dcce6\" returns successfully" Feb 13 15:39:46.776601 containerd[1453]: time="2025-02-13T15:39:46.776502197Z" 
level=info msg="TearDown network for sandbox \"101df1b5a2a631f5f92c5c2b39a4a8b148fad8cca3a6df2bdc90e67be4cca1a7\" successfully" Feb 13 15:39:46.776601 containerd[1453]: time="2025-02-13T15:39:46.776525718Z" level=info msg="StopPodSandbox for \"101df1b5a2a631f5f92c5c2b39a4a8b148fad8cca3a6df2bdc90e67be4cca1a7\" returns successfully" Feb 13 15:39:46.776788 systemd[1]: run-netns-cni\x2dc20b2a50\x2dbdd2\x2d1d65\x2d5e2f\x2d59c65cf19963.mount: Deactivated successfully. Feb 13 15:39:46.777252 containerd[1453]: time="2025-02-13T15:39:46.776857535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67d55cd4f9-c8fqd,Uid:70d03d25-2cd5-469b-b092-195e4bf21efe,Namespace:calico-system,Attempt:6,}" Feb 13 15:39:46.777751 containerd[1453]: time="2025-02-13T15:39:46.777724861Z" level=info msg="StopPodSandbox for \"eafde1813a5db1480e193a3196ba6983d3672af483e22d08b7ea3d97fbf2a9c2\"" Feb 13 15:39:46.777921 containerd[1453]: time="2025-02-13T15:39:46.777881549Z" level=info msg="TearDown network for sandbox \"eafde1813a5db1480e193a3196ba6983d3672af483e22d08b7ea3d97fbf2a9c2\" successfully" Feb 13 15:39:46.778303 containerd[1453]: time="2025-02-13T15:39:46.778276609Z" level=info msg="StopPodSandbox for \"eafde1813a5db1480e193a3196ba6983d3672af483e22d08b7ea3d97fbf2a9c2\" returns successfully" Feb 13 15:39:46.779460 kubelet[2589]: E0213 15:39:46.779395 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:39:46.779724 containerd[1453]: time="2025-02-13T15:39:46.779692043Z" level=info msg="StopPodSandbox for \"71de9d911f60af43c2548ea926c32ea6d591d73f40411c67079cd659cd718877\"" Feb 13 15:39:46.779890 containerd[1453]: time="2025-02-13T15:39:46.779846811Z" level=info msg="TearDown network for sandbox \"71de9d911f60af43c2548ea926c32ea6d591d73f40411c67079cd659cd718877\" successfully" Feb 13 15:39:46.779924 containerd[1453]: 
time="2025-02-13T15:39:46.779889733Z" level=info msg="StopPodSandbox for \"71de9d911f60af43c2548ea926c32ea6d591d73f40411c67079cd659cd718877\" returns successfully" Feb 13 15:39:46.780968 containerd[1453]: time="2025-02-13T15:39:46.780904986Z" level=info msg="StopPodSandbox for \"5b053525623b9497da7e1b2e770d86673655bd7d93820efc534aa58222203703\"" Feb 13 15:39:46.781042 containerd[1453]: time="2025-02-13T15:39:46.781012632Z" level=info msg="TearDown network for sandbox \"5b053525623b9497da7e1b2e770d86673655bd7d93820efc534aa58222203703\" successfully" Feb 13 15:39:46.781042 containerd[1453]: time="2025-02-13T15:39:46.781024192Z" level=info msg="StopPodSandbox for \"5b053525623b9497da7e1b2e770d86673655bd7d93820efc534aa58222203703\" returns successfully" Feb 13 15:39:46.785559 containerd[1453]: time="2025-02-13T15:39:46.783342673Z" level=info msg="StopPodSandbox for \"5877426208bb0ec6f3b0bd6273d2a61f178cd37093bfdab85cd100b691fb5424\"" Feb 13 15:39:46.785559 containerd[1453]: time="2025-02-13T15:39:46.783481280Z" level=info msg="TearDown network for sandbox \"5877426208bb0ec6f3b0bd6273d2a61f178cd37093bfdab85cd100b691fb5424\" successfully" Feb 13 15:39:46.785559 containerd[1453]: time="2025-02-13T15:39:46.783493561Z" level=info msg="StopPodSandbox for \"5877426208bb0ec6f3b0bd6273d2a61f178cd37093bfdab85cd100b691fb5424\" returns successfully" Feb 13 15:39:46.788259 containerd[1453]: time="2025-02-13T15:39:46.787985235Z" level=info msg="StopPodSandbox for \"cda662325bbcccb4b3b73cb9c1a4ba575d4900e6a01847f7536483bed3456eb0\"" Feb 13 15:39:46.788259 containerd[1453]: time="2025-02-13T15:39:46.788118282Z" level=info msg="TearDown network for sandbox \"cda662325bbcccb4b3b73cb9c1a4ba575d4900e6a01847f7536483bed3456eb0\" successfully" Feb 13 15:39:46.788259 containerd[1453]: time="2025-02-13T15:39:46.788128642Z" level=info msg="StopPodSandbox for \"cda662325bbcccb4b3b73cb9c1a4ba575d4900e6a01847f7536483bed3456eb0\" returns successfully" Feb 13 15:39:46.790440 kubelet[2589]: E0213 
15:39:46.788841 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:39:46.791269 containerd[1453]: time="2025-02-13T15:39:46.791051795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-tzqqh,Uid:22169313-af53-4d8b-b855-dc02e6d1e640,Namespace:kube-system,Attempt:6,}" Feb 13 15:39:46.791871 sshd[4866]: Connection closed by 10.0.0.1 port 58704 Feb 13 15:39:46.792671 sshd-session[4864]: pam_unix(sshd:session): session closed for user core Feb 13 15:39:46.801466 kubelet[2589]: I0213 15:39:46.800768 2589 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-8q5jw" podStartSLOduration=1.869173429 podStartE2EDuration="14.800721058s" podCreationTimestamp="2025-02-13 15:39:32 +0000 UTC" firstStartedPulling="2025-02-13 15:39:32.460644535 +0000 UTC m=+27.056008425" lastFinishedPulling="2025-02-13 15:39:45.392192164 +0000 UTC m=+39.987556054" observedRunningTime="2025-02-13 15:39:46.798578906 +0000 UTC m=+41.393942796" watchObservedRunningTime="2025-02-13 15:39:46.800721058 +0000 UTC m=+41.396084948" Feb 13 15:39:46.801466 kubelet[2589]: I0213 15:39:46.801271 2589 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b01e65f44a177baaf5f7f69dc8b5d4d3eb04ec8d0624506da181fde0fb48605a" Feb 13 15:39:46.801810 systemd[1]: sshd@11-10.0.0.113:22-10.0.0.1:58704.service: Deactivated successfully. Feb 13 15:39:46.803429 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 15:39:46.806057 systemd-logind[1429]: Session 12 logged out. Waiting for processes to exit. 
Feb 13 15:39:46.809466 containerd[1453]: time="2025-02-13T15:39:46.809065853Z" level=info msg="StopPodSandbox for \"b01e65f44a177baaf5f7f69dc8b5d4d3eb04ec8d0624506da181fde0fb48605a\"" Feb 13 15:39:46.809672 containerd[1453]: time="2025-02-13T15:39:46.809626162Z" level=info msg="Ensure that sandbox b01e65f44a177baaf5f7f69dc8b5d4d3eb04ec8d0624506da181fde0fb48605a in task-service has been cleanup successfully" Feb 13 15:39:46.810324 containerd[1453]: time="2025-02-13T15:39:46.810129748Z" level=info msg="TearDown network for sandbox \"b01e65f44a177baaf5f7f69dc8b5d4d3eb04ec8d0624506da181fde0fb48605a\" successfully" Feb 13 15:39:46.810324 containerd[1453]: time="2025-02-13T15:39:46.810148429Z" level=info msg="StopPodSandbox for \"b01e65f44a177baaf5f7f69dc8b5d4d3eb04ec8d0624506da181fde0fb48605a\" returns successfully" Feb 13 15:39:46.810820 systemd-logind[1429]: Removed session 12. Feb 13 15:39:46.813436 containerd[1453]: time="2025-02-13T15:39:46.813370797Z" level=info msg="StopPodSandbox for \"30a389c08acdff94511aaf20ccbee0dfab1cf917dd7515609cef1dd2a3cb4d98\"" Feb 13 15:39:46.814074 containerd[1453]: time="2025-02-13T15:39:46.814039592Z" level=info msg="TearDown network for sandbox \"30a389c08acdff94511aaf20ccbee0dfab1cf917dd7515609cef1dd2a3cb4d98\" successfully" Feb 13 15:39:46.814101 containerd[1453]: time="2025-02-13T15:39:46.814063553Z" level=info msg="StopPodSandbox for \"30a389c08acdff94511aaf20ccbee0dfab1cf917dd7515609cef1dd2a3cb4d98\" returns successfully" Feb 13 15:39:46.815452 containerd[1453]: time="2025-02-13T15:39:46.815379421Z" level=info msg="StopPodSandbox for \"757cc0d2ff1044664c04e58b47b1a5006b422f31425ed9c06bacf1bc76f6ad3e\"" Feb 13 15:39:46.815554 containerd[1453]: time="2025-02-13T15:39:46.815535629Z" level=info msg="TearDown network for sandbox \"757cc0d2ff1044664c04e58b47b1a5006b422f31425ed9c06bacf1bc76f6ad3e\" successfully" Feb 13 15:39:46.816935 containerd[1453]: time="2025-02-13T15:39:46.816896260Z" level=info msg="StopPodSandbox for 
\"757cc0d2ff1044664c04e58b47b1a5006b422f31425ed9c06bacf1bc76f6ad3e\" returns successfully" Feb 13 15:39:46.827686 containerd[1453]: time="2025-02-13T15:39:46.827536494Z" level=info msg="StopPodSandbox for \"396a2480d2093d2717a34e8bc7fb57dabb533a866511490272198d8e3429975d\"" Feb 13 15:39:46.827786 containerd[1453]: time="2025-02-13T15:39:46.827686382Z" level=info msg="TearDown network for sandbox \"396a2480d2093d2717a34e8bc7fb57dabb533a866511490272198d8e3429975d\" successfully" Feb 13 15:39:46.827786 containerd[1453]: time="2025-02-13T15:39:46.827709623Z" level=info msg="StopPodSandbox for \"396a2480d2093d2717a34e8bc7fb57dabb533a866511490272198d8e3429975d\" returns successfully" Feb 13 15:39:46.829072 kubelet[2589]: I0213 15:39:46.828998 2589 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b40c7454fd2776f6e99fb15ae912eedbe2585ed2954a98ab18eca3781785c734" Feb 13 15:39:46.849172 containerd[1453]: time="2025-02-13T15:39:46.846945345Z" level=info msg="StopPodSandbox for \"b40c7454fd2776f6e99fb15ae912eedbe2585ed2954a98ab18eca3781785c734\"" Feb 13 15:39:46.851220 containerd[1453]: time="2025-02-13T15:39:46.847393088Z" level=info msg="StopPodSandbox for \"6eb389d8019cf13e9ce3bc3cf2e15a9d423f6abd94bc80f011fe56a3be87a4b4\"" Feb 13 15:39:46.851220 containerd[1453]: time="2025-02-13T15:39:46.850673579Z" level=info msg="TearDown network for sandbox \"6eb389d8019cf13e9ce3bc3cf2e15a9d423f6abd94bc80f011fe56a3be87a4b4\" successfully" Feb 13 15:39:46.851220 containerd[1453]: time="2025-02-13T15:39:46.850686940Z" level=info msg="StopPodSandbox for \"6eb389d8019cf13e9ce3bc3cf2e15a9d423f6abd94bc80f011fe56a3be87a4b4\" returns successfully" Feb 13 15:39:46.851220 containerd[1453]: time="2025-02-13T15:39:46.847619900Z" level=info msg="StopPodSandbox for \"49e61bdc9174d5527e1c7974cc761bf4ead3dd292c49b47e6a610d38ca0b74ec\"" Feb 13 15:39:46.851220 containerd[1453]: time="2025-02-13T15:39:46.850830547Z" level=info msg="TearDown network for sandbox 
\"49e61bdc9174d5527e1c7974cc761bf4ead3dd292c49b47e6a610d38ca0b74ec\" successfully" Feb 13 15:39:46.851220 containerd[1453]: time="2025-02-13T15:39:46.850840868Z" level=info msg="StopPodSandbox for \"49e61bdc9174d5527e1c7974cc761bf4ead3dd292c49b47e6a610d38ca0b74ec\" returns successfully" Feb 13 15:39:46.851220 containerd[1453]: time="2025-02-13T15:39:46.851003756Z" level=info msg="Ensure that sandbox b40c7454fd2776f6e99fb15ae912eedbe2585ed2954a98ab18eca3781785c734 in task-service has been cleanup successfully" Feb 13 15:39:46.852480 containerd[1453]: time="2025-02-13T15:39:46.852236221Z" level=info msg="TearDown network for sandbox \"b40c7454fd2776f6e99fb15ae912eedbe2585ed2954a98ab18eca3781785c734\" successfully" Feb 13 15:39:46.852608 containerd[1453]: time="2025-02-13T15:39:46.852584639Z" level=info msg="StopPodSandbox for \"b40c7454fd2776f6e99fb15ae912eedbe2585ed2954a98ab18eca3781785c734\" returns successfully" Feb 13 15:39:46.855962 containerd[1453]: time="2025-02-13T15:39:46.855919532Z" level=info msg="StopPodSandbox for \"bbe0e585cf74fab1708cc4fe366a97dc7b448d8d7a1e597df3010cf40cbe9ec3\"" Feb 13 15:39:46.856067 containerd[1453]: time="2025-02-13T15:39:46.856050299Z" level=info msg="TearDown network for sandbox \"bbe0e585cf74fab1708cc4fe366a97dc7b448d8d7a1e597df3010cf40cbe9ec3\" successfully" Feb 13 15:39:46.856172 containerd[1453]: time="2025-02-13T15:39:46.856065380Z" level=info msg="StopPodSandbox for \"bbe0e585cf74fab1708cc4fe366a97dc7b448d8d7a1e597df3010cf40cbe9ec3\" returns successfully" Feb 13 15:39:46.856172 containerd[1453]: time="2025-02-13T15:39:46.856154585Z" level=info msg="StopPodSandbox for \"c0109edb01872ea6ddbb1270e554366cbd4b5a05c3fb263f4739b76a9f131107\"" Feb 13 15:39:46.856233 containerd[1453]: time="2025-02-13T15:39:46.856215188Z" level=info msg="TearDown network for sandbox \"c0109edb01872ea6ddbb1270e554366cbd4b5a05c3fb263f4739b76a9f131107\" successfully" Feb 13 15:39:46.856264 containerd[1453]: time="2025-02-13T15:39:46.856234109Z" 
level=info msg="StopPodSandbox for \"c0109edb01872ea6ddbb1270e554366cbd4b5a05c3fb263f4739b76a9f131107\" returns successfully" Feb 13 15:39:46.856301 containerd[1453]: time="2025-02-13T15:39:46.856287511Z" level=info msg="StopPodSandbox for \"993517d1fc77735746720ed6d1052b7f574a391e2aa3dd3c4e1ac176cb243e90\"" Feb 13 15:39:46.856505 containerd[1453]: time="2025-02-13T15:39:46.856350995Z" level=info msg="TearDown network for sandbox \"993517d1fc77735746720ed6d1052b7f574a391e2aa3dd3c4e1ac176cb243e90\" successfully" Feb 13 15:39:46.856505 containerd[1453]: time="2025-02-13T15:39:46.856394157Z" level=info msg="StopPodSandbox for \"993517d1fc77735746720ed6d1052b7f574a391e2aa3dd3c4e1ac176cb243e90\" returns successfully" Feb 13 15:39:46.858878 kubelet[2589]: E0213 15:39:46.858851 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:39:46.860324 containerd[1453]: time="2025-02-13T15:39:46.860164833Z" level=info msg="StopPodSandbox for \"ad5a92ae00ae485f6dedd5b8be0955d3e9faa55341d054bfa14ae18b47ab5efd\"" Feb 13 15:39:46.860608 containerd[1453]: time="2025-02-13T15:39:46.860478090Z" level=info msg="TearDown network for sandbox \"ad5a92ae00ae485f6dedd5b8be0955d3e9faa55341d054bfa14ae18b47ab5efd\" successfully" Feb 13 15:39:46.863714 containerd[1453]: time="2025-02-13T15:39:46.862976300Z" level=info msg="StopPodSandbox for \"ad5a92ae00ae485f6dedd5b8be0955d3e9faa55341d054bfa14ae18b47ab5efd\" returns successfully" Feb 13 15:39:46.864137 containerd[1453]: time="2025-02-13T15:39:46.863476686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-m78g5,Uid:74996f45-87e3-49ee-bffd-dfcfa7bb4a84,Namespace:kube-system,Attempt:6,}" Feb 13 15:39:46.864352 containerd[1453]: time="2025-02-13T15:39:46.864209524Z" level=info msg="StopPodSandbox for \"d86444dd64a16082496dff8bcc4382e23ca50d04c358f741bc26b8a391d6c550\"" Feb 13 
15:39:46.864352 containerd[1453]: time="2025-02-13T15:39:46.864345571Z" level=info msg="TearDown network for sandbox \"d86444dd64a16082496dff8bcc4382e23ca50d04c358f741bc26b8a391d6c550\" successfully" Feb 13 15:39:46.864428 containerd[1453]: time="2025-02-13T15:39:46.864366852Z" level=info msg="StopPodSandbox for \"d86444dd64a16082496dff8bcc4382e23ca50d04c358f741bc26b8a391d6c550\" returns successfully" Feb 13 15:39:46.864752 containerd[1453]: time="2025-02-13T15:39:46.864723151Z" level=info msg="StopPodSandbox for \"50cf6045c8a1ddf321bab1e6e9a3f1fc9f3a297088fc52a6e468af940cbe1c10\"" Feb 13 15:39:46.865196 containerd[1453]: time="2025-02-13T15:39:46.864869798Z" level=info msg="TearDown network for sandbox \"50cf6045c8a1ddf321bab1e6e9a3f1fc9f3a297088fc52a6e468af940cbe1c10\" successfully" Feb 13 15:39:46.865196 containerd[1453]: time="2025-02-13T15:39:46.864888319Z" level=info msg="StopPodSandbox for \"50cf6045c8a1ddf321bab1e6e9a3f1fc9f3a297088fc52a6e468af940cbe1c10\" returns successfully" Feb 13 15:39:46.865196 containerd[1453]: time="2025-02-13T15:39:46.864928241Z" level=info msg="StopPodSandbox for \"008c18afaab252b50ccfad53679086b78620acbae6366a596e3e004153702937\"" Feb 13 15:39:46.865196 containerd[1453]: time="2025-02-13T15:39:46.865033567Z" level=info msg="TearDown network for sandbox \"008c18afaab252b50ccfad53679086b78620acbae6366a596e3e004153702937\" successfully" Feb 13 15:39:46.865196 containerd[1453]: time="2025-02-13T15:39:46.865044967Z" level=info msg="StopPodSandbox for \"008c18afaab252b50ccfad53679086b78620acbae6366a596e3e004153702937\" returns successfully" Feb 13 15:39:46.867264 containerd[1453]: time="2025-02-13T15:39:46.867229841Z" level=info msg="StopPodSandbox for \"323440efec752a3eb15eea898bfaffb5e63aebd2a734fd29578ba11bb1b33be8\"" Feb 13 15:39:46.867490 containerd[1453]: time="2025-02-13T15:39:46.867413371Z" level=info msg="TearDown network for sandbox \"323440efec752a3eb15eea898bfaffb5e63aebd2a734fd29578ba11bb1b33be8\" successfully" Feb 13 
15:39:46.867490 containerd[1453]: time="2025-02-13T15:39:46.867432492Z" level=info msg="StopPodSandbox for \"323440efec752a3eb15eea898bfaffb5e63aebd2a734fd29578ba11bb1b33be8\" returns successfully" Feb 13 15:39:46.867599 containerd[1453]: time="2025-02-13T15:39:46.867566579Z" level=info msg="StopPodSandbox for \"2782e5c6315792cdee1e8b9d3b9b6dea34d36948f0bc75d3eaf32e17a3b39779\"" Feb 13 15:39:46.868375 containerd[1453]: time="2025-02-13T15:39:46.868342699Z" level=info msg="TearDown network for sandbox \"2782e5c6315792cdee1e8b9d3b9b6dea34d36948f0bc75d3eaf32e17a3b39779\" successfully" Feb 13 15:39:46.868969 containerd[1453]: time="2025-02-13T15:39:46.868368581Z" level=info msg="StopPodSandbox for \"2782e5c6315792cdee1e8b9d3b9b6dea34d36948f0bc75d3eaf32e17a3b39779\" returns successfully" Feb 13 15:39:46.870010 containerd[1453]: time="2025-02-13T15:39:46.869972344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9jl5n,Uid:0c3e32e2-3a7c-428a-a18f-8761ef2b92d8,Namespace:calico-system,Attempt:5,}" Feb 13 15:39:46.871158 containerd[1453]: time="2025-02-13T15:39:46.870972196Z" level=info msg="StopPodSandbox for \"fc9238e716f505d3af42ce11bca5e656f52405b452391535ed40e201ac9642d7\"" Feb 13 15:39:46.872303 containerd[1453]: time="2025-02-13T15:39:46.871376777Z" level=info msg="TearDown network for sandbox \"fc9238e716f505d3af42ce11bca5e656f52405b452391535ed40e201ac9642d7\" successfully" Feb 13 15:39:46.872303 containerd[1453]: time="2025-02-13T15:39:46.871500464Z" level=info msg="StopPodSandbox for \"fc9238e716f505d3af42ce11bca5e656f52405b452391535ed40e201ac9642d7\" returns successfully" Feb 13 15:39:46.873257 containerd[1453]: time="2025-02-13T15:39:46.873196752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-655c6976bf-dltfc,Uid:043eaf10-8df2-4749-97a8-7923e4159aba,Namespace:calico-apiserver,Attempt:6,}" Feb 13 15:39:47.049147 systemd-networkd[1390]: cali703ad1cbd8a: Link UP Feb 13 15:39:47.049322 systemd-networkd[1390]: 
cali703ad1cbd8a: Gained carrier Feb 13 15:39:47.103692 containerd[1453]: 2025-02-13 15:39:46.838 [INFO][4876] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:39:47.103692 containerd[1453]: 2025-02-13 15:39:46.862 [INFO][4876] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--655c6976bf--p7qqq-eth0 calico-apiserver-655c6976bf- calico-apiserver 95d7909a-cd44-4f88-af35-6de766421d4b 811 0 2025-02-13 15:39:31 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:655c6976bf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-655c6976bf-p7qqq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali703ad1cbd8a [] []}} ContainerID="20647a1964458ebcacf97e48977fafb06f283e31a11fb41029ced89ed112e134" Namespace="calico-apiserver" Pod="calico-apiserver-655c6976bf-p7qqq" WorkloadEndpoint="localhost-k8s-calico--apiserver--655c6976bf--p7qqq-" Feb 13 15:39:47.103692 containerd[1453]: 2025-02-13 15:39:46.863 [INFO][4876] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="20647a1964458ebcacf97e48977fafb06f283e31a11fb41029ced89ed112e134" Namespace="calico-apiserver" Pod="calico-apiserver-655c6976bf-p7qqq" WorkloadEndpoint="localhost-k8s-calico--apiserver--655c6976bf--p7qqq-eth0" Feb 13 15:39:47.103692 containerd[1453]: 2025-02-13 15:39:46.923 [INFO][4916] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="20647a1964458ebcacf97e48977fafb06f283e31a11fb41029ced89ed112e134" HandleID="k8s-pod-network.20647a1964458ebcacf97e48977fafb06f283e31a11fb41029ced89ed112e134" Workload="localhost-k8s-calico--apiserver--655c6976bf--p7qqq-eth0" Feb 13 15:39:47.103692 containerd[1453]: 2025-02-13 15:39:46.943 [INFO][4916] 
ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="20647a1964458ebcacf97e48977fafb06f283e31a11fb41029ced89ed112e134" HandleID="k8s-pod-network.20647a1964458ebcacf97e48977fafb06f283e31a11fb41029ced89ed112e134" Workload="localhost-k8s-calico--apiserver--655c6976bf--p7qqq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000311700), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-655c6976bf-p7qqq", "timestamp":"2025-02-13 15:39:46.923064069 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:39:47.103692 containerd[1453]: 2025-02-13 15:39:46.943 [INFO][4916] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:39:47.103692 containerd[1453]: 2025-02-13 15:39:46.943 [INFO][4916] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 15:39:47.103692 containerd[1453]: 2025-02-13 15:39:46.943 [INFO][4916] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 15:39:47.103692 containerd[1453]: 2025-02-13 15:39:46.962 [INFO][4916] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.20647a1964458ebcacf97e48977fafb06f283e31a11fb41029ced89ed112e134" host="localhost" Feb 13 15:39:47.103692 containerd[1453]: 2025-02-13 15:39:46.971 [INFO][4916] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 15:39:47.103692 containerd[1453]: 2025-02-13 15:39:46.987 [INFO][4916] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 15:39:47.103692 containerd[1453]: 2025-02-13 15:39:46.993 [INFO][4916] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 15:39:47.103692 containerd[1453]: 2025-02-13 15:39:46.999 [INFO][4916] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 15:39:47.103692 containerd[1453]: 2025-02-13 15:39:46.999 [INFO][4916] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.20647a1964458ebcacf97e48977fafb06f283e31a11fb41029ced89ed112e134" host="localhost" Feb 13 15:39:47.103692 containerd[1453]: 2025-02-13 15:39:47.003 [INFO][4916] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.20647a1964458ebcacf97e48977fafb06f283e31a11fb41029ced89ed112e134 Feb 13 15:39:47.103692 containerd[1453]: 2025-02-13 15:39:47.013 [INFO][4916] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.20647a1964458ebcacf97e48977fafb06f283e31a11fb41029ced89ed112e134" host="localhost" Feb 13 15:39:47.103692 containerd[1453]: 2025-02-13 15:39:47.025 [INFO][4916] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.20647a1964458ebcacf97e48977fafb06f283e31a11fb41029ced89ed112e134" host="localhost" Feb 13 15:39:47.103692 containerd[1453]: 2025-02-13 15:39:47.025 [INFO][4916] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.20647a1964458ebcacf97e48977fafb06f283e31a11fb41029ced89ed112e134" host="localhost" Feb 13 15:39:47.103692 containerd[1453]: 2025-02-13 15:39:47.025 [INFO][4916] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 15:39:47.103692 containerd[1453]: 2025-02-13 15:39:47.026 [INFO][4916] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="20647a1964458ebcacf97e48977fafb06f283e31a11fb41029ced89ed112e134" HandleID="k8s-pod-network.20647a1964458ebcacf97e48977fafb06f283e31a11fb41029ced89ed112e134" Workload="localhost-k8s-calico--apiserver--655c6976bf--p7qqq-eth0" Feb 13 15:39:47.104438 containerd[1453]: 2025-02-13 15:39:47.033 [INFO][4876] cni-plugin/k8s.go 386: Populated endpoint ContainerID="20647a1964458ebcacf97e48977fafb06f283e31a11fb41029ced89ed112e134" Namespace="calico-apiserver" Pod="calico-apiserver-655c6976bf-p7qqq" WorkloadEndpoint="localhost-k8s-calico--apiserver--655c6976bf--p7qqq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--655c6976bf--p7qqq-eth0", GenerateName:"calico-apiserver-655c6976bf-", Namespace:"calico-apiserver", SelfLink:"", UID:"95d7909a-cd44-4f88-af35-6de766421d4b", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 39, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"655c6976bf", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-655c6976bf-p7qqq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali703ad1cbd8a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:39:47.104438 containerd[1453]: 2025-02-13 15:39:47.033 [INFO][4876] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="20647a1964458ebcacf97e48977fafb06f283e31a11fb41029ced89ed112e134" Namespace="calico-apiserver" Pod="calico-apiserver-655c6976bf-p7qqq" WorkloadEndpoint="localhost-k8s-calico--apiserver--655c6976bf--p7qqq-eth0" Feb 13 15:39:47.104438 containerd[1453]: 2025-02-13 15:39:47.033 [INFO][4876] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali703ad1cbd8a ContainerID="20647a1964458ebcacf97e48977fafb06f283e31a11fb41029ced89ed112e134" Namespace="calico-apiserver" Pod="calico-apiserver-655c6976bf-p7qqq" WorkloadEndpoint="localhost-k8s-calico--apiserver--655c6976bf--p7qqq-eth0" Feb 13 15:39:47.104438 containerd[1453]: 2025-02-13 15:39:47.048 [INFO][4876] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="20647a1964458ebcacf97e48977fafb06f283e31a11fb41029ced89ed112e134" Namespace="calico-apiserver" Pod="calico-apiserver-655c6976bf-p7qqq" WorkloadEndpoint="localhost-k8s-calico--apiserver--655c6976bf--p7qqq-eth0" Feb 13 15:39:47.104438 containerd[1453]: 2025-02-13 15:39:47.051 [INFO][4876] cni-plugin/k8s.go 414: Added Mac, interface name, and active 
container ID to endpoint ContainerID="20647a1964458ebcacf97e48977fafb06f283e31a11fb41029ced89ed112e134" Namespace="calico-apiserver" Pod="calico-apiserver-655c6976bf-p7qqq" WorkloadEndpoint="localhost-k8s-calico--apiserver--655c6976bf--p7qqq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--655c6976bf--p7qqq-eth0", GenerateName:"calico-apiserver-655c6976bf-", Namespace:"calico-apiserver", SelfLink:"", UID:"95d7909a-cd44-4f88-af35-6de766421d4b", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 39, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"655c6976bf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"20647a1964458ebcacf97e48977fafb06f283e31a11fb41029ced89ed112e134", Pod:"calico-apiserver-655c6976bf-p7qqq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali703ad1cbd8a", MAC:"62:f8:ab:dd:60:88", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:39:47.104438 containerd[1453]: 2025-02-13 15:39:47.081 [INFO][4876] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="20647a1964458ebcacf97e48977fafb06f283e31a11fb41029ced89ed112e134" Namespace="calico-apiserver" Pod="calico-apiserver-655c6976bf-p7qqq" WorkloadEndpoint="localhost-k8s-calico--apiserver--655c6976bf--p7qqq-eth0" Feb 13 15:39:47.117161 systemd-networkd[1390]: calidb6a52b5bf4: Link UP Feb 13 15:39:47.118266 systemd-networkd[1390]: calidb6a52b5bf4: Gained carrier Feb 13 15:39:47.137429 containerd[1453]: 2025-02-13 15:39:46.869 [INFO][4890] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:39:47.137429 containerd[1453]: 2025-02-13 15:39:46.890 [INFO][4890] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--tzqqh-eth0 coredns-76f75df574- kube-system 22169313-af53-4d8b-b855-dc02e6d1e640 806 0 2025-02-13 15:39:19 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-tzqqh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calidb6a52b5bf4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="0c94de63adf6ae7ad81627ecb2c40d6b7ff11baf52ea084e0da48d2b5db65b3d" Namespace="kube-system" Pod="coredns-76f75df574-tzqqh" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--tzqqh-" Feb 13 15:39:47.137429 containerd[1453]: 2025-02-13 15:39:46.890 [INFO][4890] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0c94de63adf6ae7ad81627ecb2c40d6b7ff11baf52ea084e0da48d2b5db65b3d" Namespace="kube-system" Pod="coredns-76f75df574-tzqqh" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--tzqqh-eth0" Feb 13 15:39:47.137429 containerd[1453]: 2025-02-13 15:39:46.951 [INFO][4946] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0c94de63adf6ae7ad81627ecb2c40d6b7ff11baf52ea084e0da48d2b5db65b3d" 
HandleID="k8s-pod-network.0c94de63adf6ae7ad81627ecb2c40d6b7ff11baf52ea084e0da48d2b5db65b3d" Workload="localhost-k8s-coredns--76f75df574--tzqqh-eth0" Feb 13 15:39:47.137429 containerd[1453]: 2025-02-13 15:39:46.970 [INFO][4946] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0c94de63adf6ae7ad81627ecb2c40d6b7ff11baf52ea084e0da48d2b5db65b3d" HandleID="k8s-pod-network.0c94de63adf6ae7ad81627ecb2c40d6b7ff11baf52ea084e0da48d2b5db65b3d" Workload="localhost-k8s-coredns--76f75df574--tzqqh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400036bc60), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-tzqqh", "timestamp":"2025-02-13 15:39:46.94998175 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:39:47.137429 containerd[1453]: 2025-02-13 15:39:46.971 [INFO][4946] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:39:47.137429 containerd[1453]: 2025-02-13 15:39:47.026 [INFO][4946] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 15:39:47.137429 containerd[1453]: 2025-02-13 15:39:47.027 [INFO][4946] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 15:39:47.137429 containerd[1453]: 2025-02-13 15:39:47.035 [INFO][4946] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0c94de63adf6ae7ad81627ecb2c40d6b7ff11baf52ea084e0da48d2b5db65b3d" host="localhost" Feb 13 15:39:47.137429 containerd[1453]: 2025-02-13 15:39:47.040 [INFO][4946] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 15:39:47.137429 containerd[1453]: 2025-02-13 15:39:47.053 [INFO][4946] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 15:39:47.137429 containerd[1453]: 2025-02-13 15:39:47.059 [INFO][4946] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 15:39:47.137429 containerd[1453]: 2025-02-13 15:39:47.071 [INFO][4946] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 15:39:47.137429 containerd[1453]: 2025-02-13 15:39:47.071 [INFO][4946] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0c94de63adf6ae7ad81627ecb2c40d6b7ff11baf52ea084e0da48d2b5db65b3d" host="localhost" Feb 13 15:39:47.137429 containerd[1453]: 2025-02-13 15:39:47.082 [INFO][4946] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0c94de63adf6ae7ad81627ecb2c40d6b7ff11baf52ea084e0da48d2b5db65b3d Feb 13 15:39:47.137429 containerd[1453]: 2025-02-13 15:39:47.091 [INFO][4946] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0c94de63adf6ae7ad81627ecb2c40d6b7ff11baf52ea084e0da48d2b5db65b3d" host="localhost" Feb 13 15:39:47.137429 containerd[1453]: 2025-02-13 15:39:47.105 [INFO][4946] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.0c94de63adf6ae7ad81627ecb2c40d6b7ff11baf52ea084e0da48d2b5db65b3d" host="localhost" Feb 13 15:39:47.137429 containerd[1453]: 2025-02-13 15:39:47.105 [INFO][4946] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.0c94de63adf6ae7ad81627ecb2c40d6b7ff11baf52ea084e0da48d2b5db65b3d" host="localhost" Feb 13 15:39:47.137429 containerd[1453]: 2025-02-13 15:39:47.105 [INFO][4946] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 15:39:47.137429 containerd[1453]: 2025-02-13 15:39:47.105 [INFO][4946] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="0c94de63adf6ae7ad81627ecb2c40d6b7ff11baf52ea084e0da48d2b5db65b3d" HandleID="k8s-pod-network.0c94de63adf6ae7ad81627ecb2c40d6b7ff11baf52ea084e0da48d2b5db65b3d" Workload="localhost-k8s-coredns--76f75df574--tzqqh-eth0" Feb 13 15:39:47.138133 containerd[1453]: 2025-02-13 15:39:47.109 [INFO][4890] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0c94de63adf6ae7ad81627ecb2c40d6b7ff11baf52ea084e0da48d2b5db65b3d" Namespace="kube-system" Pod="coredns-76f75df574-tzqqh" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--tzqqh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--tzqqh-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"22169313-af53-4d8b-b855-dc02e6d1e640", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 39, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-tzqqh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidb6a52b5bf4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:39:47.138133 containerd[1453]: 2025-02-13 15:39:47.109 [INFO][4890] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="0c94de63adf6ae7ad81627ecb2c40d6b7ff11baf52ea084e0da48d2b5db65b3d" Namespace="kube-system" Pod="coredns-76f75df574-tzqqh" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--tzqqh-eth0" Feb 13 15:39:47.138133 containerd[1453]: 2025-02-13 15:39:47.109 [INFO][4890] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidb6a52b5bf4 ContainerID="0c94de63adf6ae7ad81627ecb2c40d6b7ff11baf52ea084e0da48d2b5db65b3d" Namespace="kube-system" Pod="coredns-76f75df574-tzqqh" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--tzqqh-eth0" Feb 13 15:39:47.138133 containerd[1453]: 2025-02-13 15:39:47.116 [INFO][4890] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0c94de63adf6ae7ad81627ecb2c40d6b7ff11baf52ea084e0da48d2b5db65b3d" Namespace="kube-system" Pod="coredns-76f75df574-tzqqh" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--tzqqh-eth0" Feb 13 
15:39:47.138133 containerd[1453]: 2025-02-13 15:39:47.121 [INFO][4890] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0c94de63adf6ae7ad81627ecb2c40d6b7ff11baf52ea084e0da48d2b5db65b3d" Namespace="kube-system" Pod="coredns-76f75df574-tzqqh" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--tzqqh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--tzqqh-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"22169313-af53-4d8b-b855-dc02e6d1e640", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 39, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0c94de63adf6ae7ad81627ecb2c40d6b7ff11baf52ea084e0da48d2b5db65b3d", Pod:"coredns-76f75df574-tzqqh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidb6a52b5bf4", MAC:"22:f3:59:2d:07:1c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:39:47.138133 containerd[1453]: 2025-02-13 15:39:47.134 [INFO][4890] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0c94de63adf6ae7ad81627ecb2c40d6b7ff11baf52ea084e0da48d2b5db65b3d" Namespace="kube-system" Pod="coredns-76f75df574-tzqqh" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--tzqqh-eth0" Feb 13 15:39:47.164672 containerd[1453]: time="2025-02-13T15:39:47.163778556Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:39:47.164672 containerd[1453]: time="2025-02-13T15:39:47.164087852Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:39:47.164672 containerd[1453]: time="2025-02-13T15:39:47.164104932Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:39:47.164672 containerd[1453]: time="2025-02-13T15:39:47.164185856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:39:47.165255 systemd-networkd[1390]: cali59d631b4192: Link UP Feb 13 15:39:47.165499 systemd-networkd[1390]: cali59d631b4192: Gained carrier Feb 13 15:39:47.181716 containerd[1453]: 2025-02-13 15:39:46.952 [INFO][4949] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:39:47.181716 containerd[1453]: 2025-02-13 15:39:46.981 [INFO][4949] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--655c6976bf--dltfc-eth0 calico-apiserver-655c6976bf- calico-apiserver 043eaf10-8df2-4749-97a8-7923e4159aba 809 0 2025-02-13 15:39:31 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:655c6976bf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-655c6976bf-dltfc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali59d631b4192 [] []}} ContainerID="bfc5fc84340452d9af0414266e2ac2ae45b3c5fe625e403740d1c164a4192f88" Namespace="calico-apiserver" Pod="calico-apiserver-655c6976bf-dltfc" WorkloadEndpoint="localhost-k8s-calico--apiserver--655c6976bf--dltfc-" Feb 13 15:39:47.181716 containerd[1453]: 2025-02-13 15:39:46.981 [INFO][4949] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="bfc5fc84340452d9af0414266e2ac2ae45b3c5fe625e403740d1c164a4192f88" Namespace="calico-apiserver" Pod="calico-apiserver-655c6976bf-dltfc" WorkloadEndpoint="localhost-k8s-calico--apiserver--655c6976bf--dltfc-eth0" Feb 13 15:39:47.181716 containerd[1453]: 2025-02-13 15:39:47.028 [INFO][5011] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bfc5fc84340452d9af0414266e2ac2ae45b3c5fe625e403740d1c164a4192f88" 
HandleID="k8s-pod-network.bfc5fc84340452d9af0414266e2ac2ae45b3c5fe625e403740d1c164a4192f88" Workload="localhost-k8s-calico--apiserver--655c6976bf--dltfc-eth0" Feb 13 15:39:47.181716 containerd[1453]: 2025-02-13 15:39:47.040 [INFO][5011] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bfc5fc84340452d9af0414266e2ac2ae45b3c5fe625e403740d1c164a4192f88" HandleID="k8s-pod-network.bfc5fc84340452d9af0414266e2ac2ae45b3c5fe625e403740d1c164a4192f88" Workload="localhost-k8s-calico--apiserver--655c6976bf--dltfc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003e6120), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-655c6976bf-dltfc", "timestamp":"2025-02-13 15:39:47.028537566 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:39:47.181716 containerd[1453]: 2025-02-13 15:39:47.040 [INFO][5011] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:39:47.181716 containerd[1453]: 2025-02-13 15:39:47.105 [INFO][5011] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 15:39:47.181716 containerd[1453]: 2025-02-13 15:39:47.106 [INFO][5011] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 15:39:47.181716 containerd[1453]: 2025-02-13 15:39:47.108 [INFO][5011] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.bfc5fc84340452d9af0414266e2ac2ae45b3c5fe625e403740d1c164a4192f88" host="localhost" Feb 13 15:39:47.181716 containerd[1453]: 2025-02-13 15:39:47.125 [INFO][5011] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 15:39:47.181716 containerd[1453]: 2025-02-13 15:39:47.130 [INFO][5011] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 15:39:47.181716 containerd[1453]: 2025-02-13 15:39:47.135 [INFO][5011] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 15:39:47.181716 containerd[1453]: 2025-02-13 15:39:47.138 [INFO][5011] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 15:39:47.181716 containerd[1453]: 2025-02-13 15:39:47.138 [INFO][5011] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bfc5fc84340452d9af0414266e2ac2ae45b3c5fe625e403740d1c164a4192f88" host="localhost" Feb 13 15:39:47.181716 containerd[1453]: 2025-02-13 15:39:47.140 [INFO][5011] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.bfc5fc84340452d9af0414266e2ac2ae45b3c5fe625e403740d1c164a4192f88 Feb 13 15:39:47.181716 containerd[1453]: 2025-02-13 15:39:47.146 [INFO][5011] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bfc5fc84340452d9af0414266e2ac2ae45b3c5fe625e403740d1c164a4192f88" host="localhost" Feb 13 15:39:47.181716 containerd[1453]: 2025-02-13 15:39:47.155 [INFO][5011] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.bfc5fc84340452d9af0414266e2ac2ae45b3c5fe625e403740d1c164a4192f88" host="localhost" Feb 13 15:39:47.181716 containerd[1453]: 2025-02-13 15:39:47.155 [INFO][5011] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.bfc5fc84340452d9af0414266e2ac2ae45b3c5fe625e403740d1c164a4192f88" host="localhost" Feb 13 15:39:47.181716 containerd[1453]: 2025-02-13 15:39:47.155 [INFO][5011] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 15:39:47.181716 containerd[1453]: 2025-02-13 15:39:47.155 [INFO][5011] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="bfc5fc84340452d9af0414266e2ac2ae45b3c5fe625e403740d1c164a4192f88" HandleID="k8s-pod-network.bfc5fc84340452d9af0414266e2ac2ae45b3c5fe625e403740d1c164a4192f88" Workload="localhost-k8s-calico--apiserver--655c6976bf--dltfc-eth0" Feb 13 15:39:47.182532 containerd[1453]: 2025-02-13 15:39:47.160 [INFO][4949] cni-plugin/k8s.go 386: Populated endpoint ContainerID="bfc5fc84340452d9af0414266e2ac2ae45b3c5fe625e403740d1c164a4192f88" Namespace="calico-apiserver" Pod="calico-apiserver-655c6976bf-dltfc" WorkloadEndpoint="localhost-k8s-calico--apiserver--655c6976bf--dltfc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--655c6976bf--dltfc-eth0", GenerateName:"calico-apiserver-655c6976bf-", Namespace:"calico-apiserver", SelfLink:"", UID:"043eaf10-8df2-4749-97a8-7923e4159aba", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 39, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"655c6976bf", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-655c6976bf-dltfc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali59d631b4192", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:39:47.182532 containerd[1453]: 2025-02-13 15:39:47.161 [INFO][4949] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="bfc5fc84340452d9af0414266e2ac2ae45b3c5fe625e403740d1c164a4192f88" Namespace="calico-apiserver" Pod="calico-apiserver-655c6976bf-dltfc" WorkloadEndpoint="localhost-k8s-calico--apiserver--655c6976bf--dltfc-eth0" Feb 13 15:39:47.182532 containerd[1453]: 2025-02-13 15:39:47.161 [INFO][4949] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali59d631b4192 ContainerID="bfc5fc84340452d9af0414266e2ac2ae45b3c5fe625e403740d1c164a4192f88" Namespace="calico-apiserver" Pod="calico-apiserver-655c6976bf-dltfc" WorkloadEndpoint="localhost-k8s-calico--apiserver--655c6976bf--dltfc-eth0" Feb 13 15:39:47.182532 containerd[1453]: 2025-02-13 15:39:47.167 [INFO][4949] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bfc5fc84340452d9af0414266e2ac2ae45b3c5fe625e403740d1c164a4192f88" Namespace="calico-apiserver" Pod="calico-apiserver-655c6976bf-dltfc" WorkloadEndpoint="localhost-k8s-calico--apiserver--655c6976bf--dltfc-eth0" Feb 13 15:39:47.182532 containerd[1453]: 2025-02-13 15:39:47.167 [INFO][4949] cni-plugin/k8s.go 414: Added Mac, interface name, and active 
container ID to endpoint ContainerID="bfc5fc84340452d9af0414266e2ac2ae45b3c5fe625e403740d1c164a4192f88" Namespace="calico-apiserver" Pod="calico-apiserver-655c6976bf-dltfc" WorkloadEndpoint="localhost-k8s-calico--apiserver--655c6976bf--dltfc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--655c6976bf--dltfc-eth0", GenerateName:"calico-apiserver-655c6976bf-", Namespace:"calico-apiserver", SelfLink:"", UID:"043eaf10-8df2-4749-97a8-7923e4159aba", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 39, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"655c6976bf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bfc5fc84340452d9af0414266e2ac2ae45b3c5fe625e403740d1c164a4192f88", Pod:"calico-apiserver-655c6976bf-dltfc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali59d631b4192", MAC:"1a:52:e5:9c:eb:a5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:39:47.182532 containerd[1453]: 2025-02-13 15:39:47.177 [INFO][4949] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="bfc5fc84340452d9af0414266e2ac2ae45b3c5fe625e403740d1c164a4192f88" Namespace="calico-apiserver" Pod="calico-apiserver-655c6976bf-dltfc" WorkloadEndpoint="localhost-k8s-calico--apiserver--655c6976bf--dltfc-eth0" Feb 13 15:39:47.187544 containerd[1453]: time="2025-02-13T15:39:47.170515178Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:39:47.187544 containerd[1453]: time="2025-02-13T15:39:47.186814886Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:39:47.187544 containerd[1453]: time="2025-02-13T15:39:47.186840567Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:39:47.187544 containerd[1453]: time="2025-02-13T15:39:47.186985615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:39:47.194771 systemd[1]: Started cri-containerd-20647a1964458ebcacf97e48977fafb06f283e31a11fb41029ced89ed112e134.scope - libcontainer container 20647a1964458ebcacf97e48977fafb06f283e31a11fb41029ced89ed112e134. Feb 13 15:39:47.203676 containerd[1453]: time="2025-02-13T15:39:47.203153636Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:39:47.204493 containerd[1453]: time="2025-02-13T15:39:47.203417009Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:39:47.217141 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:39:47.219972 containerd[1453]: time="2025-02-13T15:39:47.203440970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:39:47.220178 containerd[1453]: time="2025-02-13T15:39:47.220100177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:39:47.219731 systemd-networkd[1390]: cali66738f164c3: Link UP Feb 13 15:39:47.221092 systemd-networkd[1390]: cali66738f164c3: Gained carrier Feb 13 15:39:47.233872 systemd[1]: Started cri-containerd-0c94de63adf6ae7ad81627ecb2c40d6b7ff11baf52ea084e0da48d2b5db65b3d.scope - libcontainer container 0c94de63adf6ae7ad81627ecb2c40d6b7ff11baf52ea084e0da48d2b5db65b3d. Feb 13 15:39:47.239680 containerd[1453]: 2025-02-13 15:39:46.899 [INFO][4903] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:39:47.239680 containerd[1453]: 2025-02-13 15:39:46.940 [INFO][4903] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--67d55cd4f9--c8fqd-eth0 calico-kube-controllers-67d55cd4f9- calico-system 70d03d25-2cd5-469b-b092-195e4bf21efe 810 0 2025-02-13 15:39:32 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:67d55cd4f9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-67d55cd4f9-c8fqd eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali66738f164c3 [] []}} ContainerID="bf5aea2fabc54cf3c28a2f4e1931e86bfcb677341051cc4253220a3add974556" Namespace="calico-system" Pod="calico-kube-controllers-67d55cd4f9-c8fqd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67d55cd4f9--c8fqd-" Feb 13 15:39:47.239680 containerd[1453]: 2025-02-13 15:39:46.940 [INFO][4903] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="bf5aea2fabc54cf3c28a2f4e1931e86bfcb677341051cc4253220a3add974556" Namespace="calico-system" Pod="calico-kube-controllers-67d55cd4f9-c8fqd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67d55cd4f9--c8fqd-eth0" Feb 13 15:39:47.239680 containerd[1453]: 2025-02-13 15:39:47.043 [INFO][5001] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bf5aea2fabc54cf3c28a2f4e1931e86bfcb677341051cc4253220a3add974556" HandleID="k8s-pod-network.bf5aea2fabc54cf3c28a2f4e1931e86bfcb677341051cc4253220a3add974556" Workload="localhost-k8s-calico--kube--controllers--67d55cd4f9--c8fqd-eth0" Feb 13 15:39:47.239680 containerd[1453]: 2025-02-13 15:39:47.079 [INFO][5001] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bf5aea2fabc54cf3c28a2f4e1931e86bfcb677341051cc4253220a3add974556" HandleID="k8s-pod-network.bf5aea2fabc54cf3c28a2f4e1931e86bfcb677341051cc4253220a3add974556" Workload="localhost-k8s-calico--kube--controllers--67d55cd4f9--c8fqd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001335c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-67d55cd4f9-c8fqd", "timestamp":"2025-02-13 15:39:47.042361548 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:39:47.239680 containerd[1453]: 2025-02-13 15:39:47.079 [INFO][5001] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:39:47.239680 containerd[1453]: 2025-02-13 15:39:47.155 [INFO][5001] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 15:39:47.239680 containerd[1453]: 2025-02-13 15:39:47.155 [INFO][5001] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 15:39:47.239680 containerd[1453]: 2025-02-13 15:39:47.157 [INFO][5001] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.bf5aea2fabc54cf3c28a2f4e1931e86bfcb677341051cc4253220a3add974556" host="localhost" Feb 13 15:39:47.239680 containerd[1453]: 2025-02-13 15:39:47.164 [INFO][5001] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 15:39:47.239680 containerd[1453]: 2025-02-13 15:39:47.173 [INFO][5001] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 15:39:47.239680 containerd[1453]: 2025-02-13 15:39:47.178 [INFO][5001] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 15:39:47.239680 containerd[1453]: 2025-02-13 15:39:47.184 [INFO][5001] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 15:39:47.239680 containerd[1453]: 2025-02-13 15:39:47.184 [INFO][5001] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bf5aea2fabc54cf3c28a2f4e1931e86bfcb677341051cc4253220a3add974556" host="localhost" Feb 13 15:39:47.239680 containerd[1453]: 2025-02-13 15:39:47.188 [INFO][5001] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.bf5aea2fabc54cf3c28a2f4e1931e86bfcb677341051cc4253220a3add974556 Feb 13 15:39:47.239680 containerd[1453]: 2025-02-13 15:39:47.196 [INFO][5001] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bf5aea2fabc54cf3c28a2f4e1931e86bfcb677341051cc4253220a3add974556" host="localhost" Feb 13 15:39:47.239680 containerd[1453]: 2025-02-13 15:39:47.205 [INFO][5001] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.bf5aea2fabc54cf3c28a2f4e1931e86bfcb677341051cc4253220a3add974556" host="localhost" Feb 13 15:39:47.239680 containerd[1453]: 2025-02-13 15:39:47.205 [INFO][5001] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.bf5aea2fabc54cf3c28a2f4e1931e86bfcb677341051cc4253220a3add974556" host="localhost" Feb 13 15:39:47.239680 containerd[1453]: 2025-02-13 15:39:47.205 [INFO][5001] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 15:39:47.239680 containerd[1453]: 2025-02-13 15:39:47.205 [INFO][5001] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="bf5aea2fabc54cf3c28a2f4e1931e86bfcb677341051cc4253220a3add974556" HandleID="k8s-pod-network.bf5aea2fabc54cf3c28a2f4e1931e86bfcb677341051cc4253220a3add974556" Workload="localhost-k8s-calico--kube--controllers--67d55cd4f9--c8fqd-eth0" Feb 13 15:39:47.240174 containerd[1453]: 2025-02-13 15:39:47.212 [INFO][4903] cni-plugin/k8s.go 386: Populated endpoint ContainerID="bf5aea2fabc54cf3c28a2f4e1931e86bfcb677341051cc4253220a3add974556" Namespace="calico-system" Pod="calico-kube-controllers-67d55cd4f9-c8fqd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67d55cd4f9--c8fqd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--67d55cd4f9--c8fqd-eth0", GenerateName:"calico-kube-controllers-67d55cd4f9-", Namespace:"calico-system", SelfLink:"", UID:"70d03d25-2cd5-469b-b092-195e4bf21efe", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 39, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67d55cd4f9", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-67d55cd4f9-c8fqd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali66738f164c3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:39:47.240174 containerd[1453]: 2025-02-13 15:39:47.212 [INFO][4903] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="bf5aea2fabc54cf3c28a2f4e1931e86bfcb677341051cc4253220a3add974556" Namespace="calico-system" Pod="calico-kube-controllers-67d55cd4f9-c8fqd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67d55cd4f9--c8fqd-eth0" Feb 13 15:39:47.240174 containerd[1453]: 2025-02-13 15:39:47.212 [INFO][4903] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali66738f164c3 ContainerID="bf5aea2fabc54cf3c28a2f4e1931e86bfcb677341051cc4253220a3add974556" Namespace="calico-system" Pod="calico-kube-controllers-67d55cd4f9-c8fqd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67d55cd4f9--c8fqd-eth0" Feb 13 15:39:47.240174 containerd[1453]: 2025-02-13 15:39:47.220 [INFO][4903] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bf5aea2fabc54cf3c28a2f4e1931e86bfcb677341051cc4253220a3add974556" Namespace="calico-system" Pod="calico-kube-controllers-67d55cd4f9-c8fqd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67d55cd4f9--c8fqd-eth0" Feb 13 15:39:47.240174 containerd[1453]: 2025-02-13 15:39:47.222 [INFO][4903] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="bf5aea2fabc54cf3c28a2f4e1931e86bfcb677341051cc4253220a3add974556" Namespace="calico-system" Pod="calico-kube-controllers-67d55cd4f9-c8fqd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67d55cd4f9--c8fqd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--67d55cd4f9--c8fqd-eth0", GenerateName:"calico-kube-controllers-67d55cd4f9-", Namespace:"calico-system", SelfLink:"", UID:"70d03d25-2cd5-469b-b092-195e4bf21efe", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 39, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67d55cd4f9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bf5aea2fabc54cf3c28a2f4e1931e86bfcb677341051cc4253220a3add974556", Pod:"calico-kube-controllers-67d55cd4f9-c8fqd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali66738f164c3", MAC:"32:32:70:18:f0:47", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:39:47.240174 containerd[1453]: 2025-02-13 15:39:47.234 [INFO][4903] cni-plugin/k8s.go 500: Wrote 
updated endpoint to datastore ContainerID="bf5aea2fabc54cf3c28a2f4e1931e86bfcb677341051cc4253220a3add974556" Namespace="calico-system" Pod="calico-kube-controllers-67d55cd4f9-c8fqd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--67d55cd4f9--c8fqd-eth0" Feb 13 15:39:47.256250 containerd[1453]: time="2025-02-13T15:39:47.253318064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-655c6976bf-p7qqq,Uid:95d7909a-cd44-4f88-af35-6de766421d4b,Namespace:calico-apiserver,Attempt:6,} returns sandbox id \"20647a1964458ebcacf97e48977fafb06f283e31a11fb41029ced89ed112e134\"" Feb 13 15:39:47.256250 containerd[1453]: time="2025-02-13T15:39:47.255200760Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 15:39:47.258101 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:39:47.271063 systemd[1]: Started cri-containerd-bfc5fc84340452d9af0414266e2ac2ae45b3c5fe625e403740d1c164a4192f88.scope - libcontainer container bfc5fc84340452d9af0414266e2ac2ae45b3c5fe625e403740d1c164a4192f88. Feb 13 15:39:47.273523 systemd-networkd[1390]: cali3f94632ffc4: Link UP Feb 13 15:39:47.273807 systemd-networkd[1390]: cali3f94632ffc4: Gained carrier Feb 13 15:39:47.293862 systemd[1]: run-netns-cni\x2d481412e9\x2d28fa\x2dc609\x2d0b1c\x2d2efd7214ae30.mount: Deactivated successfully. Feb 13 15:39:47.293956 systemd[1]: run-netns-cni\x2da7090c39\x2d2b62\x2dd034\x2d1f56\x2db21f17e312d6.mount: Deactivated successfully. Feb 13 15:39:47.301115 containerd[1453]: time="2025-02-13T15:39:47.300845558Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:39:47.301115 containerd[1453]: time="2025-02-13T15:39:47.300904841Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:39:47.301115 containerd[1453]: time="2025-02-13T15:39:47.300915922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:39:47.301567 containerd[1453]: time="2025-02-13T15:39:47.301389266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:39:47.303265 containerd[1453]: 2025-02-13 15:39:46.953 [INFO][4948] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:39:47.303265 containerd[1453]: 2025-02-13 15:39:46.983 [INFO][4948] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--9jl5n-eth0 csi-node-driver- calico-system 0c3e32e2-3a7c-428a-a18f-8761ef2b92d8 933 0 2025-02-13 15:39:32 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-9jl5n eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali3f94632ffc4 [] []}} ContainerID="c9907ec4f6aa598ae49ef3bffd702cb1ef8ec3d9eb1c7d0cdf40fbfadb573be1" Namespace="calico-system" Pod="csi-node-driver-9jl5n" WorkloadEndpoint="localhost-k8s-csi--node--driver--9jl5n-" Feb 13 15:39:47.303265 containerd[1453]: 2025-02-13 15:39:46.983 [INFO][4948] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c9907ec4f6aa598ae49ef3bffd702cb1ef8ec3d9eb1c7d0cdf40fbfadb573be1" Namespace="calico-system" Pod="csi-node-driver-9jl5n" WorkloadEndpoint="localhost-k8s-csi--node--driver--9jl5n-eth0" Feb 13 15:39:47.303265 containerd[1453]: 2025-02-13 15:39:47.074 [INFO][5017] 
ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c9907ec4f6aa598ae49ef3bffd702cb1ef8ec3d9eb1c7d0cdf40fbfadb573be1" HandleID="k8s-pod-network.c9907ec4f6aa598ae49ef3bffd702cb1ef8ec3d9eb1c7d0cdf40fbfadb573be1" Workload="localhost-k8s-csi--node--driver--9jl5n-eth0" Feb 13 15:39:47.303265 containerd[1453]: 2025-02-13 15:39:47.094 [INFO][5017] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c9907ec4f6aa598ae49ef3bffd702cb1ef8ec3d9eb1c7d0cdf40fbfadb573be1" HandleID="k8s-pod-network.c9907ec4f6aa598ae49ef3bffd702cb1ef8ec3d9eb1c7d0cdf40fbfadb573be1" Workload="localhost-k8s-csi--node--driver--9jl5n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400031e2a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-9jl5n", "timestamp":"2025-02-13 15:39:47.074887721 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:39:47.303265 containerd[1453]: 2025-02-13 15:39:47.094 [INFO][5017] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:39:47.303265 containerd[1453]: 2025-02-13 15:39:47.205 [INFO][5017] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 15:39:47.303265 containerd[1453]: 2025-02-13 15:39:47.205 [INFO][5017] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 15:39:47.303265 containerd[1453]: 2025-02-13 15:39:47.208 [INFO][5017] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c9907ec4f6aa598ae49ef3bffd702cb1ef8ec3d9eb1c7d0cdf40fbfadb573be1" host="localhost" Feb 13 15:39:47.303265 containerd[1453]: 2025-02-13 15:39:47.215 [INFO][5017] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 15:39:47.303265 containerd[1453]: 2025-02-13 15:39:47.223 [INFO][5017] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 15:39:47.303265 containerd[1453]: 2025-02-13 15:39:47.233 [INFO][5017] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 15:39:47.303265 containerd[1453]: 2025-02-13 15:39:47.240 [INFO][5017] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 15:39:47.303265 containerd[1453]: 2025-02-13 15:39:47.240 [INFO][5017] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c9907ec4f6aa598ae49ef3bffd702cb1ef8ec3d9eb1c7d0cdf40fbfadb573be1" host="localhost" Feb 13 15:39:47.303265 containerd[1453]: 2025-02-13 15:39:47.244 [INFO][5017] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c9907ec4f6aa598ae49ef3bffd702cb1ef8ec3d9eb1c7d0cdf40fbfadb573be1 Feb 13 15:39:47.303265 containerd[1453]: 2025-02-13 15:39:47.250 [INFO][5017] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c9907ec4f6aa598ae49ef3bffd702cb1ef8ec3d9eb1c7d0cdf40fbfadb573be1" host="localhost" Feb 13 15:39:47.303265 containerd[1453]: 2025-02-13 15:39:47.263 [INFO][5017] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.c9907ec4f6aa598ae49ef3bffd702cb1ef8ec3d9eb1c7d0cdf40fbfadb573be1" host="localhost" Feb 13 15:39:47.303265 containerd[1453]: 2025-02-13 15:39:47.264 [INFO][5017] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.c9907ec4f6aa598ae49ef3bffd702cb1ef8ec3d9eb1c7d0cdf40fbfadb573be1" host="localhost" Feb 13 15:39:47.303265 containerd[1453]: 2025-02-13 15:39:47.264 [INFO][5017] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 15:39:47.303265 containerd[1453]: 2025-02-13 15:39:47.264 [INFO][5017] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="c9907ec4f6aa598ae49ef3bffd702cb1ef8ec3d9eb1c7d0cdf40fbfadb573be1" HandleID="k8s-pod-network.c9907ec4f6aa598ae49ef3bffd702cb1ef8ec3d9eb1c7d0cdf40fbfadb573be1" Workload="localhost-k8s-csi--node--driver--9jl5n-eth0" Feb 13 15:39:47.303841 containerd[1453]: 2025-02-13 15:39:47.270 [INFO][4948] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c9907ec4f6aa598ae49ef3bffd702cb1ef8ec3d9eb1c7d0cdf40fbfadb573be1" Namespace="calico-system" Pod="csi-node-driver-9jl5n" WorkloadEndpoint="localhost-k8s-csi--node--driver--9jl5n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--9jl5n-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0c3e32e2-3a7c-428a-a18f-8761ef2b92d8", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 39, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-9jl5n", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3f94632ffc4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:39:47.303841 containerd[1453]: 2025-02-13 15:39:47.270 [INFO][4948] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="c9907ec4f6aa598ae49ef3bffd702cb1ef8ec3d9eb1c7d0cdf40fbfadb573be1" Namespace="calico-system" Pod="csi-node-driver-9jl5n" WorkloadEndpoint="localhost-k8s-csi--node--driver--9jl5n-eth0" Feb 13 15:39:47.303841 containerd[1453]: 2025-02-13 15:39:47.270 [INFO][4948] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3f94632ffc4 ContainerID="c9907ec4f6aa598ae49ef3bffd702cb1ef8ec3d9eb1c7d0cdf40fbfadb573be1" Namespace="calico-system" Pod="csi-node-driver-9jl5n" WorkloadEndpoint="localhost-k8s-csi--node--driver--9jl5n-eth0" Feb 13 15:39:47.303841 containerd[1453]: 2025-02-13 15:39:47.274 [INFO][4948] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c9907ec4f6aa598ae49ef3bffd702cb1ef8ec3d9eb1c7d0cdf40fbfadb573be1" Namespace="calico-system" Pod="csi-node-driver-9jl5n" WorkloadEndpoint="localhost-k8s-csi--node--driver--9jl5n-eth0" Feb 13 15:39:47.303841 containerd[1453]: 2025-02-13 15:39:47.275 [INFO][4948] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c9907ec4f6aa598ae49ef3bffd702cb1ef8ec3d9eb1c7d0cdf40fbfadb573be1" Namespace="calico-system" 
Pod="csi-node-driver-9jl5n" WorkloadEndpoint="localhost-k8s-csi--node--driver--9jl5n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--9jl5n-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0c3e32e2-3a7c-428a-a18f-8761ef2b92d8", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 39, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c9907ec4f6aa598ae49ef3bffd702cb1ef8ec3d9eb1c7d0cdf40fbfadb573be1", Pod:"csi-node-driver-9jl5n", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3f94632ffc4", MAC:"0a:af:33:36:72:ab", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:39:47.303841 containerd[1453]: 2025-02-13 15:39:47.297 [INFO][4948] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c9907ec4f6aa598ae49ef3bffd702cb1ef8ec3d9eb1c7d0cdf40fbfadb573be1" Namespace="calico-system" Pod="csi-node-driver-9jl5n" WorkloadEndpoint="localhost-k8s-csi--node--driver--9jl5n-eth0" Feb 13 15:39:47.307253 
systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:39:47.316258 containerd[1453]: time="2025-02-13T15:39:47.316205298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-tzqqh,Uid:22169313-af53-4d8b-b855-dc02e6d1e640,Namespace:kube-system,Attempt:6,} returns sandbox id \"0c94de63adf6ae7ad81627ecb2c40d6b7ff11baf52ea084e0da48d2b5db65b3d\"" Feb 13 15:39:47.317552 kubelet[2589]: E0213 15:39:47.317506 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:39:47.323510 containerd[1453]: time="2025-02-13T15:39:47.323399944Z" level=info msg="CreateContainer within sandbox \"0c94de63adf6ae7ad81627ecb2c40d6b7ff11baf52ea084e0da48d2b5db65b3d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:39:47.333926 systemd-networkd[1390]: calie39e2753277: Link UP Feb 13 15:39:47.336698 systemd[1]: Started cri-containerd-bf5aea2fabc54cf3c28a2f4e1931e86bfcb677341051cc4253220a3add974556.scope - libcontainer container bf5aea2fabc54cf3c28a2f4e1931e86bfcb677341051cc4253220a3add974556. Feb 13 15:39:47.338591 systemd-networkd[1390]: calie39e2753277: Gained carrier Feb 13 15:39:47.356076 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3702374531.mount: Deactivated successfully. 
Feb 13 15:39:47.361039 containerd[1453]: 2025-02-13 15:39:46.968 [INFO][4931] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:39:47.361039 containerd[1453]: 2025-02-13 15:39:46.997 [INFO][4931] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--m78g5-eth0 coredns-76f75df574- kube-system 74996f45-87e3-49ee-bffd-dfcfa7bb4a84 808 0 2025-02-13 15:39:19 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-m78g5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie39e2753277 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="31d619eb21f02d41f010f79d3efd21ce9430a5a6f0efbefe850e8a6ab821525e" Namespace="kube-system" Pod="coredns-76f75df574-m78g5" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--m78g5-" Feb 13 15:39:47.361039 containerd[1453]: 2025-02-13 15:39:46.997 [INFO][4931] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="31d619eb21f02d41f010f79d3efd21ce9430a5a6f0efbefe850e8a6ab821525e" Namespace="kube-system" Pod="coredns-76f75df574-m78g5" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--m78g5-eth0" Feb 13 15:39:47.361039 containerd[1453]: 2025-02-13 15:39:47.090 [INFO][5022] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="31d619eb21f02d41f010f79d3efd21ce9430a5a6f0efbefe850e8a6ab821525e" HandleID="k8s-pod-network.31d619eb21f02d41f010f79d3efd21ce9430a5a6f0efbefe850e8a6ab821525e" Workload="localhost-k8s-coredns--76f75df574--m78g5-eth0" Feb 13 15:39:47.361039 containerd[1453]: 2025-02-13 15:39:47.119 [INFO][5022] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="31d619eb21f02d41f010f79d3efd21ce9430a5a6f0efbefe850e8a6ab821525e" 
HandleID="k8s-pod-network.31d619eb21f02d41f010f79d3efd21ce9430a5a6f0efbefe850e8a6ab821525e" Workload="localhost-k8s-coredns--76f75df574--m78g5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000401440), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-m78g5", "timestamp":"2025-02-13 15:39:47.089267811 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:39:47.361039 containerd[1453]: 2025-02-13 15:39:47.120 [INFO][5022] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:39:47.361039 containerd[1453]: 2025-02-13 15:39:47.264 [INFO][5022] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 15:39:47.361039 containerd[1453]: 2025-02-13 15:39:47.264 [INFO][5022] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 15:39:47.361039 containerd[1453]: 2025-02-13 15:39:47.268 [INFO][5022] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.31d619eb21f02d41f010f79d3efd21ce9430a5a6f0efbefe850e8a6ab821525e" host="localhost" Feb 13 15:39:47.361039 containerd[1453]: 2025-02-13 15:39:47.276 [INFO][5022] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 15:39:47.361039 containerd[1453]: 2025-02-13 15:39:47.290 [INFO][5022] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 15:39:47.361039 containerd[1453]: 2025-02-13 15:39:47.292 [INFO][5022] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 15:39:47.361039 containerd[1453]: 2025-02-13 15:39:47.300 [INFO][5022] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 15:39:47.361039 containerd[1453]: 2025-02-13 15:39:47.301 
[INFO][5022] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.31d619eb21f02d41f010f79d3efd21ce9430a5a6f0efbefe850e8a6ab821525e" host="localhost" Feb 13 15:39:47.361039 containerd[1453]: 2025-02-13 15:39:47.309 [INFO][5022] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.31d619eb21f02d41f010f79d3efd21ce9430a5a6f0efbefe850e8a6ab821525e Feb 13 15:39:47.361039 containerd[1453]: 2025-02-13 15:39:47.318 [INFO][5022] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.31d619eb21f02d41f010f79d3efd21ce9430a5a6f0efbefe850e8a6ab821525e" host="localhost" Feb 13 15:39:47.361039 containerd[1453]: 2025-02-13 15:39:47.325 [INFO][5022] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.31d619eb21f02d41f010f79d3efd21ce9430a5a6f0efbefe850e8a6ab821525e" host="localhost" Feb 13 15:39:47.361039 containerd[1453]: 2025-02-13 15:39:47.325 [INFO][5022] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.31d619eb21f02d41f010f79d3efd21ce9430a5a6f0efbefe850e8a6ab821525e" host="localhost" Feb 13 15:39:47.361039 containerd[1453]: 2025-02-13 15:39:47.325 [INFO][5022] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 15:39:47.361039 containerd[1453]: 2025-02-13 15:39:47.326 [INFO][5022] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="31d619eb21f02d41f010f79d3efd21ce9430a5a6f0efbefe850e8a6ab821525e" HandleID="k8s-pod-network.31d619eb21f02d41f010f79d3efd21ce9430a5a6f0efbefe850e8a6ab821525e" Workload="localhost-k8s-coredns--76f75df574--m78g5-eth0" Feb 13 15:39:47.361604 containerd[1453]: 2025-02-13 15:39:47.329 [INFO][4931] cni-plugin/k8s.go 386: Populated endpoint ContainerID="31d619eb21f02d41f010f79d3efd21ce9430a5a6f0efbefe850e8a6ab821525e" Namespace="kube-system" Pod="coredns-76f75df574-m78g5" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--m78g5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--m78g5-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"74996f45-87e3-49ee-bffd-dfcfa7bb4a84", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 39, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-m78g5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie39e2753277", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:39:47.361604 containerd[1453]: 2025-02-13 15:39:47.329 [INFO][4931] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="31d619eb21f02d41f010f79d3efd21ce9430a5a6f0efbefe850e8a6ab821525e" Namespace="kube-system" Pod="coredns-76f75df574-m78g5" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--m78g5-eth0" Feb 13 15:39:47.361604 containerd[1453]: 2025-02-13 15:39:47.329 [INFO][4931] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie39e2753277 ContainerID="31d619eb21f02d41f010f79d3efd21ce9430a5a6f0efbefe850e8a6ab821525e" Namespace="kube-system" Pod="coredns-76f75df574-m78g5" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--m78g5-eth0" Feb 13 15:39:47.361604 containerd[1453]: 2025-02-13 15:39:47.341 [INFO][4931] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="31d619eb21f02d41f010f79d3efd21ce9430a5a6f0efbefe850e8a6ab821525e" Namespace="kube-system" Pod="coredns-76f75df574-m78g5" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--m78g5-eth0" Feb 13 15:39:47.361604 containerd[1453]: 2025-02-13 15:39:47.345 [INFO][4931] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="31d619eb21f02d41f010f79d3efd21ce9430a5a6f0efbefe850e8a6ab821525e" Namespace="kube-system" Pod="coredns-76f75df574-m78g5" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--m78g5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--m78g5-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"74996f45-87e3-49ee-bffd-dfcfa7bb4a84", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 39, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"31d619eb21f02d41f010f79d3efd21ce9430a5a6f0efbefe850e8a6ab821525e", Pod:"coredns-76f75df574-m78g5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie39e2753277", MAC:"ca:8e:0c:18:50:b5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:39:47.361604 containerd[1453]: 2025-02-13 15:39:47.357 [INFO][4931] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="31d619eb21f02d41f010f79d3efd21ce9430a5a6f0efbefe850e8a6ab821525e" Namespace="kube-system" 
Pod="coredns-76f75df574-m78g5" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--m78g5-eth0" Feb 13 15:39:47.365713 containerd[1453]: time="2025-02-13T15:39:47.364021167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-655c6976bf-dltfc,Uid:043eaf10-8df2-4749-97a8-7923e4159aba,Namespace:calico-apiserver,Attempt:6,} returns sandbox id \"bfc5fc84340452d9af0414266e2ac2ae45b3c5fe625e403740d1c164a4192f88\"" Feb 13 15:39:47.365713 containerd[1453]: time="2025-02-13T15:39:47.365577046Z" level=info msg="CreateContainer within sandbox \"0c94de63adf6ae7ad81627ecb2c40d6b7ff11baf52ea084e0da48d2b5db65b3d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dcef9b389b8cfc026bc437d4270786e89e84661db822c4a7da64a944bff62b0a\"" Feb 13 15:39:47.367045 containerd[1453]: time="2025-02-13T15:39:47.367009759Z" level=info msg="StartContainer for \"dcef9b389b8cfc026bc437d4270786e89e84661db822c4a7da64a944bff62b0a\"" Feb 13 15:39:47.370375 containerd[1453]: time="2025-02-13T15:39:47.370035073Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:39:47.370375 containerd[1453]: time="2025-02-13T15:39:47.370210522Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:39:47.370375 containerd[1453]: time="2025-02-13T15:39:47.370229762Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:39:47.372600 containerd[1453]: time="2025-02-13T15:39:47.370338048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:39:47.374424 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:39:47.429513 systemd[1]: Started cri-containerd-c9907ec4f6aa598ae49ef3bffd702cb1ef8ec3d9eb1c7d0cdf40fbfadb573be1.scope - libcontainer container c9907ec4f6aa598ae49ef3bffd702cb1ef8ec3d9eb1c7d0cdf40fbfadb573be1. Feb 13 15:39:47.433517 containerd[1453]: time="2025-02-13T15:39:47.431748807Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:39:47.433798 containerd[1453]: time="2025-02-13T15:39:47.433677385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67d55cd4f9-c8fqd,Uid:70d03d25-2cd5-469b-b092-195e4bf21efe,Namespace:calico-system,Attempt:6,} returns sandbox id \"bf5aea2fabc54cf3c28a2f4e1931e86bfcb677341051cc4253220a3add974556\"" Feb 13 15:39:47.434070 containerd[1453]: time="2025-02-13T15:39:47.433782911Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:39:47.434070 containerd[1453]: time="2025-02-13T15:39:47.433825713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:39:47.435763 systemd[1]: Started cri-containerd-dcef9b389b8cfc026bc437d4270786e89e84661db822c4a7da64a944bff62b0a.scope - libcontainer container dcef9b389b8cfc026bc437d4270786e89e84661db822c4a7da64a944bff62b0a. Feb 13 15:39:47.440065 containerd[1453]: time="2025-02-13T15:39:47.435588362Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:39:47.467855 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:39:47.481725 systemd[1]: Started cri-containerd-31d619eb21f02d41f010f79d3efd21ce9430a5a6f0efbefe850e8a6ab821525e.scope - libcontainer container 31d619eb21f02d41f010f79d3efd21ce9430a5a6f0efbefe850e8a6ab821525e. Feb 13 15:39:47.495012 containerd[1453]: time="2025-02-13T15:39:47.494969259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9jl5n,Uid:0c3e32e2-3a7c-428a-a18f-8761ef2b92d8,Namespace:calico-system,Attempt:5,} returns sandbox id \"c9907ec4f6aa598ae49ef3bffd702cb1ef8ec3d9eb1c7d0cdf40fbfadb573be1\"" Feb 13 15:39:47.497618 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:39:47.507230 containerd[1453]: time="2025-02-13T15:39:47.506587089Z" level=info msg="StartContainer for \"dcef9b389b8cfc026bc437d4270786e89e84661db822c4a7da64a944bff62b0a\" returns successfully" Feb 13 15:39:47.541228 containerd[1453]: time="2025-02-13T15:39:47.541167565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-m78g5,Uid:74996f45-87e3-49ee-bffd-dfcfa7bb4a84,Namespace:kube-system,Attempt:6,} returns sandbox id \"31d619eb21f02d41f010f79d3efd21ce9430a5a6f0efbefe850e8a6ab821525e\"" Feb 13 15:39:47.542977 kubelet[2589]: E0213 15:39:47.542496 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:39:47.548115 containerd[1453]: time="2025-02-13T15:39:47.547317678Z" level=info msg="CreateContainer within sandbox \"31d619eb21f02d41f010f79d3efd21ce9430a5a6f0efbefe850e8a6ab821525e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:39:47.610094 containerd[1453]: time="2025-02-13T15:39:47.610043304Z" 
level=info msg="CreateContainer within sandbox \"31d619eb21f02d41f010f79d3efd21ce9430a5a6f0efbefe850e8a6ab821525e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ba67630f8d318e6eb737a8bf231b6160e7f44aa5d554f79456083c0cc7a28a19\"" Feb 13 15:39:47.611220 containerd[1453]: time="2025-02-13T15:39:47.611169081Z" level=info msg="StartContainer for \"ba67630f8d318e6eb737a8bf231b6160e7f44aa5d554f79456083c0cc7a28a19\"" Feb 13 15:39:47.670530 systemd[1]: Started cri-containerd-ba67630f8d318e6eb737a8bf231b6160e7f44aa5d554f79456083c0cc7a28a19.scope - libcontainer container ba67630f8d318e6eb737a8bf231b6160e7f44aa5d554f79456083c0cc7a28a19. Feb 13 15:39:47.719732 containerd[1453]: time="2025-02-13T15:39:47.719686993Z" level=info msg="StartContainer for \"ba67630f8d318e6eb737a8bf231b6160e7f44aa5d554f79456083c0cc7a28a19\" returns successfully" Feb 13 15:39:47.735482 kernel: bpftool[5558]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 15:39:47.842895 kubelet[2589]: E0213 15:39:47.842540 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:39:47.850034 kubelet[2589]: E0213 15:39:47.849933 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:39:47.857784 kubelet[2589]: E0213 15:39:47.857648 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:39:47.858243 kubelet[2589]: I0213 15:39:47.858211 2589 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-tzqqh" podStartSLOduration=28.858177428 podStartE2EDuration="28.858177428s" podCreationTimestamp="2025-02-13 15:39:19 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:39:47.85565358 +0000 UTC m=+42.451017470" watchObservedRunningTime="2025-02-13 15:39:47.858177428 +0000 UTC m=+42.453541318" Feb 13 15:39:47.883079 kubelet[2589]: I0213 15:39:47.882962 2589 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-m78g5" podStartSLOduration=28.882919725 podStartE2EDuration="28.882919725s" podCreationTimestamp="2025-02-13 15:39:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:39:47.881097992 +0000 UTC m=+42.476461882" watchObservedRunningTime="2025-02-13 15:39:47.882919725 +0000 UTC m=+42.478283615" Feb 13 15:39:47.972288 systemd-networkd[1390]: vxlan.calico: Link UP Feb 13 15:39:47.972293 systemd-networkd[1390]: vxlan.calico: Gained carrier Feb 13 15:39:48.549615 systemd-networkd[1390]: cali66738f164c3: Gained IPv6LL Feb 13 15:39:48.677595 systemd-networkd[1390]: cali703ad1cbd8a: Gained IPv6LL Feb 13 15:39:48.678070 systemd-networkd[1390]: cali3f94632ffc4: Gained IPv6LL Feb 13 15:39:48.805798 systemd-networkd[1390]: calidb6a52b5bf4: Gained IPv6LL Feb 13 15:39:48.858362 kubelet[2589]: E0213 15:39:48.858187 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:39:48.865978 kubelet[2589]: E0213 15:39:48.863018 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:39:48.865978 kubelet[2589]: E0213 15:39:48.864129 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 
13 15:39:48.998708 systemd-networkd[1390]: cali59d631b4192: Gained IPv6LL Feb 13 15:39:49.253711 systemd-networkd[1390]: calie39e2753277: Gained IPv6LL Feb 13 15:39:49.376991 containerd[1453]: time="2025-02-13T15:39:49.376942543Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:49.378083 containerd[1453]: time="2025-02-13T15:39:49.377323602Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Feb 13 15:39:49.378434 containerd[1453]: time="2025-02-13T15:39:49.378406414Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:49.380863 containerd[1453]: time="2025-02-13T15:39:49.380823051Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:49.381703 containerd[1453]: time="2025-02-13T15:39:49.381665372Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 2.126432811s" Feb 13 15:39:49.381703 containerd[1453]: time="2025-02-13T15:39:49.381702454Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Feb 13 15:39:49.382462 containerd[1453]: time="2025-02-13T15:39:49.382266281Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 15:39:49.385032 
containerd[1453]: time="2025-02-13T15:39:49.384998974Z" level=info msg="CreateContainer within sandbox \"20647a1964458ebcacf97e48977fafb06f283e31a11fb41029ced89ed112e134\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 15:39:49.396542 containerd[1453]: time="2025-02-13T15:39:49.396490771Z" level=info msg="CreateContainer within sandbox \"20647a1964458ebcacf97e48977fafb06f283e31a11fb41029ced89ed112e134\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ee1d752f11fb9cdf9f5d575c087b2ac97eba4fffaebef8d63ff8d0179444d369\"" Feb 13 15:39:49.397938 containerd[1453]: time="2025-02-13T15:39:49.396964634Z" level=info msg="StartContainer for \"ee1d752f11fb9cdf9f5d575c087b2ac97eba4fffaebef8d63ff8d0179444d369\"" Feb 13 15:39:49.434644 systemd[1]: Started cri-containerd-ee1d752f11fb9cdf9f5d575c087b2ac97eba4fffaebef8d63ff8d0179444d369.scope - libcontainer container ee1d752f11fb9cdf9f5d575c087b2ac97eba4fffaebef8d63ff8d0179444d369. Feb 13 15:39:49.446689 systemd-networkd[1390]: vxlan.calico: Gained IPv6LL Feb 13 15:39:49.465633 containerd[1453]: time="2025-02-13T15:39:49.465590881Z" level=info msg="StartContainer for \"ee1d752f11fb9cdf9f5d575c087b2ac97eba4fffaebef8d63ff8d0179444d369\" returns successfully" Feb 13 15:39:49.687531 containerd[1453]: time="2025-02-13T15:39:49.687018055Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:49.695161 containerd[1453]: time="2025-02-13T15:39:49.691881210Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Feb 13 15:39:49.695161 containerd[1453]: time="2025-02-13T15:39:49.694369971Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest 
\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 312.070928ms" Feb 13 15:39:49.695161 containerd[1453]: time="2025-02-13T15:39:49.694414533Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Feb 13 15:39:49.696252 containerd[1453]: time="2025-02-13T15:39:49.695843482Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Feb 13 15:39:49.705023 containerd[1453]: time="2025-02-13T15:39:49.704808077Z" level=info msg="CreateContainer within sandbox \"bfc5fc84340452d9af0414266e2ac2ae45b3c5fe625e403740d1c164a4192f88\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 15:39:49.718398 containerd[1453]: time="2025-02-13T15:39:49.716441361Z" level=info msg="CreateContainer within sandbox \"bfc5fc84340452d9af0414266e2ac2ae45b3c5fe625e403740d1c164a4192f88\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2c017440c39f5c7653731c514f053b5cdac8bd6ee16235cdbea3bb72a0450f28\"" Feb 13 15:39:49.718398 containerd[1453]: time="2025-02-13T15:39:49.717245480Z" level=info msg="StartContainer for \"2c017440c39f5c7653731c514f053b5cdac8bd6ee16235cdbea3bb72a0450f28\"" Feb 13 15:39:49.747682 systemd[1]: Started cri-containerd-2c017440c39f5c7653731c514f053b5cdac8bd6ee16235cdbea3bb72a0450f28.scope - libcontainer container 2c017440c39f5c7653731c514f053b5cdac8bd6ee16235cdbea3bb72a0450f28. 
Feb 13 15:39:49.782805 containerd[1453]: time="2025-02-13T15:39:49.782673612Z" level=info msg="StartContainer for \"2c017440c39f5c7653731c514f053b5cdac8bd6ee16235cdbea3bb72a0450f28\" returns successfully" Feb 13 15:39:49.871497 kubelet[2589]: E0213 15:39:49.870963 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:39:49.871497 kubelet[2589]: E0213 15:39:49.871517 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:39:49.895925 kubelet[2589]: I0213 15:39:49.895878 2589 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-655c6976bf-dltfc" podStartSLOduration=16.570051607 podStartE2EDuration="18.895830577s" podCreationTimestamp="2025-02-13 15:39:31 +0000 UTC" firstStartedPulling="2025-02-13 15:39:47.369696335 +0000 UTC m=+41.965060225" lastFinishedPulling="2025-02-13 15:39:49.695475305 +0000 UTC m=+44.290839195" observedRunningTime="2025-02-13 15:39:49.880371148 +0000 UTC m=+44.475734998" watchObservedRunningTime="2025-02-13 15:39:49.895830577 +0000 UTC m=+44.491194467" Feb 13 15:39:50.872659 kubelet[2589]: I0213 15:39:50.872619 2589 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 15:39:50.873344 kubelet[2589]: I0213 15:39:50.872622 2589 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 15:39:51.332333 containerd[1453]: time="2025-02-13T15:39:51.332273081Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:51.333330 containerd[1453]: time="2025-02-13T15:39:51.333272968Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, 
bytes read=31953828" Feb 13 15:39:51.335387 containerd[1453]: time="2025-02-13T15:39:51.335354185Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:51.338235 containerd[1453]: time="2025-02-13T15:39:51.338149114Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:51.339331 containerd[1453]: time="2025-02-13T15:39:51.338847347Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 1.642969142s" Feb 13 15:39:51.339331 containerd[1453]: time="2025-02-13T15:39:51.338884948Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Feb 13 15:39:51.339841 containerd[1453]: time="2025-02-13T15:39:51.339712787Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 15:39:51.357134 containerd[1453]: time="2025-02-13T15:39:51.357078233Z" level=info msg="CreateContainer within sandbox \"bf5aea2fabc54cf3c28a2f4e1931e86bfcb677341051cc4253220a3add974556\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Feb 13 15:39:51.375194 containerd[1453]: time="2025-02-13T15:39:51.375141712Z" level=info msg="CreateContainer within sandbox \"bf5aea2fabc54cf3c28a2f4e1931e86bfcb677341051cc4253220a3add974556\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id 
\"a1a16767f1d21119fd09d6b1af6fc212df02307cf3cdce31804980e188e5c644\"" Feb 13 15:39:51.378580 containerd[1453]: time="2025-02-13T15:39:51.375796703Z" level=info msg="StartContainer for \"a1a16767f1d21119fd09d6b1af6fc212df02307cf3cdce31804980e188e5c644\"" Feb 13 15:39:51.412499 systemd[1]: run-containerd-runc-k8s.io-a1a16767f1d21119fd09d6b1af6fc212df02307cf3cdce31804980e188e5c644-runc.maWGsh.mount: Deactivated successfully. Feb 13 15:39:51.422678 systemd[1]: Started cri-containerd-a1a16767f1d21119fd09d6b1af6fc212df02307cf3cdce31804980e188e5c644.scope - libcontainer container a1a16767f1d21119fd09d6b1af6fc212df02307cf3cdce31804980e188e5c644. Feb 13 15:39:51.458912 containerd[1453]: time="2025-02-13T15:39:51.458764595Z" level=info msg="StartContainer for \"a1a16767f1d21119fd09d6b1af6fc212df02307cf3cdce31804980e188e5c644\" returns successfully" Feb 13 15:39:51.806473 systemd[1]: Started sshd@12-10.0.0.113:22-10.0.0.1:58706.service - OpenSSH per-connection server daemon (10.0.0.1:58706). Feb 13 15:39:51.870552 sshd[5829]: Accepted publickey for core from 10.0.0.1 port 58706 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY Feb 13 15:39:51.872621 sshd-session[5829]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:39:51.878419 systemd-logind[1429]: New session 13 of user core. Feb 13 15:39:51.884679 systemd[1]: Started session-13.scope - Session 13 of User core. 
Feb 13 15:39:51.898290 kubelet[2589]: I0213 15:39:51.898143 2589 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-67d55cd4f9-c8fqd" podStartSLOduration=15.998860099 podStartE2EDuration="19.898099957s" podCreationTimestamp="2025-02-13 15:39:32 +0000 UTC" firstStartedPulling="2025-02-13 15:39:47.440005267 +0000 UTC m=+42.035369117" lastFinishedPulling="2025-02-13 15:39:51.339245085 +0000 UTC m=+45.934608975" observedRunningTime="2025-02-13 15:39:51.897163394 +0000 UTC m=+46.492527284" watchObservedRunningTime="2025-02-13 15:39:51.898099957 +0000 UTC m=+46.493463807" Feb 13 15:39:51.902024 kubelet[2589]: I0213 15:39:51.898387 2589 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-655c6976bf-p7qqq" podStartSLOduration=18.771271205 podStartE2EDuration="20.898366129s" podCreationTimestamp="2025-02-13 15:39:31 +0000 UTC" firstStartedPulling="2025-02-13 15:39:47.254993949 +0000 UTC m=+41.850357839" lastFinishedPulling="2025-02-13 15:39:49.382088873 +0000 UTC m=+43.977452763" observedRunningTime="2025-02-13 15:39:49.898144769 +0000 UTC m=+44.493508659" watchObservedRunningTime="2025-02-13 15:39:51.898366129 +0000 UTC m=+46.493729979" Feb 13 15:39:52.120789 sshd[5837]: Connection closed by 10.0.0.1 port 58706 Feb 13 15:39:52.121544 sshd-session[5829]: pam_unix(sshd:session): session closed for user core Feb 13 15:39:52.131319 systemd[1]: sshd@12-10.0.0.113:22-10.0.0.1:58706.service: Deactivated successfully. Feb 13 15:39:52.134636 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 15:39:52.136438 systemd-logind[1429]: Session 13 logged out. Waiting for processes to exit. Feb 13 15:39:52.154857 systemd[1]: Started sshd@13-10.0.0.113:22-10.0.0.1:58716.service - OpenSSH per-connection server daemon (10.0.0.1:58716). Feb 13 15:39:52.156182 systemd-logind[1429]: Removed session 13. 
Feb 13 15:39:52.192784 sshd[5862]: Accepted publickey for core from 10.0.0.1 port 58716 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY Feb 13 15:39:52.194397 sshd-session[5862]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:39:52.199828 systemd-logind[1429]: New session 14 of user core. Feb 13 15:39:52.210669 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 15:39:52.543193 sshd[5864]: Connection closed by 10.0.0.1 port 58716 Feb 13 15:39:52.543007 sshd-session[5862]: pam_unix(sshd:session): session closed for user core Feb 13 15:39:52.555914 systemd[1]: sshd@13-10.0.0.113:22-10.0.0.1:58716.service: Deactivated successfully. Feb 13 15:39:52.557622 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 15:39:52.559644 systemd-logind[1429]: Session 14 logged out. Waiting for processes to exit. Feb 13 15:39:52.565728 systemd[1]: Started sshd@14-10.0.0.113:22-10.0.0.1:51754.service - OpenSSH per-connection server daemon (10.0.0.1:51754). Feb 13 15:39:52.566735 systemd-logind[1429]: Removed session 14. Feb 13 15:39:52.608626 sshd[5876]: Accepted publickey for core from 10.0.0.1 port 51754 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY Feb 13 15:39:52.610308 sshd-session[5876]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:39:52.615527 systemd-logind[1429]: New session 15 of user core. Feb 13 15:39:52.621602 systemd[1]: Started session-15.scope - Session 15 of User core. 
Feb 13 15:39:52.750173 containerd[1453]: time="2025-02-13T15:39:52.750123470Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:52.750756 containerd[1453]: time="2025-02-13T15:39:52.750587211Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Feb 13 15:39:52.761414 containerd[1453]: time="2025-02-13T15:39:52.761347461Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:52.765488 containerd[1453]: time="2025-02-13T15:39:52.764720934Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:52.765488 containerd[1453]: time="2025-02-13T15:39:52.765412646Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 1.425658297s" Feb 13 15:39:52.765488 containerd[1453]: time="2025-02-13T15:39:52.765439367Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Feb 13 15:39:52.768383 containerd[1453]: time="2025-02-13T15:39:52.768240934Z" level=info msg="CreateContainer within sandbox \"c9907ec4f6aa598ae49ef3bffd702cb1ef8ec3d9eb1c7d0cdf40fbfadb573be1\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 15:39:52.785337 containerd[1453]: time="2025-02-13T15:39:52.785290150Z" level=info msg="CreateContainer within 
sandbox \"c9907ec4f6aa598ae49ef3bffd702cb1ef8ec3d9eb1c7d0cdf40fbfadb573be1\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"856dd2014a2bd484e1c51ac282944d206b4d0dde05fd314377340872e74ad53c\"" Feb 13 15:39:52.787713 containerd[1453]: time="2025-02-13T15:39:52.787664418Z" level=info msg="StartContainer for \"856dd2014a2bd484e1c51ac282944d206b4d0dde05fd314377340872e74ad53c\"" Feb 13 15:39:52.821637 systemd[1]: Started cri-containerd-856dd2014a2bd484e1c51ac282944d206b4d0dde05fd314377340872e74ad53c.scope - libcontainer container 856dd2014a2bd484e1c51ac282944d206b4d0dde05fd314377340872e74ad53c. Feb 13 15:39:52.860632 containerd[1453]: time="2025-02-13T15:39:52.860577537Z" level=info msg="StartContainer for \"856dd2014a2bd484e1c51ac282944d206b4d0dde05fd314377340872e74ad53c\" returns successfully" Feb 13 15:39:52.861781 containerd[1453]: time="2025-02-13T15:39:52.861757831Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 13 15:39:53.534590 kubelet[2589]: I0213 15:39:53.533892 2589 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 15:39:54.142988 containerd[1453]: time="2025-02-13T15:39:54.142937875Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:54.143798 containerd[1453]: time="2025-02-13T15:39:54.143739190Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Feb 13 15:39:54.144616 containerd[1453]: time="2025-02-13T15:39:54.144583947Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:54.149049 containerd[1453]: time="2025-02-13T15:39:54.148837854Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:39:54.156344 containerd[1453]: time="2025-02-13T15:39:54.153804271Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.292009159s" Feb 13 15:39:54.156344 containerd[1453]: time="2025-02-13T15:39:54.154544784Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Feb 13 15:39:54.157114 containerd[1453]: time="2025-02-13T15:39:54.157045734Z" level=info msg="CreateContainer within sandbox \"c9907ec4f6aa598ae49ef3bffd702cb1ef8ec3d9eb1c7d0cdf40fbfadb573be1\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 13 15:39:54.178794 containerd[1453]: time="2025-02-13T15:39:54.178745085Z" level=info msg="CreateContainer within sandbox \"c9907ec4f6aa598ae49ef3bffd702cb1ef8ec3d9eb1c7d0cdf40fbfadb573be1\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"ad088c192eac5c89d9f52c03685c48b92f36ff29c0c2bed0361d8e8cbfd2b17f\"" Feb 13 15:39:54.181359 containerd[1453]: time="2025-02-13T15:39:54.179616963Z" level=info msg="StartContainer for \"ad088c192eac5c89d9f52c03685c48b92f36ff29c0c2bed0361d8e8cbfd2b17f\"" Feb 13 15:39:54.182024 sshd[5879]: Connection closed by 10.0.0.1 port 51754 Feb 13 15:39:54.182950 sshd-session[5876]: pam_unix(sshd:session): session closed for user core Feb 13 15:39:54.193671 systemd[1]: 
sshd@14-10.0.0.113:22-10.0.0.1:51754.service: Deactivated successfully. Feb 13 15:39:54.197877 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 15:39:54.200395 systemd-logind[1429]: Session 15 logged out. Waiting for processes to exit. Feb 13 15:39:54.213091 systemd[1]: Started sshd@15-10.0.0.113:22-10.0.0.1:51770.service - OpenSSH per-connection server daemon (10.0.0.1:51770). Feb 13 15:39:54.214741 systemd-logind[1429]: Removed session 15. Feb 13 15:39:54.257619 systemd[1]: Started cri-containerd-ad088c192eac5c89d9f52c03685c48b92f36ff29c0c2bed0361d8e8cbfd2b17f.scope - libcontainer container ad088c192eac5c89d9f52c03685c48b92f36ff29c0c2bed0361d8e8cbfd2b17f. Feb 13 15:39:54.269131 sshd[5955]: Accepted publickey for core from 10.0.0.1 port 51770 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY Feb 13 15:39:54.270566 sshd-session[5955]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:39:54.275785 systemd-logind[1429]: New session 16 of user core. Feb 13 15:39:54.289660 systemd[1]: Started session-16.scope - Session 16 of User core. 
Feb 13 15:39:54.301683 containerd[1453]: time="2025-02-13T15:39:54.301636431Z" level=info msg="StartContainer for \"ad088c192eac5c89d9f52c03685c48b92f36ff29c0c2bed0361d8e8cbfd2b17f\" returns successfully" Feb 13 15:39:54.580814 kubelet[2589]: I0213 15:39:54.580752 2589 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 13 15:39:54.583397 kubelet[2589]: I0213 15:39:54.583369 2589 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 13 15:39:54.615603 sshd[5975]: Connection closed by 10.0.0.1 port 51770 Feb 13 15:39:54.617906 sshd-session[5955]: pam_unix(sshd:session): session closed for user core Feb 13 15:39:54.625156 systemd[1]: sshd@15-10.0.0.113:22-10.0.0.1:51770.service: Deactivated successfully. Feb 13 15:39:54.629797 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 15:39:54.631767 systemd-logind[1429]: Session 16 logged out. Waiting for processes to exit. Feb 13 15:39:54.641771 systemd[1]: Started sshd@16-10.0.0.113:22-10.0.0.1:51778.service - OpenSSH per-connection server daemon (10.0.0.1:51778). Feb 13 15:39:54.643293 systemd-logind[1429]: Removed session 16. Feb 13 15:39:54.687150 sshd[5999]: Accepted publickey for core from 10.0.0.1 port 51778 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY Feb 13 15:39:54.688710 sshd-session[5999]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:39:54.694460 systemd-logind[1429]: New session 17 of user core. Feb 13 15:39:54.712638 systemd[1]: Started session-17.scope - Session 17 of User core. 
Feb 13 15:39:54.860040 sshd[6001]: Connection closed by 10.0.0.1 port 51778 Feb 13 15:39:54.860323 sshd-session[5999]: pam_unix(sshd:session): session closed for user core Feb 13 15:39:54.863075 systemd[1]: sshd@16-10.0.0.113:22-10.0.0.1:51778.service: Deactivated successfully. Feb 13 15:39:54.865104 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 15:39:54.866628 systemd-logind[1429]: Session 17 logged out. Waiting for processes to exit. Feb 13 15:39:54.867469 systemd-logind[1429]: Removed session 17. Feb 13 15:39:54.914165 kubelet[2589]: I0213 15:39:54.913252 2589 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-9jl5n" podStartSLOduration=16.254365196 podStartE2EDuration="22.913210159s" podCreationTimestamp="2025-02-13 15:39:32 +0000 UTC" firstStartedPulling="2025-02-13 15:39:47.49636529 +0000 UTC m=+42.091729140" lastFinishedPulling="2025-02-13 15:39:54.155210213 +0000 UTC m=+48.750574103" observedRunningTime="2025-02-13 15:39:54.913144076 +0000 UTC m=+49.508507966" watchObservedRunningTime="2025-02-13 15:39:54.913210159 +0000 UTC m=+49.508574049" Feb 13 15:39:59.872060 systemd[1]: Started sshd@17-10.0.0.113:22-10.0.0.1:51794.service - OpenSSH per-connection server daemon (10.0.0.1:51794). Feb 13 15:39:59.912696 sshd[6032]: Accepted publickey for core from 10.0.0.1 port 51794 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY Feb 13 15:39:59.913826 sshd-session[6032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:39:59.917248 systemd-logind[1429]: New session 18 of user core. Feb 13 15:39:59.928637 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 15:40:00.067152 sshd[6034]: Connection closed by 10.0.0.1 port 51794 Feb 13 15:40:00.067520 sshd-session[6032]: pam_unix(sshd:session): session closed for user core Feb 13 15:40:00.069828 systemd[1]: session-18.scope: Deactivated successfully. 
Feb 13 15:40:00.070630 systemd[1]: sshd@17-10.0.0.113:22-10.0.0.1:51794.service: Deactivated successfully. Feb 13 15:40:00.073666 systemd-logind[1429]: Session 18 logged out. Waiting for processes to exit. Feb 13 15:40:00.074904 systemd-logind[1429]: Removed session 18. Feb 13 15:40:05.078306 systemd[1]: Started sshd@18-10.0.0.113:22-10.0.0.1:56692.service - OpenSSH per-connection server daemon (10.0.0.1:56692). Feb 13 15:40:05.133663 sshd[6046]: Accepted publickey for core from 10.0.0.1 port 56692 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY Feb 13 15:40:05.134940 sshd-session[6046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:40:05.138506 systemd-logind[1429]: New session 19 of user core. Feb 13 15:40:05.149581 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 15:40:05.304684 sshd[6048]: Connection closed by 10.0.0.1 port 56692 Feb 13 15:40:05.305017 sshd-session[6046]: pam_unix(sshd:session): session closed for user core Feb 13 15:40:05.307402 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 15:40:05.308664 systemd[1]: sshd@18-10.0.0.113:22-10.0.0.1:56692.service: Deactivated successfully. Feb 13 15:40:05.311609 systemd-logind[1429]: Session 19 logged out. Waiting for processes to exit. Feb 13 15:40:05.312342 systemd-logind[1429]: Removed session 19. 
Feb 13 15:40:05.486685 containerd[1453]: time="2025-02-13T15:40:05.486554668Z" level=info msg="StopPodSandbox for \"323440efec752a3eb15eea898bfaffb5e63aebd2a734fd29578ba11bb1b33be8\"" Feb 13 15:40:05.486685 containerd[1453]: time="2025-02-13T15:40:05.486658992Z" level=info msg="TearDown network for sandbox \"323440efec752a3eb15eea898bfaffb5e63aebd2a734fd29578ba11bb1b33be8\" successfully" Feb 13 15:40:05.487187 containerd[1453]: time="2025-02-13T15:40:05.486670073Z" level=info msg="StopPodSandbox for \"323440efec752a3eb15eea898bfaffb5e63aebd2a734fd29578ba11bb1b33be8\" returns successfully" Feb 13 15:40:05.487607 containerd[1453]: time="2025-02-13T15:40:05.487577467Z" level=info msg="RemovePodSandbox for \"323440efec752a3eb15eea898bfaffb5e63aebd2a734fd29578ba11bb1b33be8\"" Feb 13 15:40:05.487607 containerd[1453]: time="2025-02-13T15:40:05.487606188Z" level=info msg="Forcibly stopping sandbox \"323440efec752a3eb15eea898bfaffb5e63aebd2a734fd29578ba11bb1b33be8\"" Feb 13 15:40:05.487692 containerd[1453]: time="2025-02-13T15:40:05.487671750Z" level=info msg="TearDown network for sandbox \"323440efec752a3eb15eea898bfaffb5e63aebd2a734fd29578ba11bb1b33be8\" successfully" Feb 13 15:40:05.496381 containerd[1453]: time="2025-02-13T15:40:05.496318076Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"323440efec752a3eb15eea898bfaffb5e63aebd2a734fd29578ba11bb1b33be8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:40:05.496526 containerd[1453]: time="2025-02-13T15:40:05.496430160Z" level=info msg="RemovePodSandbox \"323440efec752a3eb15eea898bfaffb5e63aebd2a734fd29578ba11bb1b33be8\" returns successfully" Feb 13 15:40:05.497029 containerd[1453]: time="2025-02-13T15:40:05.497004062Z" level=info msg="StopPodSandbox for \"50cf6045c8a1ddf321bab1e6e9a3f1fc9f3a297088fc52a6e468af940cbe1c10\"" Feb 13 15:40:05.497142 containerd[1453]: time="2025-02-13T15:40:05.497124626Z" level=info msg="TearDown network for sandbox \"50cf6045c8a1ddf321bab1e6e9a3f1fc9f3a297088fc52a6e468af940cbe1c10\" successfully" Feb 13 15:40:05.497142 containerd[1453]: time="2025-02-13T15:40:05.497139307Z" level=info msg="StopPodSandbox for \"50cf6045c8a1ddf321bab1e6e9a3f1fc9f3a297088fc52a6e468af940cbe1c10\" returns successfully" Feb 13 15:40:05.497548 containerd[1453]: time="2025-02-13T15:40:05.497522401Z" level=info msg="RemovePodSandbox for \"50cf6045c8a1ddf321bab1e6e9a3f1fc9f3a297088fc52a6e468af940cbe1c10\"" Feb 13 15:40:05.497625 containerd[1453]: time="2025-02-13T15:40:05.497552322Z" level=info msg="Forcibly stopping sandbox \"50cf6045c8a1ddf321bab1e6e9a3f1fc9f3a297088fc52a6e468af940cbe1c10\"" Feb 13 15:40:05.497625 containerd[1453]: time="2025-02-13T15:40:05.497617245Z" level=info msg="TearDown network for sandbox \"50cf6045c8a1ddf321bab1e6e9a3f1fc9f3a297088fc52a6e468af940cbe1c10\" successfully" Feb 13 15:40:05.510683 containerd[1453]: time="2025-02-13T15:40:05.510645175Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"50cf6045c8a1ddf321bab1e6e9a3f1fc9f3a297088fc52a6e468af940cbe1c10\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:40:05.510761 containerd[1453]: time="2025-02-13T15:40:05.510699897Z" level=info msg="RemovePodSandbox \"50cf6045c8a1ddf321bab1e6e9a3f1fc9f3a297088fc52a6e468af940cbe1c10\" returns successfully" Feb 13 15:40:05.511063 containerd[1453]: time="2025-02-13T15:40:05.511041510Z" level=info msg="StopPodSandbox for \"ad5a92ae00ae485f6dedd5b8be0955d3e9faa55341d054bfa14ae18b47ab5efd\"" Feb 13 15:40:05.511145 containerd[1453]: time="2025-02-13T15:40:05.511129153Z" level=info msg="TearDown network for sandbox \"ad5a92ae00ae485f6dedd5b8be0955d3e9faa55341d054bfa14ae18b47ab5efd\" successfully" Feb 13 15:40:05.511185 containerd[1453]: time="2025-02-13T15:40:05.511143554Z" level=info msg="StopPodSandbox for \"ad5a92ae00ae485f6dedd5b8be0955d3e9faa55341d054bfa14ae18b47ab5efd\" returns successfully" Feb 13 15:40:05.511480 containerd[1453]: time="2025-02-13T15:40:05.511458846Z" level=info msg="RemovePodSandbox for \"ad5a92ae00ae485f6dedd5b8be0955d3e9faa55341d054bfa14ae18b47ab5efd\"" Feb 13 15:40:05.511524 containerd[1453]: time="2025-02-13T15:40:05.511486647Z" level=info msg="Forcibly stopping sandbox \"ad5a92ae00ae485f6dedd5b8be0955d3e9faa55341d054bfa14ae18b47ab5efd\"" Feb 13 15:40:05.511562 containerd[1453]: time="2025-02-13T15:40:05.511550849Z" level=info msg="TearDown network for sandbox \"ad5a92ae00ae485f6dedd5b8be0955d3e9faa55341d054bfa14ae18b47ab5efd\" successfully" Feb 13 15:40:05.514138 containerd[1453]: time="2025-02-13T15:40:05.514094585Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ad5a92ae00ae485f6dedd5b8be0955d3e9faa55341d054bfa14ae18b47ab5efd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:40:05.514187 containerd[1453]: time="2025-02-13T15:40:05.514154627Z" level=info msg="RemovePodSandbox \"ad5a92ae00ae485f6dedd5b8be0955d3e9faa55341d054bfa14ae18b47ab5efd\" returns successfully" Feb 13 15:40:05.514660 containerd[1453]: time="2025-02-13T15:40:05.514495120Z" level=info msg="StopPodSandbox for \"c0109edb01872ea6ddbb1270e554366cbd4b5a05c3fb263f4739b76a9f131107\"" Feb 13 15:40:05.514660 containerd[1453]: time="2025-02-13T15:40:05.514588163Z" level=info msg="TearDown network for sandbox \"c0109edb01872ea6ddbb1270e554366cbd4b5a05c3fb263f4739b76a9f131107\" successfully" Feb 13 15:40:05.514660 containerd[1453]: time="2025-02-13T15:40:05.514598204Z" level=info msg="StopPodSandbox for \"c0109edb01872ea6ddbb1270e554366cbd4b5a05c3fb263f4739b76a9f131107\" returns successfully" Feb 13 15:40:05.515137 containerd[1453]: time="2025-02-13T15:40:05.515101423Z" level=info msg="RemovePodSandbox for \"c0109edb01872ea6ddbb1270e554366cbd4b5a05c3fb263f4739b76a9f131107\"" Feb 13 15:40:05.515196 containerd[1453]: time="2025-02-13T15:40:05.515140744Z" level=info msg="Forcibly stopping sandbox \"c0109edb01872ea6ddbb1270e554366cbd4b5a05c3fb263f4739b76a9f131107\"" Feb 13 15:40:05.515221 containerd[1453]: time="2025-02-13T15:40:05.515211307Z" level=info msg="TearDown network for sandbox \"c0109edb01872ea6ddbb1270e554366cbd4b5a05c3fb263f4739b76a9f131107\" successfully" Feb 13 15:40:05.517676 containerd[1453]: time="2025-02-13T15:40:05.517643998Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c0109edb01872ea6ddbb1270e554366cbd4b5a05c3fb263f4739b76a9f131107\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:40:05.517722 containerd[1453]: time="2025-02-13T15:40:05.517696440Z" level=info msg="RemovePodSandbox \"c0109edb01872ea6ddbb1270e554366cbd4b5a05c3fb263f4739b76a9f131107\" returns successfully" Feb 13 15:40:05.517967 containerd[1453]: time="2025-02-13T15:40:05.517950370Z" level=info msg="StopPodSandbox for \"6eb389d8019cf13e9ce3bc3cf2e15a9d423f6abd94bc80f011fe56a3be87a4b4\"" Feb 13 15:40:05.518044 containerd[1453]: time="2025-02-13T15:40:05.518028133Z" level=info msg="TearDown network for sandbox \"6eb389d8019cf13e9ce3bc3cf2e15a9d423f6abd94bc80f011fe56a3be87a4b4\" successfully" Feb 13 15:40:05.518075 containerd[1453]: time="2025-02-13T15:40:05.518043413Z" level=info msg="StopPodSandbox for \"6eb389d8019cf13e9ce3bc3cf2e15a9d423f6abd94bc80f011fe56a3be87a4b4\" returns successfully" Feb 13 15:40:05.518357 containerd[1453]: time="2025-02-13T15:40:05.518306903Z" level=info msg="RemovePodSandbox for \"6eb389d8019cf13e9ce3bc3cf2e15a9d423f6abd94bc80f011fe56a3be87a4b4\"" Feb 13 15:40:05.518357 containerd[1453]: time="2025-02-13T15:40:05.518332504Z" level=info msg="Forcibly stopping sandbox \"6eb389d8019cf13e9ce3bc3cf2e15a9d423f6abd94bc80f011fe56a3be87a4b4\"" Feb 13 15:40:05.518508 containerd[1453]: time="2025-02-13T15:40:05.518398747Z" level=info msg="TearDown network for sandbox \"6eb389d8019cf13e9ce3bc3cf2e15a9d423f6abd94bc80f011fe56a3be87a4b4\" successfully" Feb 13 15:40:05.520901 containerd[1453]: time="2025-02-13T15:40:05.520856439Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6eb389d8019cf13e9ce3bc3cf2e15a9d423f6abd94bc80f011fe56a3be87a4b4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:40:05.521187 containerd[1453]: time="2025-02-13T15:40:05.520907921Z" level=info msg="RemovePodSandbox \"6eb389d8019cf13e9ce3bc3cf2e15a9d423f6abd94bc80f011fe56a3be87a4b4\" returns successfully" Feb 13 15:40:05.525519 containerd[1453]: time="2025-02-13T15:40:05.525470813Z" level=info msg="StopPodSandbox for \"fc9238e716f505d3af42ce11bca5e656f52405b452391535ed40e201ac9642d7\"" Feb 13 15:40:05.525755 containerd[1453]: time="2025-02-13T15:40:05.525621059Z" level=info msg="TearDown network for sandbox \"fc9238e716f505d3af42ce11bca5e656f52405b452391535ed40e201ac9642d7\" successfully" Feb 13 15:40:05.525755 containerd[1453]: time="2025-02-13T15:40:05.525638539Z" level=info msg="StopPodSandbox for \"fc9238e716f505d3af42ce11bca5e656f52405b452391535ed40e201ac9642d7\" returns successfully" Feb 13 15:40:05.525944 containerd[1453]: time="2025-02-13T15:40:05.525916910Z" level=info msg="RemovePodSandbox for \"fc9238e716f505d3af42ce11bca5e656f52405b452391535ed40e201ac9642d7\"" Feb 13 15:40:05.525977 containerd[1453]: time="2025-02-13T15:40:05.525949151Z" level=info msg="Forcibly stopping sandbox \"fc9238e716f505d3af42ce11bca5e656f52405b452391535ed40e201ac9642d7\"" Feb 13 15:40:05.528463 containerd[1453]: time="2025-02-13T15:40:05.526032874Z" level=info msg="TearDown network for sandbox \"fc9238e716f505d3af42ce11bca5e656f52405b452391535ed40e201ac9642d7\" successfully" Feb 13 15:40:05.534550 containerd[1453]: time="2025-02-13T15:40:05.534513393Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fc9238e716f505d3af42ce11bca5e656f52405b452391535ed40e201ac9642d7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:40:05.534681 containerd[1453]: time="2025-02-13T15:40:05.534663359Z" level=info msg="RemovePodSandbox \"fc9238e716f505d3af42ce11bca5e656f52405b452391535ed40e201ac9642d7\" returns successfully" Feb 13 15:40:05.535256 containerd[1453]: time="2025-02-13T15:40:05.535232140Z" level=info msg="StopPodSandbox for \"2782e5c6315792cdee1e8b9d3b9b6dea34d36948f0bc75d3eaf32e17a3b39779\"" Feb 13 15:40:05.535343 containerd[1453]: time="2025-02-13T15:40:05.535325544Z" level=info msg="TearDown network for sandbox \"2782e5c6315792cdee1e8b9d3b9b6dea34d36948f0bc75d3eaf32e17a3b39779\" successfully" Feb 13 15:40:05.535343 containerd[1453]: time="2025-02-13T15:40:05.535340544Z" level=info msg="StopPodSandbox for \"2782e5c6315792cdee1e8b9d3b9b6dea34d36948f0bc75d3eaf32e17a3b39779\" returns successfully" Feb 13 15:40:05.535665 containerd[1453]: time="2025-02-13T15:40:05.535640916Z" level=info msg="RemovePodSandbox for \"2782e5c6315792cdee1e8b9d3b9b6dea34d36948f0bc75d3eaf32e17a3b39779\"" Feb 13 15:40:05.535713 containerd[1453]: time="2025-02-13T15:40:05.535666437Z" level=info msg="Forcibly stopping sandbox \"2782e5c6315792cdee1e8b9d3b9b6dea34d36948f0bc75d3eaf32e17a3b39779\"" Feb 13 15:40:05.535738 containerd[1453]: time="2025-02-13T15:40:05.535724439Z" level=info msg="TearDown network for sandbox \"2782e5c6315792cdee1e8b9d3b9b6dea34d36948f0bc75d3eaf32e17a3b39779\" successfully" Feb 13 15:40:05.540526 containerd[1453]: time="2025-02-13T15:40:05.540492218Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2782e5c6315792cdee1e8b9d3b9b6dea34d36948f0bc75d3eaf32e17a3b39779\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:40:05.540606 containerd[1453]: time="2025-02-13T15:40:05.540554341Z" level=info msg="RemovePodSandbox \"2782e5c6315792cdee1e8b9d3b9b6dea34d36948f0bc75d3eaf32e17a3b39779\" returns successfully" Feb 13 15:40:05.541130 containerd[1453]: time="2025-02-13T15:40:05.540868713Z" level=info msg="StopPodSandbox for \"008c18afaab252b50ccfad53679086b78620acbae6366a596e3e004153702937\"" Feb 13 15:40:05.541130 containerd[1453]: time="2025-02-13T15:40:05.540954156Z" level=info msg="TearDown network for sandbox \"008c18afaab252b50ccfad53679086b78620acbae6366a596e3e004153702937\" successfully" Feb 13 15:40:05.541130 containerd[1453]: time="2025-02-13T15:40:05.540963716Z" level=info msg="StopPodSandbox for \"008c18afaab252b50ccfad53679086b78620acbae6366a596e3e004153702937\" returns successfully" Feb 13 15:40:05.541518 containerd[1453]: time="2025-02-13T15:40:05.541479016Z" level=info msg="RemovePodSandbox for \"008c18afaab252b50ccfad53679086b78620acbae6366a596e3e004153702937\"" Feb 13 15:40:05.541572 containerd[1453]: time="2025-02-13T15:40:05.541520937Z" level=info msg="Forcibly stopping sandbox \"008c18afaab252b50ccfad53679086b78620acbae6366a596e3e004153702937\"" Feb 13 15:40:05.541595 containerd[1453]: time="2025-02-13T15:40:05.541586460Z" level=info msg="TearDown network for sandbox \"008c18afaab252b50ccfad53679086b78620acbae6366a596e3e004153702937\" successfully" Feb 13 15:40:05.544023 containerd[1453]: time="2025-02-13T15:40:05.543984910Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"008c18afaab252b50ccfad53679086b78620acbae6366a596e3e004153702937\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:40:05.544069 containerd[1453]: time="2025-02-13T15:40:05.544045992Z" level=info msg="RemovePodSandbox \"008c18afaab252b50ccfad53679086b78620acbae6366a596e3e004153702937\" returns successfully" Feb 13 15:40:05.544414 containerd[1453]: time="2025-02-13T15:40:05.544387525Z" level=info msg="StopPodSandbox for \"d86444dd64a16082496dff8bcc4382e23ca50d04c358f741bc26b8a391d6c550\"" Feb 13 15:40:05.544939 containerd[1453]: time="2025-02-13T15:40:05.544732458Z" level=info msg="TearDown network for sandbox \"d86444dd64a16082496dff8bcc4382e23ca50d04c358f741bc26b8a391d6c550\" successfully" Feb 13 15:40:05.544939 containerd[1453]: time="2025-02-13T15:40:05.544752099Z" level=info msg="StopPodSandbox for \"d86444dd64a16082496dff8bcc4382e23ca50d04c358f741bc26b8a391d6c550\" returns successfully" Feb 13 15:40:05.545038 containerd[1453]: time="2025-02-13T15:40:05.544993788Z" level=info msg="RemovePodSandbox for \"d86444dd64a16082496dff8bcc4382e23ca50d04c358f741bc26b8a391d6c550\"" Feb 13 15:40:05.545038 containerd[1453]: time="2025-02-13T15:40:05.545012709Z" level=info msg="Forcibly stopping sandbox \"d86444dd64a16082496dff8bcc4382e23ca50d04c358f741bc26b8a391d6c550\"" Feb 13 15:40:05.545085 containerd[1453]: time="2025-02-13T15:40:05.545071671Z" level=info msg="TearDown network for sandbox \"d86444dd64a16082496dff8bcc4382e23ca50d04c358f741bc26b8a391d6c550\" successfully" Feb 13 15:40:05.547268 containerd[1453]: time="2025-02-13T15:40:05.547219712Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d86444dd64a16082496dff8bcc4382e23ca50d04c358f741bc26b8a391d6c550\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:40:05.547268 containerd[1453]: time="2025-02-13T15:40:05.547272874Z" level=info msg="RemovePodSandbox \"d86444dd64a16082496dff8bcc4382e23ca50d04c358f741bc26b8a391d6c550\" returns successfully" Feb 13 15:40:05.547736 containerd[1453]: time="2025-02-13T15:40:05.547700850Z" level=info msg="StopPodSandbox for \"993517d1fc77735746720ed6d1052b7f574a391e2aa3dd3c4e1ac176cb243e90\"" Feb 13 15:40:05.547809 containerd[1453]: time="2025-02-13T15:40:05.547788653Z" level=info msg="TearDown network for sandbox \"993517d1fc77735746720ed6d1052b7f574a391e2aa3dd3c4e1ac176cb243e90\" successfully" Feb 13 15:40:05.547809 containerd[1453]: time="2025-02-13T15:40:05.547803134Z" level=info msg="StopPodSandbox for \"993517d1fc77735746720ed6d1052b7f574a391e2aa3dd3c4e1ac176cb243e90\" returns successfully" Feb 13 15:40:05.548111 containerd[1453]: time="2025-02-13T15:40:05.548076864Z" level=info msg="RemovePodSandbox for \"993517d1fc77735746720ed6d1052b7f574a391e2aa3dd3c4e1ac176cb243e90\"" Feb 13 15:40:05.548152 containerd[1453]: time="2025-02-13T15:40:05.548116665Z" level=info msg="Forcibly stopping sandbox \"993517d1fc77735746720ed6d1052b7f574a391e2aa3dd3c4e1ac176cb243e90\"" Feb 13 15:40:05.548196 containerd[1453]: time="2025-02-13T15:40:05.548180628Z" level=info msg="TearDown network for sandbox \"993517d1fc77735746720ed6d1052b7f574a391e2aa3dd3c4e1ac176cb243e90\" successfully" Feb 13 15:40:05.554141 containerd[1453]: time="2025-02-13T15:40:05.554090690Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"993517d1fc77735746720ed6d1052b7f574a391e2aa3dd3c4e1ac176cb243e90\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:40:05.554206 containerd[1453]: time="2025-02-13T15:40:05.554156853Z" level=info msg="RemovePodSandbox \"993517d1fc77735746720ed6d1052b7f574a391e2aa3dd3c4e1ac176cb243e90\" returns successfully" Feb 13 15:40:05.554528 containerd[1453]: time="2025-02-13T15:40:05.554500306Z" level=info msg="StopPodSandbox for \"b40c7454fd2776f6e99fb15ae912eedbe2585ed2954a98ab18eca3781785c734\"" Feb 13 15:40:05.554605 containerd[1453]: time="2025-02-13T15:40:05.554587429Z" level=info msg="TearDown network for sandbox \"b40c7454fd2776f6e99fb15ae912eedbe2585ed2954a98ab18eca3781785c734\" successfully" Feb 13 15:40:05.554636 containerd[1453]: time="2025-02-13T15:40:05.554604270Z" level=info msg="StopPodSandbox for \"b40c7454fd2776f6e99fb15ae912eedbe2585ed2954a98ab18eca3781785c734\" returns successfully" Feb 13 15:40:05.554820 containerd[1453]: time="2025-02-13T15:40:05.554800397Z" level=info msg="RemovePodSandbox for \"b40c7454fd2776f6e99fb15ae912eedbe2585ed2954a98ab18eca3781785c734\"" Feb 13 15:40:05.554852 containerd[1453]: time="2025-02-13T15:40:05.554824478Z" level=info msg="Forcibly stopping sandbox \"b40c7454fd2776f6e99fb15ae912eedbe2585ed2954a98ab18eca3781785c734\"" Feb 13 15:40:05.554889 containerd[1453]: time="2025-02-13T15:40:05.554875720Z" level=info msg="TearDown network for sandbox \"b40c7454fd2776f6e99fb15ae912eedbe2585ed2954a98ab18eca3781785c734\" successfully" Feb 13 15:40:05.557070 containerd[1453]: time="2025-02-13T15:40:05.557030601Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b40c7454fd2776f6e99fb15ae912eedbe2585ed2954a98ab18eca3781785c734\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:40:05.557127 containerd[1453]: time="2025-02-13T15:40:05.557081163Z" level=info msg="RemovePodSandbox \"b40c7454fd2776f6e99fb15ae912eedbe2585ed2954a98ab18eca3781785c734\" returns successfully" Feb 13 15:40:05.557728 containerd[1453]: time="2025-02-13T15:40:05.557569381Z" level=info msg="StopPodSandbox for \"04e549bcbe5ef51d7da81536fa57c286356e230b0a4a8de09f3b8b24843dcce6\"" Feb 13 15:40:05.557728 containerd[1453]: time="2025-02-13T15:40:05.557655744Z" level=info msg="TearDown network for sandbox \"04e549bcbe5ef51d7da81536fa57c286356e230b0a4a8de09f3b8b24843dcce6\" successfully" Feb 13 15:40:05.557728 containerd[1453]: time="2025-02-13T15:40:05.557665945Z" level=info msg="StopPodSandbox for \"04e549bcbe5ef51d7da81536fa57c286356e230b0a4a8de09f3b8b24843dcce6\" returns successfully" Feb 13 15:40:05.558015 containerd[1453]: time="2025-02-13T15:40:05.557967636Z" level=info msg="RemovePodSandbox for \"04e549bcbe5ef51d7da81536fa57c286356e230b0a4a8de09f3b8b24843dcce6\"" Feb 13 15:40:05.559182 containerd[1453]: time="2025-02-13T15:40:05.558085641Z" level=info msg="Forcibly stopping sandbox \"04e549bcbe5ef51d7da81536fa57c286356e230b0a4a8de09f3b8b24843dcce6\"" Feb 13 15:40:05.559182 containerd[1453]: time="2025-02-13T15:40:05.558170804Z" level=info msg="TearDown network for sandbox \"04e549bcbe5ef51d7da81536fa57c286356e230b0a4a8de09f3b8b24843dcce6\" successfully" Feb 13 15:40:05.560651 containerd[1453]: time="2025-02-13T15:40:05.560621776Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"04e549bcbe5ef51d7da81536fa57c286356e230b0a4a8de09f3b8b24843dcce6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:40:05.560815 containerd[1453]: time="2025-02-13T15:40:05.560796383Z" level=info msg="RemovePodSandbox \"04e549bcbe5ef51d7da81536fa57c286356e230b0a4a8de09f3b8b24843dcce6\" returns successfully" Feb 13 15:40:05.561176 containerd[1453]: time="2025-02-13T15:40:05.561149076Z" level=info msg="StopPodSandbox for \"c7b8464e6806d7515d6d160d433c08824e6e2de7760100550e98ff9d2b183f53\"" Feb 13 15:40:05.561253 containerd[1453]: time="2025-02-13T15:40:05.561231759Z" level=info msg="TearDown network for sandbox \"c7b8464e6806d7515d6d160d433c08824e6e2de7760100550e98ff9d2b183f53\" successfully" Feb 13 15:40:05.561298 containerd[1453]: time="2025-02-13T15:40:05.561245119Z" level=info msg="StopPodSandbox for \"c7b8464e6806d7515d6d160d433c08824e6e2de7760100550e98ff9d2b183f53\" returns successfully" Feb 13 15:40:05.561684 containerd[1453]: time="2025-02-13T15:40:05.561662975Z" level=info msg="RemovePodSandbox for \"c7b8464e6806d7515d6d160d433c08824e6e2de7760100550e98ff9d2b183f53\"" Feb 13 15:40:05.561795 containerd[1453]: time="2025-02-13T15:40:05.561779940Z" level=info msg="Forcibly stopping sandbox \"c7b8464e6806d7515d6d160d433c08824e6e2de7760100550e98ff9d2b183f53\"" Feb 13 15:40:05.561913 containerd[1453]: time="2025-02-13T15:40:05.561897624Z" level=info msg="TearDown network for sandbox \"c7b8464e6806d7515d6d160d433c08824e6e2de7760100550e98ff9d2b183f53\" successfully" Feb 13 15:40:05.564194 containerd[1453]: time="2025-02-13T15:40:05.564165549Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c7b8464e6806d7515d6d160d433c08824e6e2de7760100550e98ff9d2b183f53\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:40:05.564305 containerd[1453]: time="2025-02-13T15:40:05.564290114Z" level=info msg="RemovePodSandbox \"c7b8464e6806d7515d6d160d433c08824e6e2de7760100550e98ff9d2b183f53\" returns successfully" Feb 13 15:40:05.564667 containerd[1453]: time="2025-02-13T15:40:05.564646367Z" level=info msg="StopPodSandbox for \"ee6732b5b77113a6205089d42119dad4c5eee465a9929a1fc3fc3e114a8fca23\"" Feb 13 15:40:05.564886 containerd[1453]: time="2025-02-13T15:40:05.564868936Z" level=info msg="TearDown network for sandbox \"ee6732b5b77113a6205089d42119dad4c5eee465a9929a1fc3fc3e114a8fca23\" successfully" Feb 13 15:40:05.564946 containerd[1453]: time="2025-02-13T15:40:05.564933378Z" level=info msg="StopPodSandbox for \"ee6732b5b77113a6205089d42119dad4c5eee465a9929a1fc3fc3e114a8fca23\" returns successfully" Feb 13 15:40:05.565211 containerd[1453]: time="2025-02-13T15:40:05.565190828Z" level=info msg="RemovePodSandbox for \"ee6732b5b77113a6205089d42119dad4c5eee465a9929a1fc3fc3e114a8fca23\"" Feb 13 15:40:05.565289 containerd[1453]: time="2025-02-13T15:40:05.565276271Z" level=info msg="Forcibly stopping sandbox \"ee6732b5b77113a6205089d42119dad4c5eee465a9929a1fc3fc3e114a8fca23\"" Feb 13 15:40:05.565395 containerd[1453]: time="2025-02-13T15:40:05.565380755Z" level=info msg="TearDown network for sandbox \"ee6732b5b77113a6205089d42119dad4c5eee465a9929a1fc3fc3e114a8fca23\" successfully" Feb 13 15:40:05.567648 containerd[1453]: time="2025-02-13T15:40:05.567613119Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ee6732b5b77113a6205089d42119dad4c5eee465a9929a1fc3fc3e114a8fca23\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:40:05.567805 containerd[1453]: time="2025-02-13T15:40:05.567785646Z" level=info msg="RemovePodSandbox \"ee6732b5b77113a6205089d42119dad4c5eee465a9929a1fc3fc3e114a8fca23\" returns successfully" Feb 13 15:40:05.568165 containerd[1453]: time="2025-02-13T15:40:05.568140939Z" level=info msg="StopPodSandbox for \"83168c2b6aaee4a8905d06ddc03e782efd545d713d6d0b790b340a1e7610b3b8\"" Feb 13 15:40:05.568237 containerd[1453]: time="2025-02-13T15:40:05.568220622Z" level=info msg="TearDown network for sandbox \"83168c2b6aaee4a8905d06ddc03e782efd545d713d6d0b790b340a1e7610b3b8\" successfully" Feb 13 15:40:05.568267 containerd[1453]: time="2025-02-13T15:40:05.568236143Z" level=info msg="StopPodSandbox for \"83168c2b6aaee4a8905d06ddc03e782efd545d713d6d0b790b340a1e7610b3b8\" returns successfully" Feb 13 15:40:05.569533 containerd[1453]: time="2025-02-13T15:40:05.568482752Z" level=info msg="RemovePodSandbox for \"83168c2b6aaee4a8905d06ddc03e782efd545d713d6d0b790b340a1e7610b3b8\"" Feb 13 15:40:05.569533 containerd[1453]: time="2025-02-13T15:40:05.568507993Z" level=info msg="Forcibly stopping sandbox \"83168c2b6aaee4a8905d06ddc03e782efd545d713d6d0b790b340a1e7610b3b8\"" Feb 13 15:40:05.569533 containerd[1453]: time="2025-02-13T15:40:05.568566995Z" level=info msg="TearDown network for sandbox \"83168c2b6aaee4a8905d06ddc03e782efd545d713d6d0b790b340a1e7610b3b8\" successfully" Feb 13 15:40:05.570694 containerd[1453]: time="2025-02-13T15:40:05.570665634Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"83168c2b6aaee4a8905d06ddc03e782efd545d713d6d0b790b340a1e7610b3b8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:40:05.570831 containerd[1453]: time="2025-02-13T15:40:05.570813120Z" level=info msg="RemovePodSandbox \"83168c2b6aaee4a8905d06ddc03e782efd545d713d6d0b790b340a1e7610b3b8\" returns successfully" Feb 13 15:40:05.571170 containerd[1453]: time="2025-02-13T15:40:05.571145132Z" level=info msg="StopPodSandbox for \"39a88ace6d7473b18090855153397f0fb6353c83591be1df80145ca5ac8b441d\"" Feb 13 15:40:05.571246 containerd[1453]: time="2025-02-13T15:40:05.571230135Z" level=info msg="TearDown network for sandbox \"39a88ace6d7473b18090855153397f0fb6353c83591be1df80145ca5ac8b441d\" successfully" Feb 13 15:40:05.571246 containerd[1453]: time="2025-02-13T15:40:05.571244336Z" level=info msg="StopPodSandbox for \"39a88ace6d7473b18090855153397f0fb6353c83591be1df80145ca5ac8b441d\" returns successfully" Feb 13 15:40:05.571895 containerd[1453]: time="2025-02-13T15:40:05.571515706Z" level=info msg="RemovePodSandbox for \"39a88ace6d7473b18090855153397f0fb6353c83591be1df80145ca5ac8b441d\"" Feb 13 15:40:05.571895 containerd[1453]: time="2025-02-13T15:40:05.571544747Z" level=info msg="Forcibly stopping sandbox \"39a88ace6d7473b18090855153397f0fb6353c83591be1df80145ca5ac8b441d\"" Feb 13 15:40:05.571895 containerd[1453]: time="2025-02-13T15:40:05.571606869Z" level=info msg="TearDown network for sandbox \"39a88ace6d7473b18090855153397f0fb6353c83591be1df80145ca5ac8b441d\" successfully" Feb 13 15:40:05.574190 containerd[1453]: time="2025-02-13T15:40:05.574159005Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"39a88ace6d7473b18090855153397f0fb6353c83591be1df80145ca5ac8b441d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:40:05.574305 containerd[1453]: time="2025-02-13T15:40:05.574287890Z" level=info msg="RemovePodSandbox \"39a88ace6d7473b18090855153397f0fb6353c83591be1df80145ca5ac8b441d\" returns successfully" Feb 13 15:40:05.574944 containerd[1453]: time="2025-02-13T15:40:05.574904514Z" level=info msg="StopPodSandbox for \"3bf6fab8387c3ed056be1d602ef5aaa086404c97feb59ddfe547152a64bb840b\"" Feb 13 15:40:05.575200 containerd[1453]: time="2025-02-13T15:40:05.575098681Z" level=info msg="TearDown network for sandbox \"3bf6fab8387c3ed056be1d602ef5aaa086404c97feb59ddfe547152a64bb840b\" successfully" Feb 13 15:40:05.575200 containerd[1453]: time="2025-02-13T15:40:05.575131242Z" level=info msg="StopPodSandbox for \"3bf6fab8387c3ed056be1d602ef5aaa086404c97feb59ddfe547152a64bb840b\" returns successfully" Feb 13 15:40:05.575393 containerd[1453]: time="2025-02-13T15:40:05.575364971Z" level=info msg="RemovePodSandbox for \"3bf6fab8387c3ed056be1d602ef5aaa086404c97feb59ddfe547152a64bb840b\"" Feb 13 15:40:05.575428 containerd[1453]: time="2025-02-13T15:40:05.575399732Z" level=info msg="Forcibly stopping sandbox \"3bf6fab8387c3ed056be1d602ef5aaa086404c97feb59ddfe547152a64bb840b\"" Feb 13 15:40:05.575496 containerd[1453]: time="2025-02-13T15:40:05.575473615Z" level=info msg="TearDown network for sandbox \"3bf6fab8387c3ed056be1d602ef5aaa086404c97feb59ddfe547152a64bb840b\" successfully" Feb 13 15:40:05.577765 containerd[1453]: time="2025-02-13T15:40:05.577732980Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3bf6fab8387c3ed056be1d602ef5aaa086404c97feb59ddfe547152a64bb840b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:40:05.577840 containerd[1453]: time="2025-02-13T15:40:05.577785902Z" level=info msg="RemovePodSandbox \"3bf6fab8387c3ed056be1d602ef5aaa086404c97feb59ddfe547152a64bb840b\" returns successfully" Feb 13 15:40:05.578091 containerd[1453]: time="2025-02-13T15:40:05.578064112Z" level=info msg="StopPodSandbox for \"cda662325bbcccb4b3b73cb9c1a4ba575d4900e6a01847f7536483bed3456eb0\"" Feb 13 15:40:05.578390 containerd[1453]: time="2025-02-13T15:40:05.578276000Z" level=info msg="TearDown network for sandbox \"cda662325bbcccb4b3b73cb9c1a4ba575d4900e6a01847f7536483bed3456eb0\" successfully" Feb 13 15:40:05.578390 containerd[1453]: time="2025-02-13T15:40:05.578310522Z" level=info msg="StopPodSandbox for \"cda662325bbcccb4b3b73cb9c1a4ba575d4900e6a01847f7536483bed3456eb0\" returns successfully" Feb 13 15:40:05.578589 containerd[1453]: time="2025-02-13T15:40:05.578526170Z" level=info msg="RemovePodSandbox for \"cda662325bbcccb4b3b73cb9c1a4ba575d4900e6a01847f7536483bed3456eb0\"" Feb 13 15:40:05.578589 containerd[1453]: time="2025-02-13T15:40:05.578549611Z" level=info msg="Forcibly stopping sandbox \"cda662325bbcccb4b3b73cb9c1a4ba575d4900e6a01847f7536483bed3456eb0\"" Feb 13 15:40:05.578678 containerd[1453]: time="2025-02-13T15:40:05.578617053Z" level=info msg="TearDown network for sandbox \"cda662325bbcccb4b3b73cb9c1a4ba575d4900e6a01847f7536483bed3456eb0\" successfully" Feb 13 15:40:05.580827 containerd[1453]: time="2025-02-13T15:40:05.580776655Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cda662325bbcccb4b3b73cb9c1a4ba575d4900e6a01847f7536483bed3456eb0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:40:05.580891 containerd[1453]: time="2025-02-13T15:40:05.580842377Z" level=info msg="RemovePodSandbox \"cda662325bbcccb4b3b73cb9c1a4ba575d4900e6a01847f7536483bed3456eb0\" returns successfully" Feb 13 15:40:05.581146 containerd[1453]: time="2025-02-13T15:40:05.581120147Z" level=info msg="StopPodSandbox for \"5877426208bb0ec6f3b0bd6273d2a61f178cd37093bfdab85cd100b691fb5424\"" Feb 13 15:40:05.581456 containerd[1453]: time="2025-02-13T15:40:05.581374037Z" level=info msg="TearDown network for sandbox \"5877426208bb0ec6f3b0bd6273d2a61f178cd37093bfdab85cd100b691fb5424\" successfully" Feb 13 15:40:05.581456 containerd[1453]: time="2025-02-13T15:40:05.581392598Z" level=info msg="StopPodSandbox for \"5877426208bb0ec6f3b0bd6273d2a61f178cd37093bfdab85cd100b691fb5424\" returns successfully" Feb 13 15:40:05.581772 containerd[1453]: time="2025-02-13T15:40:05.581730570Z" level=info msg="RemovePodSandbox for \"5877426208bb0ec6f3b0bd6273d2a61f178cd37093bfdab85cd100b691fb5424\"" Feb 13 15:40:05.581772 containerd[1453]: time="2025-02-13T15:40:05.581758011Z" level=info msg="Forcibly stopping sandbox \"5877426208bb0ec6f3b0bd6273d2a61f178cd37093bfdab85cd100b691fb5424\"" Feb 13 15:40:05.581891 containerd[1453]: time="2025-02-13T15:40:05.581825774Z" level=info msg="TearDown network for sandbox \"5877426208bb0ec6f3b0bd6273d2a61f178cd37093bfdab85cd100b691fb5424\" successfully" Feb 13 15:40:05.584053 containerd[1453]: time="2025-02-13T15:40:05.584013056Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5877426208bb0ec6f3b0bd6273d2a61f178cd37093bfdab85cd100b691fb5424\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:40:05.584110 containerd[1453]: time="2025-02-13T15:40:05.584067618Z" level=info msg="RemovePodSandbox \"5877426208bb0ec6f3b0bd6273d2a61f178cd37093bfdab85cd100b691fb5424\" returns successfully" Feb 13 15:40:05.584529 containerd[1453]: time="2025-02-13T15:40:05.584500395Z" level=info msg="StopPodSandbox for \"5b053525623b9497da7e1b2e770d86673655bd7d93820efc534aa58222203703\"" Feb 13 15:40:05.584617 containerd[1453]: time="2025-02-13T15:40:05.584585758Z" level=info msg="TearDown network for sandbox \"5b053525623b9497da7e1b2e770d86673655bd7d93820efc534aa58222203703\" successfully" Feb 13 15:40:05.584617 containerd[1453]: time="2025-02-13T15:40:05.584599638Z" level=info msg="StopPodSandbox for \"5b053525623b9497da7e1b2e770d86673655bd7d93820efc534aa58222203703\" returns successfully" Feb 13 15:40:05.585097 containerd[1453]: time="2025-02-13T15:40:05.585076576Z" level=info msg="RemovePodSandbox for \"5b053525623b9497da7e1b2e770d86673655bd7d93820efc534aa58222203703\"" Feb 13 15:40:05.585097 containerd[1453]: time="2025-02-13T15:40:05.585099697Z" level=info msg="Forcibly stopping sandbox \"5b053525623b9497da7e1b2e770d86673655bd7d93820efc534aa58222203703\"" Feb 13 15:40:05.585227 containerd[1453]: time="2025-02-13T15:40:05.585174260Z" level=info msg="TearDown network for sandbox \"5b053525623b9497da7e1b2e770d86673655bd7d93820efc534aa58222203703\" successfully" Feb 13 15:40:05.587379 containerd[1453]: time="2025-02-13T15:40:05.587329861Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5b053525623b9497da7e1b2e770d86673655bd7d93820efc534aa58222203703\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:40:05.587379 containerd[1453]: time="2025-02-13T15:40:05.587385303Z" level=info msg="RemovePodSandbox \"5b053525623b9497da7e1b2e770d86673655bd7d93820efc534aa58222203703\" returns successfully"
Feb 13 15:40:05.587781 containerd[1453]: time="2025-02-13T15:40:05.587731756Z" level=info msg="StopPodSandbox for \"71de9d911f60af43c2548ea926c32ea6d591d73f40411c67079cd659cd718877\""
Feb 13 15:40:05.587837 containerd[1453]: time="2025-02-13T15:40:05.587820440Z" level=info msg="TearDown network for sandbox \"71de9d911f60af43c2548ea926c32ea6d591d73f40411c67079cd659cd718877\" successfully"
Feb 13 15:40:05.587837 containerd[1453]: time="2025-02-13T15:40:05.587835400Z" level=info msg="StopPodSandbox for \"71de9d911f60af43c2548ea926c32ea6d591d73f40411c67079cd659cd718877\" returns successfully"
Feb 13 15:40:05.589592 containerd[1453]: time="2025-02-13T15:40:05.588404102Z" level=info msg="RemovePodSandbox for \"71de9d911f60af43c2548ea926c32ea6d591d73f40411c67079cd659cd718877\""
Feb 13 15:40:05.589592 containerd[1453]: time="2025-02-13T15:40:05.588432663Z" level=info msg="Forcibly stopping sandbox \"71de9d911f60af43c2548ea926c32ea6d591d73f40411c67079cd659cd718877\""
Feb 13 15:40:05.589592 containerd[1453]: time="2025-02-13T15:40:05.588511466Z" level=info msg="TearDown network for sandbox \"71de9d911f60af43c2548ea926c32ea6d591d73f40411c67079cd659cd718877\" successfully"
Feb 13 15:40:05.592032 containerd[1453]: time="2025-02-13T15:40:05.592002637Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"71de9d911f60af43c2548ea926c32ea6d591d73f40411c67079cd659cd718877\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:40:05.592302 containerd[1453]: time="2025-02-13T15:40:05.592281728Z" level=info msg="RemovePodSandbox \"71de9d911f60af43c2548ea926c32ea6d591d73f40411c67079cd659cd718877\" returns successfully"
Feb 13 15:40:05.593314 containerd[1453]: time="2025-02-13T15:40:05.593290726Z" level=info msg="StopPodSandbox for \"eafde1813a5db1480e193a3196ba6983d3672af483e22d08b7ea3d97fbf2a9c2\""
Feb 13 15:40:05.594031 containerd[1453]: time="2025-02-13T15:40:05.594007193Z" level=info msg="TearDown network for sandbox \"eafde1813a5db1480e193a3196ba6983d3672af483e22d08b7ea3d97fbf2a9c2\" successfully"
Feb 13 15:40:05.594124 containerd[1453]: time="2025-02-13T15:40:05.594096556Z" level=info msg="StopPodSandbox for \"eafde1813a5db1480e193a3196ba6983d3672af483e22d08b7ea3d97fbf2a9c2\" returns successfully"
Feb 13 15:40:05.594515 containerd[1453]: time="2025-02-13T15:40:05.594486131Z" level=info msg="RemovePodSandbox for \"eafde1813a5db1480e193a3196ba6983d3672af483e22d08b7ea3d97fbf2a9c2\""
Feb 13 15:40:05.594515 containerd[1453]: time="2025-02-13T15:40:05.594514772Z" level=info msg="Forcibly stopping sandbox \"eafde1813a5db1480e193a3196ba6983d3672af483e22d08b7ea3d97fbf2a9c2\""
Feb 13 15:40:05.594607 containerd[1453]: time="2025-02-13T15:40:05.594576134Z" level=info msg="TearDown network for sandbox \"eafde1813a5db1480e193a3196ba6983d3672af483e22d08b7ea3d97fbf2a9c2\" successfully"
Feb 13 15:40:05.596824 containerd[1453]: time="2025-02-13T15:40:05.596793497Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"eafde1813a5db1480e193a3196ba6983d3672af483e22d08b7ea3d97fbf2a9c2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:40:05.596897 containerd[1453]: time="2025-02-13T15:40:05.596843579Z" level=info msg="RemovePodSandbox \"eafde1813a5db1480e193a3196ba6983d3672af483e22d08b7ea3d97fbf2a9c2\" returns successfully"
Feb 13 15:40:05.597483 containerd[1453]: time="2025-02-13T15:40:05.597302117Z" level=info msg="StopPodSandbox for \"101df1b5a2a631f5f92c5c2b39a4a8b148fad8cca3a6df2bdc90e67be4cca1a7\""
Feb 13 15:40:05.597483 containerd[1453]: time="2025-02-13T15:40:05.597411521Z" level=info msg="TearDown network for sandbox \"101df1b5a2a631f5f92c5c2b39a4a8b148fad8cca3a6df2bdc90e67be4cca1a7\" successfully"
Feb 13 15:40:05.597483 containerd[1453]: time="2025-02-13T15:40:05.597422521Z" level=info msg="StopPodSandbox for \"101df1b5a2a631f5f92c5c2b39a4a8b148fad8cca3a6df2bdc90e67be4cca1a7\" returns successfully"
Feb 13 15:40:05.598323 containerd[1453]: time="2025-02-13T15:40:05.597921260Z" level=info msg="RemovePodSandbox for \"101df1b5a2a631f5f92c5c2b39a4a8b148fad8cca3a6df2bdc90e67be4cca1a7\""
Feb 13 15:40:05.598323 containerd[1453]: time="2025-02-13T15:40:05.597946581Z" level=info msg="Forcibly stopping sandbox \"101df1b5a2a631f5f92c5c2b39a4a8b148fad8cca3a6df2bdc90e67be4cca1a7\""
Feb 13 15:40:05.598323 containerd[1453]: time="2025-02-13T15:40:05.597999983Z" level=info msg="TearDown network for sandbox \"101df1b5a2a631f5f92c5c2b39a4a8b148fad8cca3a6df2bdc90e67be4cca1a7\" successfully"
Feb 13 15:40:05.600746 containerd[1453]: time="2025-02-13T15:40:05.600715365Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"101df1b5a2a631f5f92c5c2b39a4a8b148fad8cca3a6df2bdc90e67be4cca1a7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:40:05.600919 containerd[1453]: time="2025-02-13T15:40:05.600897572Z" level=info msg="RemovePodSandbox \"101df1b5a2a631f5f92c5c2b39a4a8b148fad8cca3a6df2bdc90e67be4cca1a7\" returns successfully"
Feb 13 15:40:05.601506 containerd[1453]: time="2025-02-13T15:40:05.601358829Z" level=info msg="StopPodSandbox for \"bbe0e585cf74fab1708cc4fe366a97dc7b448d8d7a1e597df3010cf40cbe9ec3\""
Feb 13 15:40:05.601506 containerd[1453]: time="2025-02-13T15:40:05.601440912Z" level=info msg="TearDown network for sandbox \"bbe0e585cf74fab1708cc4fe366a97dc7b448d8d7a1e597df3010cf40cbe9ec3\" successfully"
Feb 13 15:40:05.601506 containerd[1453]: time="2025-02-13T15:40:05.601469393Z" level=info msg="StopPodSandbox for \"bbe0e585cf74fab1708cc4fe366a97dc7b448d8d7a1e597df3010cf40cbe9ec3\" returns successfully"
Feb 13 15:40:05.602732 containerd[1453]: time="2025-02-13T15:40:05.602004094Z" level=info msg="RemovePodSandbox for \"bbe0e585cf74fab1708cc4fe366a97dc7b448d8d7a1e597df3010cf40cbe9ec3\""
Feb 13 15:40:05.602732 containerd[1453]: time="2025-02-13T15:40:05.602127218Z" level=info msg="Forcibly stopping sandbox \"bbe0e585cf74fab1708cc4fe366a97dc7b448d8d7a1e597df3010cf40cbe9ec3\""
Feb 13 15:40:05.602732 containerd[1453]: time="2025-02-13T15:40:05.602186860Z" level=info msg="TearDown network for sandbox \"bbe0e585cf74fab1708cc4fe366a97dc7b448d8d7a1e597df3010cf40cbe9ec3\" successfully"
Feb 13 15:40:05.604723 containerd[1453]: time="2025-02-13T15:40:05.604688835Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bbe0e585cf74fab1708cc4fe366a97dc7b448d8d7a1e597df3010cf40cbe9ec3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:40:05.604949 containerd[1453]: time="2025-02-13T15:40:05.604851521Z" level=info msg="RemovePodSandbox \"bbe0e585cf74fab1708cc4fe366a97dc7b448d8d7a1e597df3010cf40cbe9ec3\" returns successfully"
Feb 13 15:40:05.605383 containerd[1453]: time="2025-02-13T15:40:05.605359900Z" level=info msg="StopPodSandbox for \"49e61bdc9174d5527e1c7974cc761bf4ead3dd292c49b47e6a610d38ca0b74ec\""
Feb 13 15:40:05.605561 containerd[1453]: time="2025-02-13T15:40:05.605541707Z" level=info msg="TearDown network for sandbox \"49e61bdc9174d5527e1c7974cc761bf4ead3dd292c49b47e6a610d38ca0b74ec\" successfully"
Feb 13 15:40:05.605641 containerd[1453]: time="2025-02-13T15:40:05.605623710Z" level=info msg="StopPodSandbox for \"49e61bdc9174d5527e1c7974cc761bf4ead3dd292c49b47e6a610d38ca0b74ec\" returns successfully"
Feb 13 15:40:05.606171 containerd[1453]: time="2025-02-13T15:40:05.606132369Z" level=info msg="RemovePodSandbox for \"49e61bdc9174d5527e1c7974cc761bf4ead3dd292c49b47e6a610d38ca0b74ec\""
Feb 13 15:40:05.606171 containerd[1453]: time="2025-02-13T15:40:05.606162250Z" level=info msg="Forcibly stopping sandbox \"49e61bdc9174d5527e1c7974cc761bf4ead3dd292c49b47e6a610d38ca0b74ec\""
Feb 13 15:40:05.606274 containerd[1453]: time="2025-02-13T15:40:05.606221852Z" level=info msg="TearDown network for sandbox \"49e61bdc9174d5527e1c7974cc761bf4ead3dd292c49b47e6a610d38ca0b74ec\" successfully"
Feb 13 15:40:05.608486 containerd[1453]: time="2025-02-13T15:40:05.608433775Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"49e61bdc9174d5527e1c7974cc761bf4ead3dd292c49b47e6a610d38ca0b74ec\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:40:05.608544 containerd[1453]: time="2025-02-13T15:40:05.608503258Z" level=info msg="RemovePodSandbox \"49e61bdc9174d5527e1c7974cc761bf4ead3dd292c49b47e6a610d38ca0b74ec\" returns successfully"
Feb 13 15:40:05.609021 containerd[1453]: time="2025-02-13T15:40:05.608894313Z" level=info msg="StopPodSandbox for \"396a2480d2093d2717a34e8bc7fb57dabb533a866511490272198d8e3429975d\""
Feb 13 15:40:05.609021 containerd[1453]: time="2025-02-13T15:40:05.608987796Z" level=info msg="TearDown network for sandbox \"396a2480d2093d2717a34e8bc7fb57dabb533a866511490272198d8e3429975d\" successfully"
Feb 13 15:40:05.609021 containerd[1453]: time="2025-02-13T15:40:05.608998957Z" level=info msg="StopPodSandbox for \"396a2480d2093d2717a34e8bc7fb57dabb533a866511490272198d8e3429975d\" returns successfully"
Feb 13 15:40:05.609508 containerd[1453]: time="2025-02-13T15:40:05.609483335Z" level=info msg="RemovePodSandbox for \"396a2480d2093d2717a34e8bc7fb57dabb533a866511490272198d8e3429975d\""
Feb 13 15:40:05.609552 containerd[1453]: time="2025-02-13T15:40:05.609512296Z" level=info msg="Forcibly stopping sandbox \"396a2480d2093d2717a34e8bc7fb57dabb533a866511490272198d8e3429975d\""
Feb 13 15:40:05.609589 containerd[1453]: time="2025-02-13T15:40:05.609574018Z" level=info msg="TearDown network for sandbox \"396a2480d2093d2717a34e8bc7fb57dabb533a866511490272198d8e3429975d\" successfully"
Feb 13 15:40:05.613797 containerd[1453]: time="2025-02-13T15:40:05.613751256Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"396a2480d2093d2717a34e8bc7fb57dabb533a866511490272198d8e3429975d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:40:05.613859 containerd[1453]: time="2025-02-13T15:40:05.613818138Z" level=info msg="RemovePodSandbox \"396a2480d2093d2717a34e8bc7fb57dabb533a866511490272198d8e3429975d\" returns successfully"
Feb 13 15:40:05.614281 containerd[1453]: time="2025-02-13T15:40:05.614252074Z" level=info msg="StopPodSandbox for \"757cc0d2ff1044664c04e58b47b1a5006b422f31425ed9c06bacf1bc76f6ad3e\""
Feb 13 15:40:05.614642 containerd[1453]: time="2025-02-13T15:40:05.614499564Z" level=info msg="TearDown network for sandbox \"757cc0d2ff1044664c04e58b47b1a5006b422f31425ed9c06bacf1bc76f6ad3e\" successfully"
Feb 13 15:40:05.614642 containerd[1453]: time="2025-02-13T15:40:05.614529005Z" level=info msg="StopPodSandbox for \"757cc0d2ff1044664c04e58b47b1a5006b422f31425ed9c06bacf1bc76f6ad3e\" returns successfully"
Feb 13 15:40:05.614963 containerd[1453]: time="2025-02-13T15:40:05.614891739Z" level=info msg="RemovePodSandbox for \"757cc0d2ff1044664c04e58b47b1a5006b422f31425ed9c06bacf1bc76f6ad3e\""
Feb 13 15:40:05.614963 containerd[1453]: time="2025-02-13T15:40:05.614925620Z" level=info msg="Forcibly stopping sandbox \"757cc0d2ff1044664c04e58b47b1a5006b422f31425ed9c06bacf1bc76f6ad3e\""
Feb 13 15:40:05.615083 containerd[1453]: time="2025-02-13T15:40:05.615063185Z" level=info msg="TearDown network for sandbox \"757cc0d2ff1044664c04e58b47b1a5006b422f31425ed9c06bacf1bc76f6ad3e\" successfully"
Feb 13 15:40:05.617412 containerd[1453]: time="2025-02-13T15:40:05.617374832Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"757cc0d2ff1044664c04e58b47b1a5006b422f31425ed9c06bacf1bc76f6ad3e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:40:05.617481 containerd[1453]: time="2025-02-13T15:40:05.617429114Z" level=info msg="RemovePodSandbox \"757cc0d2ff1044664c04e58b47b1a5006b422f31425ed9c06bacf1bc76f6ad3e\" returns successfully"
Feb 13 15:40:05.617995 containerd[1453]: time="2025-02-13T15:40:05.617876971Z" level=info msg="StopPodSandbox for \"30a389c08acdff94511aaf20ccbee0dfab1cf917dd7515609cef1dd2a3cb4d98\""
Feb 13 15:40:05.617995 containerd[1453]: time="2025-02-13T15:40:05.617962814Z" level=info msg="TearDown network for sandbox \"30a389c08acdff94511aaf20ccbee0dfab1cf917dd7515609cef1dd2a3cb4d98\" successfully"
Feb 13 15:40:05.617995 containerd[1453]: time="2025-02-13T15:40:05.617972455Z" level=info msg="StopPodSandbox for \"30a389c08acdff94511aaf20ccbee0dfab1cf917dd7515609cef1dd2a3cb4d98\" returns successfully"
Feb 13 15:40:05.618501 containerd[1453]: time="2025-02-13T15:40:05.618471113Z" level=info msg="RemovePodSandbox for \"30a389c08acdff94511aaf20ccbee0dfab1cf917dd7515609cef1dd2a3cb4d98\""
Feb 13 15:40:05.618577 containerd[1453]: time="2025-02-13T15:40:05.618502314Z" level=info msg="Forcibly stopping sandbox \"30a389c08acdff94511aaf20ccbee0dfab1cf917dd7515609cef1dd2a3cb4d98\""
Feb 13 15:40:05.618577 containerd[1453]: time="2025-02-13T15:40:05.618565637Z" level=info msg="TearDown network for sandbox \"30a389c08acdff94511aaf20ccbee0dfab1cf917dd7515609cef1dd2a3cb4d98\" successfully"
Feb 13 15:40:05.620810 containerd[1453]: time="2025-02-13T15:40:05.620772880Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"30a389c08acdff94511aaf20ccbee0dfab1cf917dd7515609cef1dd2a3cb4d98\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:40:05.620887 containerd[1453]: time="2025-02-13T15:40:05.620842163Z" level=info msg="RemovePodSandbox \"30a389c08acdff94511aaf20ccbee0dfab1cf917dd7515609cef1dd2a3cb4d98\" returns successfully"
Feb 13 15:40:05.621381 containerd[1453]: time="2025-02-13T15:40:05.621356262Z" level=info msg="StopPodSandbox for \"b01e65f44a177baaf5f7f69dc8b5d4d3eb04ec8d0624506da181fde0fb48605a\""
Feb 13 15:40:05.621588 containerd[1453]: time="2025-02-13T15:40:05.621538029Z" level=info msg="TearDown network for sandbox \"b01e65f44a177baaf5f7f69dc8b5d4d3eb04ec8d0624506da181fde0fb48605a\" successfully"
Feb 13 15:40:05.621588 containerd[1453]: time="2025-02-13T15:40:05.621557229Z" level=info msg="StopPodSandbox for \"b01e65f44a177baaf5f7f69dc8b5d4d3eb04ec8d0624506da181fde0fb48605a\" returns successfully"
Feb 13 15:40:05.622483 containerd[1453]: time="2025-02-13T15:40:05.622234615Z" level=info msg="RemovePodSandbox for \"b01e65f44a177baaf5f7f69dc8b5d4d3eb04ec8d0624506da181fde0fb48605a\""
Feb 13 15:40:05.622483 containerd[1453]: time="2025-02-13T15:40:05.622266376Z" level=info msg="Forcibly stopping sandbox \"b01e65f44a177baaf5f7f69dc8b5d4d3eb04ec8d0624506da181fde0fb48605a\""
Feb 13 15:40:05.622483 containerd[1453]: time="2025-02-13T15:40:05.622333939Z" level=info msg="TearDown network for sandbox \"b01e65f44a177baaf5f7f69dc8b5d4d3eb04ec8d0624506da181fde0fb48605a\" successfully"
Feb 13 15:40:05.625908 containerd[1453]: time="2025-02-13T15:40:05.625863151Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b01e65f44a177baaf5f7f69dc8b5d4d3eb04ec8d0624506da181fde0fb48605a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:40:05.625945 containerd[1453]: time="2025-02-13T15:40:05.625927634Z" level=info msg="RemovePodSandbox \"b01e65f44a177baaf5f7f69dc8b5d4d3eb04ec8d0624506da181fde0fb48605a\" returns successfully"
Feb 13 15:40:05.626313 containerd[1453]: time="2025-02-13T15:40:05.626283167Z" level=info msg="StopPodSandbox for \"1366802ebaa9a45a0b2a61073284e55df3a597069127539e15d8cfea7f6ea1a6\""
Feb 13 15:40:05.626406 containerd[1453]: time="2025-02-13T15:40:05.626379411Z" level=info msg="TearDown network for sandbox \"1366802ebaa9a45a0b2a61073284e55df3a597069127539e15d8cfea7f6ea1a6\" successfully"
Feb 13 15:40:05.626406 containerd[1453]: time="2025-02-13T15:40:05.626397932Z" level=info msg="StopPodSandbox for \"1366802ebaa9a45a0b2a61073284e55df3a597069127539e15d8cfea7f6ea1a6\" returns successfully"
Feb 13 15:40:05.626701 containerd[1453]: time="2025-02-13T15:40:05.626670422Z" level=info msg="RemovePodSandbox for \"1366802ebaa9a45a0b2a61073284e55df3a597069127539e15d8cfea7f6ea1a6\""
Feb 13 15:40:05.626737 containerd[1453]: time="2025-02-13T15:40:05.626703223Z" level=info msg="Forcibly stopping sandbox \"1366802ebaa9a45a0b2a61073284e55df3a597069127539e15d8cfea7f6ea1a6\""
Feb 13 15:40:05.626788 containerd[1453]: time="2025-02-13T15:40:05.626771826Z" level=info msg="TearDown network for sandbox \"1366802ebaa9a45a0b2a61073284e55df3a597069127539e15d8cfea7f6ea1a6\" successfully"
Feb 13 15:40:05.629094 containerd[1453]: time="2025-02-13T15:40:05.629048111Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1366802ebaa9a45a0b2a61073284e55df3a597069127539e15d8cfea7f6ea1a6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:40:05.629156 containerd[1453]: time="2025-02-13T15:40:05.629117314Z" level=info msg="RemovePodSandbox \"1366802ebaa9a45a0b2a61073284e55df3a597069127539e15d8cfea7f6ea1a6\" returns successfully"
Feb 13 15:40:05.629605 containerd[1453]: time="2025-02-13T15:40:05.629583972Z" level=info msg="StopPodSandbox for \"5c6df23e15af22b2b3475eb65ba94a3e2b7eca755b23c91b1d64f81c31949671\""
Feb 13 15:40:05.629703 containerd[1453]: time="2025-02-13T15:40:05.629685855Z" level=info msg="TearDown network for sandbox \"5c6df23e15af22b2b3475eb65ba94a3e2b7eca755b23c91b1d64f81c31949671\" successfully"
Feb 13 15:40:05.629728 containerd[1453]: time="2025-02-13T15:40:05.629702256Z" level=info msg="StopPodSandbox for \"5c6df23e15af22b2b3475eb65ba94a3e2b7eca755b23c91b1d64f81c31949671\" returns successfully"
Feb 13 15:40:05.630112 containerd[1453]: time="2025-02-13T15:40:05.630077510Z" level=info msg="RemovePodSandbox for \"5c6df23e15af22b2b3475eb65ba94a3e2b7eca755b23c91b1d64f81c31949671\""
Feb 13 15:40:05.630151 containerd[1453]: time="2025-02-13T15:40:05.630117192Z" level=info msg="Forcibly stopping sandbox \"5c6df23e15af22b2b3475eb65ba94a3e2b7eca755b23c91b1d64f81c31949671\""
Feb 13 15:40:05.630216 containerd[1453]: time="2025-02-13T15:40:05.630200395Z" level=info msg="TearDown network for sandbox \"5c6df23e15af22b2b3475eb65ba94a3e2b7eca755b23c91b1d64f81c31949671\" successfully"
Feb 13 15:40:05.632454 containerd[1453]: time="2025-02-13T15:40:05.632412158Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5c6df23e15af22b2b3475eb65ba94a3e2b7eca755b23c91b1d64f81c31949671\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:40:05.632523 containerd[1453]: time="2025-02-13T15:40:05.632505041Z" level=info msg="RemovePodSandbox \"5c6df23e15af22b2b3475eb65ba94a3e2b7eca755b23c91b1d64f81c31949671\" returns successfully"
Feb 13 15:40:05.632853 containerd[1453]: time="2025-02-13T15:40:05.632826734Z" level=info msg="StopPodSandbox for \"7ea59ef2557339d5092fba62282ec76fa969b68ad495c317bb92c97891321274\""
Feb 13 15:40:05.632946 containerd[1453]: time="2025-02-13T15:40:05.632922657Z" level=info msg="TearDown network for sandbox \"7ea59ef2557339d5092fba62282ec76fa969b68ad495c317bb92c97891321274\" successfully"
Feb 13 15:40:05.632946 containerd[1453]: time="2025-02-13T15:40:05.632937578Z" level=info msg="StopPodSandbox for \"7ea59ef2557339d5092fba62282ec76fa969b68ad495c317bb92c97891321274\" returns successfully"
Feb 13 15:40:05.633235 containerd[1453]: time="2025-02-13T15:40:05.633204908Z" level=info msg="RemovePodSandbox for \"7ea59ef2557339d5092fba62282ec76fa969b68ad495c317bb92c97891321274\""
Feb 13 15:40:05.633235 containerd[1453]: time="2025-02-13T15:40:05.633232749Z" level=info msg="Forcibly stopping sandbox \"7ea59ef2557339d5092fba62282ec76fa969b68ad495c317bb92c97891321274\""
Feb 13 15:40:05.633310 containerd[1453]: time="2025-02-13T15:40:05.633293191Z" level=info msg="TearDown network for sandbox \"7ea59ef2557339d5092fba62282ec76fa969b68ad495c317bb92c97891321274\" successfully"
Feb 13 15:40:05.635719 containerd[1453]: time="2025-02-13T15:40:05.635689201Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7ea59ef2557339d5092fba62282ec76fa969b68ad495c317bb92c97891321274\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:40:05.635774 containerd[1453]: time="2025-02-13T15:40:05.635739643Z" level=info msg="RemovePodSandbox \"7ea59ef2557339d5092fba62282ec76fa969b68ad495c317bb92c97891321274\" returns successfully"
Feb 13 15:40:05.636057 containerd[1453]: time="2025-02-13T15:40:05.636034334Z" level=info msg="StopPodSandbox for \"d707feadb9268fa011363e35e4a444418fcd0f37bc28decb570859962b6aafb6\""
Feb 13 15:40:05.636141 containerd[1453]: time="2025-02-13T15:40:05.636123658Z" level=info msg="TearDown network for sandbox \"d707feadb9268fa011363e35e4a444418fcd0f37bc28decb570859962b6aafb6\" successfully"
Feb 13 15:40:05.636141 containerd[1453]: time="2025-02-13T15:40:05.636138818Z" level=info msg="StopPodSandbox for \"d707feadb9268fa011363e35e4a444418fcd0f37bc28decb570859962b6aafb6\" returns successfully"
Feb 13 15:40:05.636430 containerd[1453]: time="2025-02-13T15:40:05.636398228Z" level=info msg="RemovePodSandbox for \"d707feadb9268fa011363e35e4a444418fcd0f37bc28decb570859962b6aafb6\""
Feb 13 15:40:05.636430 containerd[1453]: time="2025-02-13T15:40:05.636427149Z" level=info msg="Forcibly stopping sandbox \"d707feadb9268fa011363e35e4a444418fcd0f37bc28decb570859962b6aafb6\""
Feb 13 15:40:05.636532 containerd[1453]: time="2025-02-13T15:40:05.636515472Z" level=info msg="TearDown network for sandbox \"d707feadb9268fa011363e35e4a444418fcd0f37bc28decb570859962b6aafb6\" successfully"
Feb 13 15:40:05.638740 containerd[1453]: time="2025-02-13T15:40:05.638712195Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d707feadb9268fa011363e35e4a444418fcd0f37bc28decb570859962b6aafb6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:40:05.638865 containerd[1453]: time="2025-02-13T15:40:05.638758597Z" level=info msg="RemovePodSandbox \"d707feadb9268fa011363e35e4a444418fcd0f37bc28decb570859962b6aafb6\" returns successfully"
Feb 13 15:40:05.639153 containerd[1453]: time="2025-02-13T15:40:05.639124171Z" level=info msg="StopPodSandbox for \"3a525c6ec978a5a69c0212d35cd2b17b293d11491a62157f6c3ed38426d316ae\""
Feb 13 15:40:05.639269 containerd[1453]: time="2025-02-13T15:40:05.639210134Z" level=info msg="TearDown network for sandbox \"3a525c6ec978a5a69c0212d35cd2b17b293d11491a62157f6c3ed38426d316ae\" successfully"
Feb 13 15:40:05.639269 containerd[1453]: time="2025-02-13T15:40:05.639220134Z" level=info msg="StopPodSandbox for \"3a525c6ec978a5a69c0212d35cd2b17b293d11491a62157f6c3ed38426d316ae\" returns successfully"
Feb 13 15:40:05.639962 containerd[1453]: time="2025-02-13T15:40:05.639464063Z" level=info msg="RemovePodSandbox for \"3a525c6ec978a5a69c0212d35cd2b17b293d11491a62157f6c3ed38426d316ae\""
Feb 13 15:40:05.639962 containerd[1453]: time="2025-02-13T15:40:05.639492304Z" level=info msg="Forcibly stopping sandbox \"3a525c6ec978a5a69c0212d35cd2b17b293d11491a62157f6c3ed38426d316ae\""
Feb 13 15:40:05.639962 containerd[1453]: time="2025-02-13T15:40:05.639550627Z" level=info msg="TearDown network for sandbox \"3a525c6ec978a5a69c0212d35cd2b17b293d11491a62157f6c3ed38426d316ae\" successfully"
Feb 13 15:40:05.641885 containerd[1453]: time="2025-02-13T15:40:05.641856873Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3a525c6ec978a5a69c0212d35cd2b17b293d11491a62157f6c3ed38426d316ae\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:40:05.641965 containerd[1453]: time="2025-02-13T15:40:05.641903835Z" level=info msg="RemovePodSandbox \"3a525c6ec978a5a69c0212d35cd2b17b293d11491a62157f6c3ed38426d316ae\" returns successfully"
Feb 13 15:40:05.642214 containerd[1453]: time="2025-02-13T15:40:05.642192566Z" level=info msg="StopPodSandbox for \"ba7317d179593a0699866447e61ca8280457f65066e5eed9890fb1749b2f40b0\""
Feb 13 15:40:05.642295 containerd[1453]: time="2025-02-13T15:40:05.642279809Z" level=info msg="TearDown network for sandbox \"ba7317d179593a0699866447e61ca8280457f65066e5eed9890fb1749b2f40b0\" successfully"
Feb 13 15:40:05.642324 containerd[1453]: time="2025-02-13T15:40:05.642294250Z" level=info msg="StopPodSandbox for \"ba7317d179593a0699866447e61ca8280457f65066e5eed9890fb1749b2f40b0\" returns successfully"
Feb 13 15:40:05.642596 containerd[1453]: time="2025-02-13T15:40:05.642574100Z" level=info msg="RemovePodSandbox for \"ba7317d179593a0699866447e61ca8280457f65066e5eed9890fb1749b2f40b0\""
Feb 13 15:40:05.642800 containerd[1453]: time="2025-02-13T15:40:05.642695105Z" level=info msg="Forcibly stopping sandbox \"ba7317d179593a0699866447e61ca8280457f65066e5eed9890fb1749b2f40b0\""
Feb 13 15:40:05.642800 containerd[1453]: time="2025-02-13T15:40:05.642759947Z" level=info msg="TearDown network for sandbox \"ba7317d179593a0699866447e61ca8280457f65066e5eed9890fb1749b2f40b0\" successfully"
Feb 13 15:40:05.645351 containerd[1453]: time="2025-02-13T15:40:05.645220480Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ba7317d179593a0699866447e61ca8280457f65066e5eed9890fb1749b2f40b0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:40:05.645351 containerd[1453]: time="2025-02-13T15:40:05.645271482Z" level=info msg="RemovePodSandbox \"ba7317d179593a0699866447e61ca8280457f65066e5eed9890fb1749b2f40b0\" returns successfully"
Feb 13 15:40:08.696412 kubelet[2589]: I0213 15:40:08.696258    2589 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 13 15:40:10.316110 systemd[1]: Started sshd@19-10.0.0.113:22-10.0.0.1:56702.service - OpenSSH per-connection server daemon (10.0.0.1:56702).
Feb 13 15:40:10.376430 sshd[6072]: Accepted publickey for core from 10.0.0.1 port 56702 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY
Feb 13 15:40:10.377787 sshd-session[6072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:40:10.381496 systemd-logind[1429]: New session 20 of user core.
Feb 13 15:40:10.388657 systemd[1]: Started session-20.scope - Session 20 of User core.
Feb 13 15:40:10.538030 sshd[6074]: Connection closed by 10.0.0.1 port 56702
Feb 13 15:40:10.538405 sshd-session[6072]: pam_unix(sshd:session): session closed for user core
Feb 13 15:40:10.541723 systemd[1]: sshd@19-10.0.0.113:22-10.0.0.1:56702.service: Deactivated successfully.
Feb 13 15:40:10.543351 systemd[1]: session-20.scope: Deactivated successfully.
Feb 13 15:40:10.544791 systemd-logind[1429]: Session 20 logged out. Waiting for processes to exit.
Feb 13 15:40:10.545908 systemd-logind[1429]: Removed session 20.
Feb 13 15:40:11.427285 kubelet[2589]: E0213 15:40:11.427217    2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"