Feb 13 15:20:39.878011 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Feb 13 15:20:39.878099 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Thu Feb 13 14:02:42 -00 2025 Feb 13 15:20:39.878110 kernel: KASLR enabled Feb 13 15:20:39.878116 kernel: efi: EFI v2.7 by EDK II Feb 13 15:20:39.878121 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218 Feb 13 15:20:39.878127 kernel: random: crng init done Feb 13 15:20:39.878134 kernel: secureboot: Secure boot disabled Feb 13 15:20:39.878139 kernel: ACPI: Early table checksum verification disabled Feb 13 15:20:39.878145 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS ) Feb 13 15:20:39.878153 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013) Feb 13 15:20:39.878158 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:20:39.878164 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:20:39.878170 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:20:39.878176 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:20:39.878183 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:20:39.878190 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:20:39.878197 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:20:39.878203 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:20:39.878208 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:20:39.878214 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Feb 13 15:20:39.878220 kernel: NUMA: Failed to initialise from firmware Feb 13 15:20:39.878226 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Feb 13 15:20:39.878232 kernel: NUMA: NODE_DATA [mem 0xdc959800-0xdc95efff] Feb 13 15:20:39.878238 kernel: Zone ranges: Feb 13 15:20:39.878244 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Feb 13 15:20:39.878252 kernel: DMA32 empty Feb 13 15:20:39.878257 kernel: Normal empty Feb 13 15:20:39.878263 kernel: Movable zone start for each node Feb 13 15:20:39.878269 kernel: Early memory node ranges Feb 13 15:20:39.878275 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff] Feb 13 15:20:39.878281 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff] Feb 13 15:20:39.878287 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff] Feb 13 15:20:39.878293 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] Feb 13 15:20:39.878299 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] Feb 13 15:20:39.878305 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Feb 13 15:20:39.878311 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Feb 13 15:20:39.878318 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Feb 13 15:20:39.878325 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Feb 13 15:20:39.878331 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Feb 13 15:20:39.878338 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Feb 13 15:20:39.878347 kernel: psci: probing for conduit method from ACPI. Feb 13 15:20:39.878354 kernel: psci: PSCIv1.1 detected in firmware.
Feb 13 15:20:39.878360 kernel: psci: Using standard PSCI v0.2 function IDs Feb 13 15:20:39.878387 kernel: psci: Trusted OS migration not required Feb 13 15:20:39.878394 kernel: psci: SMC Calling Convention v1.1 Feb 13 15:20:39.878400 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Feb 13 15:20:39.878407 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Feb 13 15:20:39.878413 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Feb 13 15:20:39.878420 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Feb 13 15:20:39.878427 kernel: Detected PIPT I-cache on CPU0 Feb 13 15:20:39.878433 kernel: CPU features: detected: GIC system register CPU interface Feb 13 15:20:39.878440 kernel: CPU features: detected: Hardware dirty bit management Feb 13 15:20:39.878446 kernel: CPU features: detected: Spectre-v4 Feb 13 15:20:39.878454 kernel: CPU features: detected: Spectre-BHB Feb 13 15:20:39.878460 kernel: CPU features: kernel page table isolation forced ON by KASLR Feb 13 15:20:39.878468 kernel: CPU features: detected: Kernel page table isolation (KPTI) Feb 13 15:20:39.878474 kernel: CPU features: detected: ARM erratum 1418040 Feb 13 15:20:39.878480 kernel: CPU features: detected: SSBS not fully self-synchronizing Feb 13 15:20:39.878487 kernel: alternatives: applying boot alternatives Feb 13 15:20:39.878494 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=685b18f1e2a119f561f35348e788538aade62ddb9fa889a87d9e00058aaa4b5a Feb 13 15:20:39.878501 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Feb 13 15:20:39.878508 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 13 15:20:39.878514 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 15:20:39.878521 kernel: Fallback order for Node 0: 0 Feb 13 15:20:39.878528 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Feb 13 15:20:39.878535 kernel: Policy zone: DMA Feb 13 15:20:39.878541 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 13 15:20:39.878548 kernel: software IO TLB: area num 4. Feb 13 15:20:39.878554 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) Feb 13 15:20:39.878561 kernel: Memory: 2385944K/2572288K available (10304K kernel code, 2184K rwdata, 8092K rodata, 39936K init, 897K bss, 186344K reserved, 0K cma-reserved) Feb 13 15:20:39.878568 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Feb 13 15:20:39.878574 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 13 15:20:39.878582 kernel: rcu: RCU event tracing is enabled. Feb 13 15:20:39.878588 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Feb 13 15:20:39.878595 kernel: Trampoline variant of Tasks RCU enabled. Feb 13 15:20:39.878601 kernel: Tracing variant of Tasks RCU enabled. Feb 13 15:20:39.878609 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Feb 13 15:20:39.878616 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Feb 13 15:20:39.878622 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Feb 13 15:20:39.878629 kernel: GICv3: 256 SPIs implemented Feb 13 15:20:39.878635 kernel: GICv3: 0 Extended SPIs implemented Feb 13 15:20:39.878641 kernel: Root IRQ handler: gic_handle_irq Feb 13 15:20:39.878648 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Feb 13 15:20:39.878654 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Feb 13 15:20:39.878660 kernel: ITS [mem 0x08080000-0x0809ffff] Feb 13 15:20:39.878667 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) Feb 13 15:20:39.878674 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) Feb 13 15:20:39.878681 kernel: GICv3: using LPI property table @0x00000000400f0000 Feb 13 15:20:39.878688 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Feb 13 15:20:39.878694 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Feb 13 15:20:39.878701 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 13 15:20:39.878707 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Feb 13 15:20:39.878714 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Feb 13 15:20:39.878720 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Feb 13 15:20:39.878727 kernel: arm-pv: using stolen time PV Feb 13 15:20:39.878734 kernel: Console: colour dummy device 80x25 Feb 13 15:20:39.878740 kernel: ACPI: Core revision 20230628 Feb 13 15:20:39.878747 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 13 15:20:39.878755 kernel: pid_max: default: 32768 minimum: 301 Feb 13 15:20:39.878762 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Feb 13 15:20:39.878769 kernel: landlock: Up and running. Feb 13 15:20:39.878776 kernel: SELinux: Initializing. Feb 13 15:20:39.878783 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 15:20:39.878790 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 15:20:39.878797 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Feb 13 15:20:39.878803 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Feb 13 15:20:39.878810 kernel: rcu: Hierarchical SRCU implementation. Feb 13 15:20:39.878818 kernel: rcu: Max phase no-delay instances is 400. Feb 13 15:20:39.878825 kernel: Platform MSI: ITS@0x8080000 domain created Feb 13 15:20:39.878831 kernel: PCI/MSI: ITS@0x8080000 domain created Feb 13 15:20:39.878845 kernel: Remapping and enabling EFI services. Feb 13 15:20:39.878852 kernel: smp: Bringing up secondary CPUs ...
Feb 13 15:20:39.878859 kernel: Detected PIPT I-cache on CPU1 Feb 13 15:20:39.878866 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Feb 13 15:20:39.878873 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Feb 13 15:20:39.878879 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 13 15:20:39.878888 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Feb 13 15:20:39.878895 kernel: Detected PIPT I-cache on CPU2 Feb 13 15:20:39.878906 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Feb 13 15:20:39.878914 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Feb 13 15:20:39.878921 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 13 15:20:39.878929 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Feb 13 15:20:39.878936 kernel: Detected PIPT I-cache on CPU3 Feb 13 15:20:39.878942 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Feb 13 15:20:39.878950 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Feb 13 15:20:39.878958 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 13 15:20:39.878965 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Feb 13 15:20:39.878984 kernel: smp: Brought up 1 node, 4 CPUs Feb 13 15:20:39.878990 kernel: SMP: Total of 4 processors activated. 
Feb 13 15:20:39.878997 kernel: CPU features: detected: 32-bit EL0 Support Feb 13 15:20:39.879005 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Feb 13 15:20:39.879012 kernel: CPU features: detected: Common not Private translations Feb 13 15:20:39.879019 kernel: CPU features: detected: CRC32 instructions Feb 13 15:20:39.879055 kernel: CPU features: detected: Enhanced Virtualization Traps Feb 13 15:20:39.879063 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Feb 13 15:20:39.879070 kernel: CPU features: detected: LSE atomic instructions Feb 13 15:20:39.879077 kernel: CPU features: detected: Privileged Access Never Feb 13 15:20:39.879084 kernel: CPU features: detected: RAS Extension Support Feb 13 15:20:39.879091 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Feb 13 15:20:39.879099 kernel: CPU: All CPU(s) started at EL1 Feb 13 15:20:39.879106 kernel: alternatives: applying system-wide alternatives Feb 13 15:20:39.879114 kernel: devtmpfs: initialized Feb 13 15:20:39.879122 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 13 15:20:39.879129 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Feb 13 15:20:39.879136 kernel: pinctrl core: initialized pinctrl subsystem Feb 13 15:20:39.879144 kernel: SMBIOS 3.0.0 present. 
Feb 13 15:20:39.879151 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 Feb 13 15:20:39.879158 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 13 15:20:39.879165 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Feb 13 15:20:39.879172 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Feb 13 15:20:39.879179 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Feb 13 15:20:39.879187 kernel: audit: initializing netlink subsys (disabled) Feb 13 15:20:39.879194 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1 Feb 13 15:20:39.879202 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 13 15:20:39.879209 kernel: cpuidle: using governor menu Feb 13 15:20:39.879216 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Feb 13 15:20:39.879223 kernel: ASID allocator initialised with 32768 entries Feb 13 15:20:39.879230 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 13 15:20:39.879237 kernel: Serial: AMBA PL011 UART driver Feb 13 15:20:39.879244 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Feb 13 15:20:39.879253 kernel: Modules: 0 pages in range for non-PLT usage Feb 13 15:20:39.879260 kernel: Modules: 508880 pages in range for PLT usage Feb 13 15:20:39.879266 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Feb 13 15:20:39.879273 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Feb 13 15:20:39.879281 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Feb 13 15:20:39.879288 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Feb 13 15:20:39.879294 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Feb 13 15:20:39.879302 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Feb 13 15:20:39.879315 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 15:20:39.879324 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Feb 13 15:20:39.879332 kernel: ACPI: Added _OSI(Module Device) Feb 13 15:20:39.879339 kernel: ACPI: Added _OSI(Processor Device) Feb 13 15:20:39.879353 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 13 15:20:39.879365 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 13 15:20:39.879372 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 13 15:20:39.879379 kernel: ACPI: Interpreter enabled Feb 13 15:20:39.879386 kernel: ACPI: Using GIC for interrupt routing Feb 13 15:20:39.879394 kernel: ACPI: MCFG table detected, 1 entries Feb 13 15:20:39.879401 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Feb 13 15:20:39.879409 kernel: printk: console [ttyAMA0] enabled Feb 13 15:20:39.879416 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 13 15:20:39.879552 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 13 15:20:39.879621 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Feb 13 15:20:39.879681 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Feb 13 15:20:39.879741 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Feb 13 15:20:39.879801 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Feb 13 15:20:39.879813 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Feb 13 15:20:39.879820 kernel: PCI host bridge to bus 0000:00 Feb 13 15:20:39.879910 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Feb 13 15:20:39.879968 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Feb 13 15:20:39.880031 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Feb 13 15:20:39.880090 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 13 15:20:39.880182 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Feb 13 15:20:39.880261 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Feb 13 15:20:39.880326 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Feb 13 15:20:39.880388 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Feb 13 15:20:39.880449 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Feb 13 15:20:39.880511 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Feb 13 15:20:39.880573 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Feb 13 15:20:39.880651 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Feb 13 15:20:39.880709 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Feb 13 15:20:39.880764 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Feb 13 15:20:39.880819 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Feb 13 15:20:39.880829 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Feb 13 15:20:39.880842 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Feb 13 15:20:39.880851 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Feb 13 15:20:39.880858 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Feb 13 15:20:39.880868 kernel: iommu: Default domain type: Translated Feb 13 15:20:39.880875 kernel: iommu: DMA domain TLB invalidation policy: strict mode Feb 13 15:20:39.880882 kernel: efivars: Registered efivars operations Feb 13 15:20:39.880889 kernel: vgaarb: loaded Feb 13 15:20:39.880896 kernel: clocksource: Switched to clocksource arch_sys_counter Feb 13 15:20:39.880904 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 15:20:39.880911 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 15:20:39.880918 kernel: pnp: PnP ACPI init Feb 13 15:20:39.880998 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Feb 13 15:20:39.881011 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 15:20:39.881018 kernel: NET: Registered PF_INET protocol family Feb 13 15:20:39.881040 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 13 15:20:39.881048 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Feb 13 15:20:39.881055 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 15:20:39.881062 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 13 15:20:39.881069 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Feb 13 15:20:39.881077 kernel: TCP: Hash tables configured (established 32768 bind 32768) Feb 13 15:20:39.881086 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 15:20:39.881093 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 15:20:39.881101 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 15:20:39.881108 kernel: PCI: CLS 0 bytes, default 64 Feb 13 15:20:39.881115 kernel: kvm [1]: HYP mode not available Feb 13 15:20:39.881122 kernel: Initialise system trusted keyrings Feb 13 15:20:39.881129 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Feb 13 15:20:39.881136 kernel: Key type asymmetric registered Feb 13 15:20:39.881143 kernel: Asymmetric key parser 'x509' registered Feb 13 15:20:39.881151 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Feb 13 15:20:39.881158 kernel: io scheduler mq-deadline registered Feb 13 15:20:39.881165 kernel: io scheduler kyber registered Feb 13 15:20:39.881172 kernel: io scheduler bfq registered Feb 13 15:20:39.881180 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Feb 13 15:20:39.881187 kernel: ACPI: button: Power Button [PWRB] Feb 13 15:20:39.881195 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Feb 13 15:20:39.881264 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Feb 13 15:20:39.881274 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 15:20:39.881283 kernel: thunder_xcv, ver 1.0 Feb 13 15:20:39.881290 kernel: thunder_bgx, ver 1.0 Feb 13 15:20:39.881297 kernel: nicpf, ver 1.0 Feb 13 15:20:39.881304 kernel: nicvf, ver 1.0 Feb 13 15:20:39.881378 kernel: rtc-efi rtc-efi.0: registered as rtc0 Feb 13 15:20:39.881439 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T15:20:39 UTC (1739460039) Feb 13 15:20:39.881449 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 13 15:20:39.881456 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Feb 13 15:20:39.881463 kernel: watchdog: Delayed init of the lockup detector failed: -19 Feb 13 15:20:39.881472 kernel: watchdog: Hard watchdog permanently disabled Feb 13 15:20:39.881480 kernel: NET: Registered PF_INET6 protocol family Feb 13 15:20:39.881486 kernel: Segment Routing with IPv6 Feb 13 15:20:39.881493 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 15:20:39.881500 kernel: NET: Registered PF_PACKET protocol family Feb 13 15:20:39.881507 kernel: Key type dns_resolver registered Feb 13 15:20:39.881514 kernel: registered taskstats version 1 Feb 13 15:20:39.881521 kernel: Loading compiled-in X.509 certificates Feb 13 15:20:39.881528 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 62d673f884efd54b6d6ef802a9b879413c8a346e' Feb 13 15:20:39.881536 kernel: Key type .fscrypt registered Feb 13 15:20:39.881543 kernel: Key type fscrypt-provisioning registered Feb 13 15:20:39.881550 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 15:20:39.881557 kernel: ima: Allocated hash algorithm: sha1 Feb 13 15:20:39.881564 kernel: ima: No architecture policies found Feb 13 15:20:39.881571 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Feb 13 15:20:39.881578 kernel: clk: Disabling unused clocks Feb 13 15:20:39.881585 kernel: Freeing unused kernel memory: 39936K Feb 13 15:20:39.881593 kernel: Run /init as init process Feb 13 15:20:39.881600 kernel: with arguments: Feb 13 15:20:39.881607 kernel: /init Feb 13 15:20:39.881614 kernel: with environment: Feb 13 15:20:39.881621 kernel: HOME=/ Feb 13 15:20:39.881628 kernel: TERM=linux Feb 13 15:20:39.881635 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 15:20:39.881643 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 15:20:39.881653 systemd[1]: Detected virtualization kvm. Feb 13 15:20:39.881661 systemd[1]: Detected architecture arm64. Feb 13 15:20:39.881668 systemd[1]: Running in initrd. Feb 13 15:20:39.881675 systemd[1]: No hostname configured, using default hostname. Feb 13 15:20:39.881683 systemd[1]: Hostname set to . Feb 13 15:20:39.881691 systemd[1]: Initializing machine ID from VM UUID. Feb 13 15:20:39.881698 systemd[1]: Queued start job for default target initrd.target. Feb 13 15:20:39.881706 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:20:39.881715 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:20:39.881723 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Feb 13 15:20:39.881730 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 15:20:39.881738 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 15:20:39.881746 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 15:20:39.881754 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 15:20:39.881762 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 15:20:39.881771 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:20:39.881779 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:20:39.881786 systemd[1]: Reached target paths.target - Path Units. Feb 13 15:20:39.881794 systemd[1]: Reached target slices.target - Slice Units. Feb 13 15:20:39.881801 systemd[1]: Reached target swap.target - Swaps. Feb 13 15:20:39.881809 systemd[1]: Reached target timers.target - Timer Units. Feb 13 15:20:39.881816 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 15:20:39.881824 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 15:20:39.881831 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 15:20:39.881848 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 15:20:39.881856 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:20:39.881864 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 15:20:39.881872 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:20:39.881879 systemd[1]: Reached target sockets.target - Socket Units. 
Feb 13 15:20:39.881887 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 15:20:39.881894 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 15:20:39.881902 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 15:20:39.881911 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 15:20:39.881919 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 15:20:39.881926 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 15:20:39.881934 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:20:39.881941 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 15:20:39.881949 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:20:39.881956 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 15:20:39.881966 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 15:20:39.881974 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:20:39.881998 systemd-journald[238]: Collecting audit messages is disabled. Feb 13 15:20:39.882018 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:20:39.882035 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:20:39.882044 systemd-journald[238]: Journal started Feb 13 15:20:39.882066 systemd-journald[238]: Runtime Journal (/run/log/journal/4f08732bea464f088b8b9a17792ca50a) is 5.9M, max 47.3M, 41.4M free. Feb 13 15:20:39.888095 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Feb 13 15:20:39.873485 systemd-modules-load[239]: Inserted module 'overlay' Feb 13 15:20:39.890828 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 15:20:39.892064 kernel: Bridge firewalling registered Feb 13 15:20:39.892101 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 15:20:39.891369 systemd-modules-load[239]: Inserted module 'br_netfilter' Feb 13 15:20:39.893081 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 15:20:39.897748 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:20:39.899150 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 15:20:39.900884 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:20:39.906296 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:20:39.907512 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:20:39.919205 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 15:20:39.920130 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:20:39.923012 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 15:20:39.928940 dracut-cmdline[275]: dracut-dracut-053 Feb 13 15:20:39.931272 dracut-cmdline[275]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=685b18f1e2a119f561f35348e788538aade62ddb9fa889a87d9e00058aaa4b5a Feb 13 15:20:39.961881 systemd-resolved[280]: Positive Trust Anchors: Feb 13 15:20:39.961900 systemd-resolved[280]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:20:39.961931 systemd-resolved[280]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 15:20:39.966414 systemd-resolved[280]: Defaulting to hostname 'linux'. Feb 13 15:20:39.968106 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 15:20:39.968950 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:20:39.999050 kernel: SCSI subsystem initialized Feb 13 15:20:40.004041 kernel: Loading iSCSI transport class v2.0-870. Feb 13 15:20:40.011047 kernel: iscsi: registered transport (tcp) Feb 13 15:20:40.024054 kernel: iscsi: registered transport (qla4xxx) Feb 13 15:20:40.024103 kernel: QLogic iSCSI HBA Driver Feb 13 15:20:40.064279 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 15:20:40.072188 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 15:20:40.087869 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 15:20:40.087924 kernel: device-mapper: uevent: version 1.0.3 Feb 13 15:20:40.087936 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 15:20:40.144047 kernel: raid6: neonx8 gen() 15788 MB/s Feb 13 15:20:40.161039 kernel: raid6: neonx4 gen() 15817 MB/s Feb 13 15:20:40.178036 kernel: raid6: neonx2 gen() 13297 MB/s Feb 13 15:20:40.195044 kernel: raid6: neonx1 gen() 10495 MB/s Feb 13 15:20:40.212042 kernel: raid6: int64x8 gen() 6793 MB/s Feb 13 15:20:40.229044 kernel: raid6: int64x4 gen() 7349 MB/s Feb 13 15:20:40.246046 kernel: raid6: int64x2 gen() 6111 MB/s Feb 13 15:20:40.263047 kernel: raid6: int64x1 gen() 5055 MB/s Feb 13 15:20:40.263076 kernel: raid6: using algorithm neonx4 gen() 15817 MB/s Feb 13 15:20:40.280046 kernel: raid6: .... xor() 12361 MB/s, rmw enabled Feb 13 15:20:40.280059 kernel: raid6: using neon recovery algorithm Feb 13 15:20:40.285198 kernel: xor: measuring software checksum speed Feb 13 15:20:40.285217 kernel: 8regs : 21653 MB/sec Feb 13 15:20:40.286298 kernel: 32regs : 21704 MB/sec Feb 13 15:20:40.286309 kernel: arm64_neon : 27277 MB/sec Feb 13 15:20:40.286318 kernel: xor: using function: arm64_neon (27277 MB/sec) Feb 13 15:20:40.336053 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 15:20:40.346712 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 15:20:40.355217 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:20:40.374376 systemd-udevd[462]: Using default interface naming scheme 'v255'. Feb 13 15:20:40.377660 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:20:40.397232 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 15:20:40.408345 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation Feb 13 15:20:40.436249 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Feb 13 15:20:40.447182 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:20:40.485606 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:20:40.493260 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 15:20:40.506245 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:20:40.507851 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:20:40.509603 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:20:40.511521 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:20:40.522201 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 15:20:40.525752 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Feb 13 15:20:40.537161 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 13 15:20:40.537278 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 15:20:40.537293 kernel: GPT:9289727 != 19775487
Feb 13 15:20:40.537303 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 15:20:40.537312 kernel: GPT:9289727 != 19775487
Feb 13 15:20:40.537321 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 15:20:40.537332 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 15:20:40.537190 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:20:40.542527 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:20:40.542644 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:20:40.547054 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:20:40.547943 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:20:40.548104 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:20:40.549746 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:20:40.561778 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (521)
Feb 13 15:20:40.561840 kernel: BTRFS: device fsid dbbe73f5-49db-4e16-b023-d47ce63b488f devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (519)
Feb 13 15:20:40.564342 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:20:40.575750 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:20:40.583108 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Feb 13 15:20:40.587368 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Feb 13 15:20:40.591530 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 15:20:40.595104 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Feb 13 15:20:40.595974 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Feb 13 15:20:40.611198 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 15:20:40.612752 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:20:40.617181 disk-uuid[550]: Primary Header is updated.
Feb 13 15:20:40.617181 disk-uuid[550]: Secondary Entries is updated.
Feb 13 15:20:40.617181 disk-uuid[550]: Secondary Header is updated.
Feb 13 15:20:40.621058 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 15:20:40.634098 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:20:41.632058 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 15:20:41.632268 disk-uuid[552]: The operation has completed successfully.
Feb 13 15:20:41.656985 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 15:20:41.657093 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 15:20:41.674152 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 15:20:41.676964 sh[574]: Success
Feb 13 15:20:41.692055 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 15:20:41.732498 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 15:20:41.734108 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 15:20:41.734860 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 15:20:41.746779 kernel: BTRFS info (device dm-0): first mount of filesystem dbbe73f5-49db-4e16-b023-d47ce63b488f
Feb 13 15:20:41.746820 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:20:41.746836 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 15:20:41.747935 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 15:20:41.747953 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 15:20:41.751351 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 15:20:41.752475 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 15:20:41.762207 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 15:20:41.763608 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 15:20:41.770641 kernel: BTRFS info (device vda6): first mount of filesystem f03a17c4-6ca2-4f02-a9a3-5e771d63df74
Feb 13 15:20:41.770685 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:20:41.770696 kernel: BTRFS info (device vda6): using free space tree
Feb 13 15:20:41.776039 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 15:20:41.782571 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 15:20:41.784059 kernel: BTRFS info (device vda6): last unmount of filesystem f03a17c4-6ca2-4f02-a9a3-5e771d63df74
Feb 13 15:20:41.790092 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 15:20:41.797178 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 15:20:41.860351 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:20:41.875283 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:20:41.904105 systemd-networkd[760]: lo: Link UP
Feb 13 15:20:41.904116 systemd-networkd[760]: lo: Gained carrier
Feb 13 15:20:41.905019 systemd-networkd[760]: Enumeration completed
Feb 13 15:20:41.905129 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 15:20:41.906156 systemd[1]: Reached target network.target - Network.
Feb 13 15:20:41.907577 systemd-networkd[760]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:20:41.907580 systemd-networkd[760]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:20:41.908466 systemd-networkd[760]: eth0: Link UP
Feb 13 15:20:41.908469 systemd-networkd[760]: eth0: Gained carrier
Feb 13 15:20:41.908476 systemd-networkd[760]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:20:41.916490 ignition[659]: Ignition 2.20.0
Feb 13 15:20:41.916503 ignition[659]: Stage: fetch-offline
Feb 13 15:20:41.916553 ignition[659]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:20:41.916561 ignition[659]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:20:41.916707 ignition[659]: parsed url from cmdline: ""
Feb 13 15:20:41.916710 ignition[659]: no config URL provided
Feb 13 15:20:41.916714 ignition[659]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 15:20:41.916721 ignition[659]: no config at "/usr/lib/ignition/user.ign"
Feb 13 15:20:41.916749 ignition[659]: op(1): [started] loading QEMU firmware config module
Feb 13 15:20:41.916753 ignition[659]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 13 15:20:41.924872 ignition[659]: op(1): [finished] loading QEMU firmware config module
Feb 13 15:20:41.929090 systemd-networkd[760]: eth0: DHCPv4 address 10.0.0.35/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 15:20:41.964272 ignition[659]: parsing config with SHA512: f92cf490c64e65f0c0d084d15857a102fa65beade71914d37fcce19f4ab7b3cc0e7f6ddaaf2a81631e3371226657b653b8ca4e38d9804f493f958ab5957b74b6
Feb 13 15:20:41.969019 unknown[659]: fetched base config from "system"
Feb 13 15:20:41.969046 unknown[659]: fetched user config from "qemu"
Feb 13 15:20:41.969501 ignition[659]: fetch-offline: fetch-offline passed
Feb 13 15:20:41.969575 ignition[659]: Ignition finished successfully
Feb 13 15:20:41.972059 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:20:41.973103 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 13 15:20:41.984166 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 15:20:41.996599 ignition[772]: Ignition 2.20.0
Feb 13 15:20:41.996608 ignition[772]: Stage: kargs
Feb 13 15:20:41.996790 ignition[772]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:20:41.996799 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:20:41.997688 ignition[772]: kargs: kargs passed
Feb 13 15:20:42.000478 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 15:20:41.997738 ignition[772]: Ignition finished successfully
Feb 13 15:20:42.003796 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 15:20:42.014881 ignition[781]: Ignition 2.20.0
Feb 13 15:20:42.014891 ignition[781]: Stage: disks
Feb 13 15:20:42.015055 ignition[781]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:20:42.015064 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:20:42.015883 ignition[781]: disks: disks passed
Feb 13 15:20:42.015926 ignition[781]: Ignition finished successfully
Feb 13 15:20:42.019073 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 15:20:42.020429 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 15:20:42.021615 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 15:20:42.023149 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:20:42.024591 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:20:42.025868 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:20:42.040209 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 15:20:42.051143 systemd-fsck[791]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 15:20:42.055667 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 15:20:42.065134 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 15:20:42.109054 kernel: EXT4-fs (vda9): mounted filesystem 469d244b-00c1-45f4-bce0-c1d88e98a895 r/w with ordered data mode. Quota mode: none.
Feb 13 15:20:42.109602 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 15:20:42.110639 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:20:42.121102 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:20:42.122866 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 15:20:42.123631 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 15:20:42.123664 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 15:20:42.123684 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:20:42.128415 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 15:20:42.130907 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 15:20:42.134067 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (799)
Feb 13 15:20:42.134095 kernel: BTRFS info (device vda6): first mount of filesystem f03a17c4-6ca2-4f02-a9a3-5e771d63df74
Feb 13 15:20:42.134111 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:20:42.134120 kernel: BTRFS info (device vda6): using free space tree
Feb 13 15:20:42.137071 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 15:20:42.138149 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:20:42.178594 initrd-setup-root[824]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 15:20:42.182524 initrd-setup-root[831]: cut: /sysroot/etc/group: No such file or directory
Feb 13 15:20:42.185992 initrd-setup-root[838]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 15:20:42.188578 initrd-setup-root[845]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 15:20:42.252832 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 15:20:42.261156 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 15:20:42.262441 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 15:20:42.267040 kernel: BTRFS info (device vda6): last unmount of filesystem f03a17c4-6ca2-4f02-a9a3-5e771d63df74
Feb 13 15:20:42.282503 ignition[913]: INFO : Ignition 2.20.0
Feb 13 15:20:42.282503 ignition[913]: INFO : Stage: mount
Feb 13 15:20:42.283661 ignition[913]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:20:42.283661 ignition[913]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:20:42.283661 ignition[913]: INFO : mount: mount passed
Feb 13 15:20:42.283661 ignition[913]: INFO : Ignition finished successfully
Feb 13 15:20:42.283693 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 15:20:42.285163 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 15:20:42.297153 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 15:20:42.745238 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 15:20:42.755291 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:20:42.761659 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (929)
Feb 13 15:20:42.761686 kernel: BTRFS info (device vda6): first mount of filesystem f03a17c4-6ca2-4f02-a9a3-5e771d63df74
Feb 13 15:20:42.761696 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:20:42.762321 kernel: BTRFS info (device vda6): using free space tree
Feb 13 15:20:42.765045 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 15:20:42.765751 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:20:42.780911 ignition[946]: INFO : Ignition 2.20.0
Feb 13 15:20:42.780911 ignition[946]: INFO : Stage: files
Feb 13 15:20:42.782144 ignition[946]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:20:42.782144 ignition[946]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:20:42.782144 ignition[946]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 15:20:42.786131 ignition[946]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 15:20:42.786131 ignition[946]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 15:20:42.789017 ignition[946]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 15:20:42.789943 ignition[946]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 15:20:42.789943 ignition[946]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 15:20:42.789488 unknown[946]: wrote ssh authorized keys file for user: core
Feb 13 15:20:42.792790 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 15:20:42.792790 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 13 15:20:42.839084 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 15:20:43.125547 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 15:20:43.125547 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 15:20:43.128587 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 15:20:43.128587 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 15:20:43.128587 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 15:20:43.128587 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 15:20:43.128587 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 15:20:43.128587 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:20:43.128587 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:20:43.128587 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:20:43.128587 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:20:43.128587 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 15:20:43.128587 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 15:20:43.128587 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 15:20:43.128587 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Feb 13 15:20:43.467851 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Feb 13 15:20:43.520123 systemd-networkd[760]: eth0: Gained IPv6LL
Feb 13 15:20:43.703538 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 15:20:43.703538 ignition[946]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Feb 13 15:20:43.706137 ignition[946]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 15:20:43.706137 ignition[946]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 15:20:43.706137 ignition[946]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Feb 13 15:20:43.706137 ignition[946]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Feb 13 15:20:43.706137 ignition[946]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 15:20:43.706137 ignition[946]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 15:20:43.706137 ignition[946]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Feb 13 15:20:43.706137 ignition[946]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Feb 13 15:20:43.731991 ignition[946]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 15:20:43.735186 ignition[946]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 15:20:43.736375 ignition[946]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 13 15:20:43.736375 ignition[946]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 15:20:43.736375 ignition[946]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 15:20:43.736375 ignition[946]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:20:43.736375 ignition[946]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:20:43.736375 ignition[946]: INFO : files: files passed
Feb 13 15:20:43.736375 ignition[946]: INFO : Ignition finished successfully
Feb 13 15:20:43.738398 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 15:20:43.749160 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 15:20:43.750490 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 15:20:43.751841 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 15:20:43.751917 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 15:20:43.757587 initrd-setup-root-after-ignition[976]: grep: /sysroot/oem/oem-release: No such file or directory
Feb 13 15:20:43.759928 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:20:43.759928 initrd-setup-root-after-ignition[978]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:20:43.762143 initrd-setup-root-after-ignition[982]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:20:43.763490 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:20:43.764530 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 15:20:43.781191 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 15:20:43.798873 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 15:20:43.798970 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 15:20:43.802265 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 15:20:43.803562 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 15:20:43.804971 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 15:20:43.805629 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 15:20:43.819407 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:20:43.830159 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 15:20:43.837457 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:20:43.838371 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:20:43.839969 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 15:20:43.841239 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 15:20:43.841342 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:20:43.843301 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 15:20:43.844742 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 15:20:43.846015 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 15:20:43.847236 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:20:43.848611 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 15:20:43.849990 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 15:20:43.851505 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:20:43.852931 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 15:20:43.854350 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 15:20:43.855576 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 15:20:43.856730 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 15:20:43.856847 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:20:43.858525 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:20:43.859885 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:20:43.861266 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 15:20:43.862082 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:20:43.863425 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 15:20:43.863525 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:20:43.865646 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 15:20:43.865762 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:20:43.867110 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 15:20:43.868267 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 15:20:43.873100 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:20:43.874059 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 15:20:43.875608 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 15:20:43.876843 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 15:20:43.876926 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:20:43.878478 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 15:20:43.878553 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:20:43.879651 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 15:20:43.879748 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:20:43.881043 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 15:20:43.881135 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 15:20:43.894278 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 15:20:43.896235 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 15:20:43.896855 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 15:20:43.896960 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:20:43.898286 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 15:20:43.898369 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:20:43.903067 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 15:20:43.903153 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 15:20:43.907064 ignition[1002]: INFO : Ignition 2.20.0
Feb 13 15:20:43.907064 ignition[1002]: INFO : Stage: umount
Feb 13 15:20:43.908327 ignition[1002]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:20:43.908327 ignition[1002]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:20:43.908327 ignition[1002]: INFO : umount: umount passed
Feb 13 15:20:43.908327 ignition[1002]: INFO : Ignition finished successfully
Feb 13 15:20:43.908817 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 15:20:43.909291 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 15:20:43.910344 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 15:20:43.915473 systemd[1]: Stopped target network.target - Network.
Feb 13 15:20:43.916152 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 15:20:43.916232 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 15:20:43.921230 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 15:20:43.921283 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 15:20:43.922372 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 15:20:43.922407 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 15:20:43.923722 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 15:20:43.923764 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 15:20:43.925201 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 15:20:43.926394 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 15:20:43.933134 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 15:20:43.933250 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 15:20:43.934881 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 15:20:43.934938 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:20:43.935079 systemd-networkd[760]: eth0: DHCPv6 lease lost
Feb 13 15:20:43.937939 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 15:20:43.938057 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 15:20:43.939640 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 15:20:43.939687 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:20:43.951135 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 15:20:43.951776 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 15:20:43.951830 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:20:43.953751 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 15:20:43.953792 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:20:43.955042 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 15:20:43.955079 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:20:43.956761 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:20:43.966745 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 15:20:43.966920 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 15:20:43.970441 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 15:20:43.970520 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 15:20:43.972236 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 15:20:43.972315 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 15:20:43.976606 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 15:20:43.976738 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:20:43.978329 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 15:20:43.978365 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:20:43.979739 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 15:20:43.979767 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:20:43.981007 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 15:20:43.981054 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:20:43.983053 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 15:20:43.983089 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:20:43.985060 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:20:43.985107 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:20:43.996153 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 15:20:43.996892 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 15:20:43.996938 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:20:43.998510 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Feb 13 15:20:43.998546 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:20:43.999922 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 15:20:43.999955 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:20:44.001526 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:20:44.001563 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:20:44.003205 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 15:20:44.003310 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 15:20:44.006088 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 15:20:44.008186 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 15:20:44.016765 systemd[1]: Switching root.
Feb 13 15:20:44.041777 systemd-journald[238]: Journal stopped
Feb 13 15:20:44.727709 systemd-journald[238]: Received SIGTERM from PID 1 (systemd).
Feb 13 15:20:44.727758 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 15:20:44.727774 kernel: SELinux: policy capability open_perms=1
Feb 13 15:20:44.727784 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 15:20:44.727809 kernel: SELinux: policy capability always_check_network=0
Feb 13 15:20:44.727824 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 15:20:44.727837 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 15:20:44.727847 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 15:20:44.727857 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 15:20:44.727867 kernel: audit: type=1403 audit(1739460044.184:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 15:20:44.727877 systemd[1]: Successfully loaded SELinux policy in 32.387ms.
Feb 13 15:20:44.727890 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.640ms.
Feb 13 15:20:44.727901 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 15:20:44.727914 systemd[1]: Detected virtualization kvm.
Feb 13 15:20:44.727924 systemd[1]: Detected architecture arm64.
Feb 13 15:20:44.727936 systemd[1]: Detected first boot.
Feb 13 15:20:44.727947 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 15:20:44.727958 zram_generator::config[1048]: No configuration found.
Feb 13 15:20:44.727972 systemd[1]: Populated /etc with preset unit settings.
Feb 13 15:20:44.727983 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 15:20:44.727993 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 15:20:44.728003 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 15:20:44.728018 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 15:20:44.728041 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 15:20:44.728053 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 15:20:44.728064 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 15:20:44.728075 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 15:20:44.728085 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 15:20:44.728095 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 15:20:44.728105 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 15:20:44.728115 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:20:44.728126 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:20:44.728137 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 15:20:44.728148 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 15:20:44.728160 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 15:20:44.728171 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:20:44.728181 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Feb 13 15:20:44.728190 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:20:44.728201 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 15:20:44.728211 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 15:20:44.728221 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:20:44.728233 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 15:20:44.728310 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:20:44.728324 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:20:44.728335 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:20:44.728345 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:20:44.728355 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 15:20:44.728365 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 15:20:44.728375 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:20:44.728389 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:20:44.728399 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:20:44.728410 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 15:20:44.728420 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 15:20:44.728430 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 15:20:44.728440 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 15:20:44.728452 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 15:20:44.728462 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 15:20:44.728472 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 15:20:44.728484 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 15:20:44.728494 systemd[1]: Reached target machines.target - Containers.
Feb 13 15:20:44.728505 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 15:20:44.728515 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:20:44.728525 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:20:44.728536 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 15:20:44.728546 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:20:44.728556 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 15:20:44.728567 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:20:44.728577 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 15:20:44.728588 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:20:44.728598 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 15:20:44.728608 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 15:20:44.728618 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 15:20:44.728628 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 15:20:44.728638 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 15:20:44.728650 kernel: fuse: init (API version 7.39)
Feb 13 15:20:44.728660 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:20:44.728671 kernel: loop: module loaded
Feb 13 15:20:44.728681 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:20:44.728691 kernel: ACPI: bus type drm_connector registered
Feb 13 15:20:44.728700 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 15:20:44.728711 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 15:20:44.728721 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:20:44.728731 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 15:20:44.728741 systemd[1]: Stopped verity-setup.service.
Feb 13 15:20:44.728752 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 15:20:44.728762 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 15:20:44.728773 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 15:20:44.728809 systemd-journald[1115]: Collecting audit messages is disabled.
Feb 13 15:20:44.728835 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 15:20:44.728845 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 15:20:44.728856 systemd-journald[1115]: Journal started
Feb 13 15:20:44.728881 systemd-journald[1115]: Runtime Journal (/run/log/journal/4f08732bea464f088b8b9a17792ca50a) is 5.9M, max 47.3M, 41.4M free.
Feb 13 15:20:44.543455 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 15:20:44.563964 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Feb 13 15:20:44.564323 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 15:20:44.730534 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:20:44.731784 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 15:20:44.734061 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 15:20:44.735188 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:20:44.736344 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 15:20:44.736476 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 15:20:44.737599 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:20:44.737744 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:20:44.738837 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 15:20:44.738966 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 15:20:44.740147 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:20:44.740276 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:20:44.741410 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 15:20:44.741554 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 15:20:44.742594 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:20:44.742721 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:20:44.743938 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:20:44.745033 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 15:20:44.746155 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 15:20:44.757562 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 15:20:44.768179 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 15:20:44.769918 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 15:20:44.770790 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 15:20:44.770832 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:20:44.772522 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 13 15:20:44.774373 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 15:20:44.776133 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 15:20:44.776955 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:20:44.778930 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 15:20:44.780662 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 15:20:44.781722 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 15:20:44.785198 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 15:20:44.787416 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 15:20:44.791128 systemd-journald[1115]: Time spent on flushing to /var/log/journal/4f08732bea464f088b8b9a17792ca50a is 16.109ms for 856 entries.
Feb 13 15:20:44.791128 systemd-journald[1115]: System Journal (/var/log/journal/4f08732bea464f088b8b9a17792ca50a) is 8.0M, max 195.6M, 187.6M free.
Feb 13 15:20:44.829179 systemd-journald[1115]: Received client request to flush runtime journal.
Feb 13 15:20:44.829244 kernel: loop0: detected capacity change from 0 to 113552
Feb 13 15:20:44.788514 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:20:44.793242 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 15:20:44.796363 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 15:20:44.801068 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:20:44.802235 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 15:20:44.831059 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 15:20:44.803301 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 15:20:44.804392 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 15:20:44.805680 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 15:20:44.809445 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 15:20:44.820411 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Feb 13 15:20:44.825188 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 15:20:44.829017 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:20:44.831521 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 15:20:44.840959 udevadm[1171]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Feb 13 15:20:44.843565 systemd-tmpfiles[1160]: ACLs are not supported, ignoring.
Feb 13 15:20:44.843582 systemd-tmpfiles[1160]: ACLs are not supported, ignoring.
Feb 13 15:20:44.850161 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:20:44.858235 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 15:20:44.859709 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 15:20:44.860410 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Feb 13 15:20:44.871052 kernel: loop1: detected capacity change from 0 to 116784
Feb 13 15:20:44.883628 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 15:20:44.889226 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:20:44.902188 systemd-tmpfiles[1183]: ACLs are not supported, ignoring.
Feb 13 15:20:44.902205 systemd-tmpfiles[1183]: ACLs are not supported, ignoring.
Feb 13 15:20:44.905958 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:20:44.909039 kernel: loop2: detected capacity change from 0 to 194096
Feb 13 15:20:44.954285 kernel: loop3: detected capacity change from 0 to 113552
Feb 13 15:20:44.961262 kernel: loop4: detected capacity change from 0 to 116784
Feb 13 15:20:44.966066 kernel: loop5: detected capacity change from 0 to 194096
Feb 13 15:20:44.974996 (sd-merge)[1187]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Feb 13 15:20:44.975488 (sd-merge)[1187]: Merged extensions into '/usr'.
Feb 13 15:20:44.979403 systemd[1]: Reloading requested from client PID 1159 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 15:20:44.979536 systemd[1]: Reloading...
Feb 13 15:20:45.052054 zram_generator::config[1213]: No configuration found.
Feb 13 15:20:45.087706 ldconfig[1154]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 15:20:45.150271 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:20:45.189579 systemd[1]: Reloading finished in 209 ms.
Feb 13 15:20:45.220694 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 15:20:45.221908 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 15:20:45.240239 systemd[1]: Starting ensure-sysext.service...
Feb 13 15:20:45.242052 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:20:45.255660 systemd[1]: Reloading requested from client PID 1247 ('systemctl') (unit ensure-sysext.service)...
Feb 13 15:20:45.255679 systemd[1]: Reloading...
Feb 13 15:20:45.262463 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 15:20:45.262684 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 15:20:45.263337 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 15:20:45.263530 systemd-tmpfiles[1248]: ACLs are not supported, ignoring.
Feb 13 15:20:45.263583 systemd-tmpfiles[1248]: ACLs are not supported, ignoring.
Feb 13 15:20:45.266330 systemd-tmpfiles[1248]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 15:20:45.266341 systemd-tmpfiles[1248]: Skipping /boot
Feb 13 15:20:45.274611 systemd-tmpfiles[1248]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 15:20:45.274628 systemd-tmpfiles[1248]: Skipping /boot
Feb 13 15:20:45.303088 zram_generator::config[1275]: No configuration found.
Feb 13 15:20:45.388701 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:20:45.428544 systemd[1]: Reloading finished in 172 ms.
Feb 13 15:20:45.444175 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 15:20:45.464513 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:20:45.471865 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 15:20:45.474245 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 15:20:45.476288 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 15:20:45.479281 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:20:45.487281 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:20:45.491353 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 15:20:45.494480 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:20:45.498199 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:20:45.500272 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:20:45.505380 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:20:45.506496 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:20:45.512108 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 15:20:45.514729 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 15:20:45.517370 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:20:45.519048 systemd-udevd[1316]: Using default interface naming scheme 'v255'.
Feb 13 15:20:45.519073 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:20:45.523143 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:20:45.523303 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:20:45.525687 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:20:45.526642 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:20:45.538578 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 15:20:45.542356 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:20:45.550365 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:20:45.553327 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 15:20:45.556322 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:20:45.558834 augenrules[1348]: No rules
Feb 13 15:20:45.559779 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:20:45.561266 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:20:45.563159 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 15:20:45.564483 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:20:45.567111 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 15:20:45.568413 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 15:20:45.570038 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 15:20:45.573999 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:20:45.574435 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:20:45.575697 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 15:20:45.575839 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 15:20:45.578007 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:20:45.578152 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:20:45.580257 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:20:45.580570 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:20:45.583279 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 15:20:45.593140 systemd[1]: Finished ensure-sysext.service.
Feb 13 15:20:45.595946 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 15:20:45.616234 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:20:45.617212 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 15:20:45.617283 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 15:20:45.621438 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Feb 13 15:20:45.626197 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 15:20:45.626536 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Feb 13 15:20:45.646193 systemd-resolved[1314]: Positive Trust Anchors:
Feb 13 15:20:45.646210 systemd-resolved[1314]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:20:45.646241 systemd-resolved[1314]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:20:45.656041 systemd-resolved[1314]: Defaulting to hostname 'linux'.
Feb 13 15:20:45.659609 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:20:45.661391 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:20:45.694065 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1375)
Feb 13 15:20:45.711323 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Feb 13 15:20:45.714318 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 15:20:45.715777 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 15:20:45.720394 systemd-networkd[1387]: lo: Link UP
Feb 13 15:20:45.720405 systemd-networkd[1387]: lo: Gained carrier
Feb 13 15:20:45.725130 systemd-networkd[1387]: Enumeration completed
Feb 13 15:20:45.725996 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 15:20:45.727438 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 15:20:45.728735 systemd[1]: Reached target network.target - Network.
Feb 13 15:20:45.733244 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 13 15:20:45.735732 systemd-networkd[1387]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:20:45.735742 systemd-networkd[1387]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:20:45.739088 systemd-networkd[1387]: eth0: Link UP
Feb 13 15:20:45.739097 systemd-networkd[1387]: eth0: Gained carrier
Feb 13 15:20:45.739112 systemd-networkd[1387]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:20:45.739338 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:20:45.744969 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 15:20:45.757230 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 15:20:45.762141 systemd-networkd[1387]: eth0: DHCPv4 address 10.0.0.35/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 15:20:45.763959 systemd-timesyncd[1388]: Network configuration changed, trying to establish connection.
Feb 13 15:20:45.764259 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 15:20:45.764977 systemd-timesyncd[1388]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Feb 13 15:20:45.765084 systemd-timesyncd[1388]: Initial clock synchronization to Thu 2025-02-13 15:20:45.826502 UTC.
Feb 13 15:20:45.778276 lvm[1406]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 15:20:45.794241 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:20:45.815102 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 15:20:45.816425 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:20:45.817291 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:20:45.818227 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Feb 13 15:20:45.819152 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Feb 13 15:20:45.820235 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Feb 13 15:20:45.821146 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Feb 13 15:20:45.822075 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Feb 13 15:20:45.822970 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 13 15:20:45.823002 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:20:45.823820 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:20:45.825453 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Feb 13 15:20:45.827683 systemd[1]: Starting docker.socket - Docker Socket for the API...
Feb 13 15:20:45.841002 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Feb 13 15:20:45.842921 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 13 15:20:45.844246 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Feb 13 15:20:45.845144 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:20:45.845815 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:20:45.846574 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Feb 13 15:20:45.846610 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Feb 13 15:20:45.847513 systemd[1]: Starting containerd.service - containerd container runtime...
Feb 13 15:20:45.849299 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Feb 13 15:20:45.852284 lvm[1414]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 15:20:45.853165 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Feb 13 15:20:45.858345 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Feb 13 15:20:45.859106 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Feb 13 15:20:45.860188 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Feb 13 15:20:45.860945 jq[1417]: false
Feb 13 15:20:45.862719 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Feb 13 15:20:45.865862 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Feb 13 15:20:45.870291 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Feb 13 15:20:45.873272 systemd[1]: Starting systemd-logind.service - User Login Management...
Feb 13 15:20:45.880087 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 13 15:20:45.880563 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 13 15:20:45.881586 systemd[1]: Starting update-engine.service - Update Engine...
Feb 13 15:20:45.884362 dbus-daemon[1416]: [system] SELinux support is enabled
Feb 13 15:20:45.885217 extend-filesystems[1418]: Found loop3
Feb 13 15:20:45.885938 extend-filesystems[1418]: Found loop4
Feb 13 15:20:45.885938 extend-filesystems[1418]: Found loop5
Feb 13 15:20:45.885938 extend-filesystems[1418]: Found vda
Feb 13 15:20:45.885938 extend-filesystems[1418]: Found vda1
Feb 13 15:20:45.885938 extend-filesystems[1418]: Found vda2
Feb 13 15:20:45.885938 extend-filesystems[1418]: Found vda3
Feb 13 15:20:45.885938 extend-filesystems[1418]: Found usr
Feb 13 15:20:45.885938 extend-filesystems[1418]: Found vda4
Feb 13 15:20:45.885938 extend-filesystems[1418]: Found vda6
Feb 13 15:20:45.885938 extend-filesystems[1418]: Found vda7
Feb 13 15:20:45.885938 extend-filesystems[1418]: Found vda9
Feb 13 15:20:45.885938 extend-filesystems[1418]: Checking size of /dev/vda9
Feb 13 15:20:45.885810 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Feb 13 15:20:45.887223 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Feb 13 15:20:45.908782 jq[1431]: true
Feb 13 15:20:45.890744 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Feb 13 15:20:45.897472 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 13 15:20:45.897663 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Feb 13 15:20:45.897961 systemd[1]: motdgen.service: Deactivated successfully.
Feb 13 15:20:45.898123 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Feb 13 15:20:45.901423 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 13 15:20:45.901588 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Feb 13 15:20:45.915398 jq[1440]: true
Feb 13 15:20:45.916658 (ntainerd)[1442]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Feb 13 15:20:45.919892 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 13 15:20:45.919958 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Feb 13 15:20:45.922654 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 13 15:20:45.922678 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Feb 13 15:20:45.927036 extend-filesystems[1418]: Resized partition /dev/vda9
Feb 13 15:20:45.931060 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1356)
Feb 13 15:20:45.934011 extend-filesystems[1454]: resize2fs 1.47.1 (20-May-2024)
Feb 13 15:20:45.935247 tar[1438]: linux-arm64/helm
Feb 13 15:20:45.943076 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Feb 13 15:20:45.951553 update_engine[1427]: I20250213 15:20:45.951306 1427 main.cc:92] Flatcar Update Engine starting
Feb 13 15:20:45.962082 systemd-logind[1425]: Watching system buttons on /dev/input/event0 (Power Button)
Feb 13 15:20:45.962280 systemd-logind[1425]: New seat seat0.
Feb 13 15:20:45.964977 systemd[1]: Started systemd-logind.service - User Login Management.
Feb 13 15:20:45.967046 systemd[1]: Started update-engine.service - Update Engine.
Feb 13 15:20:45.967302 update_engine[1427]: I20250213 15:20:45.967257 1427 update_check_scheduler.cc:74] Next update check in 3m28s
Feb 13 15:20:45.972142 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Feb 13 15:20:45.977249 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Feb 13 15:20:45.990334 extend-filesystems[1454]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Feb 13 15:20:45.990334 extend-filesystems[1454]: old_desc_blocks = 1, new_desc_blocks = 1
Feb 13 15:20:45.990334 extend-filesystems[1454]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Feb 13 15:20:45.995060 extend-filesystems[1418]: Resized filesystem in /dev/vda9
Feb 13 15:20:45.991732 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 13 15:20:45.991932 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Feb 13 15:20:46.022970 bash[1468]: Updated "/home/core/.ssh/authorized_keys"
Feb 13 15:20:46.025414 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Feb 13 15:20:46.027287 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Feb 13 15:20:46.036305 locksmithd[1470]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 13 15:20:46.152181 containerd[1442]: time="2025-02-13T15:20:46.152087121Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Feb 13 15:20:46.181465 containerd[1442]: time="2025-02-13T15:20:46.181344232Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:20:46.183042 containerd[1442]: time="2025-02-13T15:20:46.182991404Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:20:46.183042 containerd[1442]: time="2025-02-13T15:20:46.183036417Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 13 15:20:46.183115 containerd[1442]: time="2025-02-13T15:20:46.183058824Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 13 15:20:46.183236 containerd[1442]: time="2025-02-13T15:20:46.183208760Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Feb 13 15:20:46.183236 containerd[1442]: time="2025-02-13T15:20:46.183232290Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Feb 13 15:20:46.183312 containerd[1442]: time="2025-02-13T15:20:46.183296578Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:20:46.183332 containerd[1442]: time="2025-02-13T15:20:46.183312840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:20:46.183512 containerd[1442]: time="2025-02-13T15:20:46.183478356Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:20:46.183512 containerd[1442]: time="2025-02-13T15:20:46.183506023Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 13 15:20:46.183559 containerd[1442]: time="2025-02-13T15:20:46.183521803Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:20:46.183559 containerd[1442]: time="2025-02-13T15:20:46.183530798Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 13 15:20:46.183615 containerd[1442]: time="2025-02-13T15:20:46.183600506Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:20:46.183829 containerd[1442]: time="2025-02-13T15:20:46.183804129Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:20:46.183929 containerd[1442]: time="2025-02-13T15:20:46.183911783Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:20:46.183960 containerd[1442]: time="2025-02-13T15:20:46.183928527Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 13 15:20:46.184016 containerd[1442]: time="2025-02-13T15:20:46.184002451Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 13 15:20:46.184078 containerd[1442]: time="2025-02-13T15:20:46.184065253Z" level=info msg="metadata content store policy set" policy=shared
Feb 13 15:20:46.190837 containerd[1442]: time="2025-02-13T15:20:46.190796449Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 13 15:20:46.190880 containerd[1442]: time="2025-02-13T15:20:46.190856118Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 13 15:20:46.190880 containerd[1442]: time="2025-02-13T15:20:46.190872742Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Feb 13 15:20:46.190914 containerd[1442]: time="2025-02-13T15:20:46.190888121Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Feb 13 15:20:46.190914 containerd[1442]: time="2025-02-13T15:20:46.190902738Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 13 15:20:46.191174 containerd[1442]: time="2025-02-13T15:20:46.191139688Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 13 15:20:46.191648 containerd[1442]: time="2025-02-13T15:20:46.191628046Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 13 15:20:46.191780 containerd[1442]: time="2025-02-13T15:20:46.191756058Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Feb 13 15:20:46.191806 containerd[1442]: time="2025-02-13T15:20:46.191778986Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Feb 13 15:20:46.191806 containerd[1442]: time="2025-02-13T15:20:46.191794365Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Feb 13 15:20:46.191860 containerd[1442]: time="2025-02-13T15:20:46.191826087Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 13 15:20:46.191860 containerd[1442]: time="2025-02-13T15:20:46.191841025Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 13 15:20:46.191860 containerd[1442]: time="2025-02-13T15:20:46.191853794Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 13 15:20:46.191914 containerd[1442]: time="2025-02-13T15:20:46.191869092Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 13 15:20:46.191914 containerd[1442]: time="2025-02-13T15:20:46.191893265Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 13 15:20:46.191914 containerd[1442]: time="2025-02-13T15:20:46.191906235Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 13 15:20:46.191965 containerd[1442]: time="2025-02-13T15:20:46.191918723Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 13 15:20:46.191965 containerd[1442]: time="2025-02-13T15:20:46.191936431Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 13 15:20:46.195170 containerd[1442]: time="2025-02-13T15:20:46.195140509Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 13 15:20:46.195200 containerd[1442]: time="2025-02-13T15:20:46.195171990Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 13 15:20:46.195200 containerd[1442]: time="2025-02-13T15:20:46.195186646Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 13 15:20:46.195250 containerd[1442]: time="2025-02-13T15:20:46.195208129Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 13 15:20:46.195250 containerd[1442]: time="2025-02-13T15:20:46.195224833Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 13 15:20:46.195250 containerd[1442]: time="2025-02-13T15:20:46.195244027Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 13 15:20:46.195304 containerd[1442]: time="2025-02-13T15:20:46.195255993Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 13 15:20:46.195304 containerd[1442]: time="2025-02-13T15:20:46.195268400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 13 15:20:46.195304 containerd[1442]: time="2025-02-13T15:20:46.195280728Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Feb 13 15:20:46.195304 containerd[1442]: time="2025-02-13T15:20:46.195295545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Feb 13 15:20:46.195371 containerd[1442]: time="2025-02-13T15:20:46.195307792Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 13 15:20:46.195423 containerd[1442]: time="2025-02-13T15:20:46.195320882Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Feb 13 15:20:46.195448 containerd[1442]: time="2025-02-13T15:20:46.195433033Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 13 15:20:46.195467 containerd[1442]: time="2025-02-13T15:20:46.195451745Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Feb 13 15:20:46.195485 containerd[1442]: time="2025-02-13T15:20:46.195475155Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Feb 13 15:20:46.195512 containerd[1442]: time="2025-02-13T15:20:46.195497320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 13 15:20:46.195530 containerd[1442]: time="2025-02-13T15:20:46.195509206Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 13 15:20:46.195771 containerd[1442]: time="2025-02-13T15:20:46.195752742Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 13 15:20:46.195838 containerd[1442]: time="2025-02-13T15:20:46.195818636Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Feb 13 15:20:46.195866 containerd[1442]: time="2025-02-13T15:20:46.195842367Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 13 15:20:46.195866 containerd[1442]: time="2025-02-13T15:20:46.195855859Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Feb 13 15:20:46.195866 containerd[1442]: time="2025-02-13T15:20:46.195864411Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 13 15:20:46.195924 containerd[1442]: time="2025-02-13T15:20:46.195876458Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Feb 13 15:20:46.195924 containerd[1442]: time="2025-02-13T15:20:46.195887781Z" level=info msg="NRI interface is disabled by configuration."
Feb 13 15:20:46.195924 containerd[1442]: time="2025-02-13T15:20:46.195897780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 13 15:20:46.196508 containerd[1442]: time="2025-02-13T15:20:46.196397300Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 13 15:20:46.196508 containerd[1442]: time="2025-02-13T15:20:46.196506480Z" level=info msg="Connect containerd service"
Feb 13 15:20:46.196665 containerd[1442]: time="2025-02-13T15:20:46.196553501Z" level=info msg="using legacy CRI server"
Feb 13 15:20:46.196665 containerd[1442]: time="2025-02-13T15:20:46.196564182Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Feb 13 15:20:46.197082 containerd[1442]: time="2025-02-13T15:20:46.197055912Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 13 15:20:46.198081 containerd[1442]: time="2025-02-13T15:20:46.198052061Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 15:20:46.201050 containerd[1442]: time="2025-02-13T15:20:46.199923576Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 13 15:20:46.201050 containerd[1442]: time="2025-02-13T15:20:46.199982844Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 13 15:20:46.204099 containerd[1442]: time="2025-02-13T15:20:46.204049156Z" level=info msg="Start subscribing containerd event"
Feb 13 15:20:46.204235 containerd[1442]: time="2025-02-13T15:20:46.204218607Z" level=info msg="Start recovering state"
Feb 13 15:20:46.204351 containerd[1442]: time="2025-02-13T15:20:46.204337625Z" level=info msg="Start event monitor"
Feb 13 15:20:46.204410 containerd[1442]: time="2025-02-13T15:20:46.204398017Z" level=info msg="Start snapshots syncer"
Feb 13 15:20:46.204458 containerd[1442]: time="2025-02-13T15:20:46.204446403Z" level=info msg="Start cni network conf syncer for default"
Feb 13 15:20:46.204502 containerd[1442]: time="2025-02-13T15:20:46.204491817Z" level=info msg="Start streaming server"
Feb 13 15:20:46.204692 containerd[1442]: time="2025-02-13T15:20:46.204678254Z" level=info msg="containerd successfully booted in 0.054273s"
Feb 13 15:20:46.204775 systemd[1]: Started containerd.service - containerd container runtime.
Feb 13 15:20:46.328302 tar[1438]: linux-arm64/LICENSE
Feb 13 15:20:46.328400 tar[1438]: linux-arm64/README.md
Feb 13 15:20:46.341185 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Feb 13 15:20:46.645662 sshd_keygen[1437]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 13 15:20:46.663699 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Feb 13 15:20:46.674318 systemd[1]: Starting issuegen.service - Generate /run/issue...
Feb 13 15:20:46.679339 systemd[1]: issuegen.service: Deactivated successfully.
Feb 13 15:20:46.681078 systemd[1]: Finished issuegen.service - Generate /run/issue.
Feb 13 15:20:46.683354 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Feb 13 15:20:46.696254 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Feb 13 15:20:46.698595 systemd[1]: Started getty@tty1.service - Getty on tty1.
Feb 13 15:20:46.701187 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Feb 13 15:20:46.702133 systemd[1]: Reached target getty.target - Login Prompts.
Feb 13 15:20:47.424653 systemd-networkd[1387]: eth0: Gained IPv6LL
Feb 13 15:20:47.426783 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Feb 13 15:20:47.428725 systemd[1]: Reached target network-online.target - Network is Online.
Feb 13 15:20:47.442400 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Feb 13 15:20:47.444610 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:20:47.446446 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Feb 13 15:20:47.464194 systemd[1]: coreos-metadata.service: Deactivated successfully.
Feb 13 15:20:47.464370 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Feb 13 15:20:47.465581 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Feb 13 15:20:47.469287 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Feb 13 15:20:47.924947 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:20:47.926296 systemd[1]: Reached target multi-user.target - Multi-User System.
Feb 13 15:20:47.928878 (kubelet)[1529]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 15:20:47.932415 systemd[1]: Startup finished in 527ms (kernel) + 4.483s (initrd) + 3.785s (userspace) = 8.795s.
Feb 13 15:20:47.940037 agetty[1505]: failed to open credentials directory
Feb 13 15:20:47.941674 agetty[1506]: failed to open credentials directory
Feb 13 15:20:48.507612 kubelet[1529]: E0213 15:20:48.507509 1529 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 15:20:48.510203 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 15:20:48.510359 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 15:20:52.017621 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Feb 13 15:20:52.018730 systemd[1]: Started sshd@0-10.0.0.35:22-10.0.0.1:50904.service - OpenSSH per-connection server daemon (10.0.0.1:50904).
Feb 13 15:20:52.075592 sshd[1543]: Accepted publickey for core from 10.0.0.1 port 50904 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY
Feb 13 15:20:52.077638 sshd-session[1543]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:20:52.087085 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Feb 13 15:20:52.096341 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Feb 13 15:20:52.098175 systemd-logind[1425]: New session 1 of user core.
Feb 13 15:20:52.110413 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Feb 13 15:20:52.112613 systemd[1]: Starting user@500.service - User Manager for UID 500...
Feb 13 15:20:52.120700 (systemd)[1547]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb 13 15:20:52.197579 systemd[1547]: Queued start job for default target default.target.
Feb 13 15:20:52.204992 systemd[1547]: Created slice app.slice - User Application Slice.
Feb 13 15:20:52.205053 systemd[1547]: Reached target paths.target - Paths.
Feb 13 15:20:52.205066 systemd[1547]: Reached target timers.target - Timers.
Feb 13 15:20:52.206222 systemd[1547]: Starting dbus.socket - D-Bus User Message Bus Socket...
Feb 13 15:20:52.215957 systemd[1547]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Feb 13 15:20:52.216018 systemd[1547]: Reached target sockets.target - Sockets.
Feb 13 15:20:52.216044 systemd[1547]: Reached target basic.target - Basic System.
Feb 13 15:20:52.216078 systemd[1547]: Reached target default.target - Main User Target.
Feb 13 15:20:52.216102 systemd[1547]: Startup finished in 90ms.
Feb 13 15:20:52.216325 systemd[1]: Started user@500.service - User Manager for UID 500.
Feb 13 15:20:52.217622 systemd[1]: Started session-1.scope - Session 1 of User core.
Feb 13 15:20:52.276537 systemd[1]: Started sshd@1-10.0.0.35:22-10.0.0.1:50914.service - OpenSSH per-connection server daemon (10.0.0.1:50914).
Feb 13 15:20:52.320632 sshd[1558]: Accepted publickey for core from 10.0.0.1 port 50914 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY
Feb 13 15:20:52.321840 sshd-session[1558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:20:52.325899 systemd-logind[1425]: New session 2 of user core.
Feb 13 15:20:52.338178 systemd[1]: Started session-2.scope - Session 2 of User core.
Feb 13 15:20:52.392202 sshd[1560]: Connection closed by 10.0.0.1 port 50914
Feb 13 15:20:52.392609 sshd-session[1558]: pam_unix(sshd:session): session closed for user core
Feb 13 15:20:52.400292 systemd[1]: sshd@1-10.0.0.35:22-10.0.0.1:50914.service: Deactivated successfully.
Feb 13 15:20:52.401931 systemd[1]: session-2.scope: Deactivated successfully.
Feb 13 15:20:52.405904 systemd-logind[1425]: Session 2 logged out. Waiting for processes to exit.
Feb 13 15:20:52.417393 systemd[1]: Started sshd@2-10.0.0.35:22-10.0.0.1:50928.service - OpenSSH per-connection server daemon (10.0.0.1:50928).
Feb 13 15:20:52.418389 systemd-logind[1425]: Removed session 2.
Feb 13 15:20:52.459232 sshd[1565]: Accepted publickey for core from 10.0.0.1 port 50928 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY
Feb 13 15:20:52.460523 sshd-session[1565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:20:52.464165 systemd-logind[1425]: New session 3 of user core.
Feb 13 15:20:52.475179 systemd[1]: Started session-3.scope - Session 3 of User core.
Feb 13 15:20:52.524011 sshd[1567]: Connection closed by 10.0.0.1 port 50928
Feb 13 15:20:52.524531 sshd-session[1565]: pam_unix(sshd:session): session closed for user core
Feb 13 15:20:52.531236 systemd[1]: sshd@2-10.0.0.35:22-10.0.0.1:50928.service: Deactivated successfully.
Feb 13 15:20:52.532513 systemd[1]: session-3.scope: Deactivated successfully.
Feb 13 15:20:52.535182 systemd-logind[1425]: Session 3 logged out. Waiting for processes to exit.
Feb 13 15:20:52.543313 systemd[1]: Started sshd@3-10.0.0.35:22-10.0.0.1:44664.service - OpenSSH per-connection server daemon (10.0.0.1:44664).
Feb 13 15:20:52.544449 systemd-logind[1425]: Removed session 3.
Feb 13 15:20:52.592453 sshd[1572]: Accepted publickey for core from 10.0.0.1 port 44664 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY
Feb 13 15:20:52.593800 sshd-session[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:20:52.598665 systemd-logind[1425]: New session 4 of user core.
Feb 13 15:20:52.604220 systemd[1]: Started session-4.scope - Session 4 of User core.
Feb 13 15:20:52.656504 sshd[1574]: Connection closed by 10.0.0.1 port 44664
Feb 13 15:20:52.656873 sshd-session[1572]: pam_unix(sshd:session): session closed for user core
Feb 13 15:20:52.665230 systemd[1]: sshd@3-10.0.0.35:22-10.0.0.1:44664.service: Deactivated successfully.
Feb 13 15:20:52.666638 systemd[1]: session-4.scope: Deactivated successfully.
Feb 13 15:20:52.667962 systemd-logind[1425]: Session 4 logged out. Waiting for processes to exit.
Feb 13 15:20:52.669150 systemd[1]: Started sshd@4-10.0.0.35:22-10.0.0.1:44672.service - OpenSSH per-connection server daemon (10.0.0.1:44672).
Feb 13 15:20:52.669965 systemd-logind[1425]: Removed session 4.
Feb 13 15:20:52.714401 sshd[1579]: Accepted publickey for core from 10.0.0.1 port 44672 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY
Feb 13 15:20:52.716081 sshd-session[1579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:20:52.720260 systemd-logind[1425]: New session 5 of user core.
Feb 13 15:20:52.727207 systemd[1]: Started session-5.scope - Session 5 of User core.
Feb 13 15:20:52.787325 sudo[1582]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Feb 13 15:20:52.789538 sudo[1582]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 15:20:52.803956 sudo[1582]: pam_unix(sudo:session): session closed for user root
Feb 13 15:20:52.807776 sshd[1581]: Connection closed by 10.0.0.1 port 44672
Feb 13 15:20:52.807622 sshd-session[1579]: pam_unix(sshd:session): session closed for user core
Feb 13 15:20:52.817428 systemd[1]: sshd@4-10.0.0.35:22-10.0.0.1:44672.service: Deactivated successfully.
Feb 13 15:20:52.818880 systemd[1]: session-5.scope: Deactivated successfully.
Feb 13 15:20:52.821267 systemd-logind[1425]: Session 5 logged out. Waiting for processes to exit.
Feb 13 15:20:52.823244 systemd[1]: Started sshd@5-10.0.0.35:22-10.0.0.1:44678.service - OpenSSH per-connection server daemon (10.0.0.1:44678).
Feb 13 15:20:52.824653 systemd-logind[1425]: Removed session 5.
Feb 13 15:20:52.867355 sshd[1587]: Accepted publickey for core from 10.0.0.1 port 44678 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY
Feb 13 15:20:52.868540 sshd-session[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:20:52.873240 systemd-logind[1425]: New session 6 of user core.
Feb 13 15:20:52.885236 systemd[1]: Started session-6.scope - Session 6 of User core.
Feb 13 15:20:52.940631 sudo[1591]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Feb 13 15:20:52.940905 sudo[1591]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 15:20:52.944659 sudo[1591]: pam_unix(sudo:session): session closed for user root
Feb 13 15:20:52.949862 sudo[1590]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Feb 13 15:20:52.950176 sudo[1590]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 15:20:52.971390 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 15:20:52.997672 augenrules[1613]: No rules
Feb 13 15:20:52.998647 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 15:20:52.998826 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 15:20:52.999846 sudo[1590]: pam_unix(sudo:session): session closed for user root
Feb 13 15:20:53.001379 sshd[1589]: Connection closed by 10.0.0.1 port 44678
Feb 13 15:20:53.001824 sshd-session[1587]: pam_unix(sshd:session): session closed for user core
Feb 13 15:20:53.015636 systemd[1]: sshd@5-10.0.0.35:22-10.0.0.1:44678.service: Deactivated successfully.
Feb 13 15:20:53.017603 systemd[1]: session-6.scope: Deactivated successfully.
Feb 13 15:20:53.020493 systemd-logind[1425]: Session 6 logged out. Waiting for processes to exit.
Feb 13 15:20:53.029357 systemd[1]: Started sshd@6-10.0.0.35:22-10.0.0.1:44690.service - OpenSSH per-connection server daemon (10.0.0.1:44690).
Feb 13 15:20:53.030521 systemd-logind[1425]: Removed session 6.
Feb 13 15:20:53.069524 sshd[1621]: Accepted publickey for core from 10.0.0.1 port 44690 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY
Feb 13 15:20:53.070513 sshd-session[1621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:20:53.074806 systemd-logind[1425]: New session 7 of user core.
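Each privileged command above is recorded by sudo in a fixed `user : PWD=... ; USER=... ; COMMAND=...` layout, which makes the audit trail easy to extract mechanically. A minimal sketch (the regex and function name are illustrative, not part of sudo; the sample line is taken from this log):

```python
import re

# sudo records each invocation as:
#   sudo[PID]: user : PWD=dir ; USER=runas ; COMMAND=cmd args...
SUDO_RE = re.compile(
    r"sudo\[(?P<pid>\d+)\]: (?P<user>\S+) : PWD=(?P<pwd>\S+) ; "
    r"USER=(?P<runas>\S+) ; COMMAND=(?P<cmd>.+)$"
)

def parse_sudo(line: str):
    """Return (invoking user, run-as user, full command) or None for non-command lines."""
    m = SUDO_RE.search(line)
    if m is None:
        return None
    return m.group("user"), m.group("runas"), m.group("cmd")

line = ("Feb 13 15:20:52.949862 sudo[1590]: core : PWD=/home/core ; USER=root ; "
        "COMMAND=/usr/sbin/systemctl restart audit-rules")
print(parse_sudo(line))
```

The accompanying `pam_unix(sudo:session): session opened/closed` lines deliberately do not match, so a scan of the journal with this helper yields only the commands themselves.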
Feb 13 15:20:53.088224 systemd[1]: Started session-7.scope - Session 7 of User core.
Feb 13 15:20:53.138851 sudo[1624]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 13 15:20:53.139156 sudo[1624]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 15:20:53.502329 systemd[1]: Starting docker.service - Docker Application Container Engine...
Feb 13 15:20:53.502388 (dockerd)[1644]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Feb 13 15:20:53.759644 dockerd[1644]: time="2025-02-13T15:20:53.759516251Z" level=info msg="Starting up"
Feb 13 15:20:53.907296 dockerd[1644]: time="2025-02-13T15:20:53.907252528Z" level=info msg="Loading containers: start."
Feb 13 15:20:54.037083 kernel: Initializing XFRM netlink socket
Feb 13 15:20:54.106656 systemd-networkd[1387]: docker0: Link UP
Feb 13 15:20:54.230283 dockerd[1644]: time="2025-02-13T15:20:54.230236472Z" level=info msg="Loading containers: done."
Feb 13 15:20:54.243320 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1599803961-merged.mount: Deactivated successfully.
Feb 13 15:20:54.245275 dockerd[1644]: time="2025-02-13T15:20:54.245222823Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Feb 13 15:20:54.245365 dockerd[1644]: time="2025-02-13T15:20:54.245317160Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Feb 13 15:20:54.245507 dockerd[1644]: time="2025-02-13T15:20:54.245475243Z" level=info msg="Daemon has completed initialization"
Feb 13 15:20:54.272168 dockerd[1644]: time="2025-02-13T15:20:54.272055150Z" level=info msg="API listen on /run/docker.sock"
Feb 13 15:20:54.272309 systemd[1]: Started docker.service - Docker Application Container Engine.
Feb 13 15:20:54.870664 containerd[1442]: time="2025-02-13T15:20:54.870621558Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\""
Feb 13 15:20:55.542066 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount267351774.mount: Deactivated successfully.
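The dockerd and containerd lines above carry their payload as logfmt-style `key=value` and `key="quoted value"` fields after the journald prefix. A minimal field extractor for that shape (the regex and helper are an assumption for illustration, not a dockerd API; it does not handle every logfmt corner case):

```python
import re

# Matches key="quoted value" (with backslash escapes) or bare key=value,
# e.g.: time="2025-02-13T15:20:53.759516251Z" level=info msg="Starting up"
FIELD_RE = re.compile(
    r'(?P<qk>[\w.-]+)="(?P<qv>(?:[^"\\]|\\.)*)"|(?P<k>[\w.-]+)=(?P<v>\S+)'
)

def parse_logfmt(line: str) -> dict:
    """Collect the key=value fields of a dockerd/containerd log line into a dict."""
    fields = {}
    for m in FIELD_RE.finditer(line):
        if m.group("qk") is not None:
            fields[m.group("qk")] = m.group("qv")
        else:
            fields[m.group("k")] = m.group("v")
    return fields

fields = parse_logfmt('time="2025-02-13T15:20:53.759516251Z" level=info msg="Starting up"')
print(fields)
```

Filtering the journal for `fields.get("level") == "warning"` would surface lines like the overlay2 native-diff warning without string matching on message text.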
Feb 13 15:20:56.586033 containerd[1442]: time="2025-02-13T15:20:56.585974774Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:20:56.586540 containerd[1442]: time="2025-02-13T15:20:56.586493101Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.10: active requests=0, bytes read=29865209"
Feb 13 15:20:56.587446 containerd[1442]: time="2025-02-13T15:20:56.587384381Z" level=info msg="ImageCreate event name:\"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:20:56.590193 containerd[1442]: time="2025-02-13T15:20:56.590159827Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:20:56.591425 containerd[1442]: time="2025-02-13T15:20:56.591374721Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.10\" with image id \"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\", size \"29862007\" in 1.720705697s"
Feb 13 15:20:56.591425 containerd[1442]: time="2025-02-13T15:20:56.591413960Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\" returns image reference \"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\""
Feb 13 15:20:56.613511 containerd[1442]: time="2025-02-13T15:20:56.613473758Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\""
Feb 13 15:20:58.179399 containerd[1442]: time="2025-02-13T15:20:58.179159436Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:20:58.180416 containerd[1442]: time="2025-02-13T15:20:58.180374914Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.10: active requests=0, bytes read=26898596"
Feb 13 15:20:58.181233 containerd[1442]: time="2025-02-13T15:20:58.181161271Z" level=info msg="ImageCreate event name:\"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:20:58.184573 containerd[1442]: time="2025-02-13T15:20:58.184537546Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:20:58.186474 containerd[1442]: time="2025-02-13T15:20:58.186127930Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.10\" with image id \"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\", size \"28302323\" in 1.572613252s"
Feb 13 15:20:58.186474 containerd[1442]: time="2025-02-13T15:20:58.186171086Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\" returns image reference \"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\""
Feb 13 15:20:58.204628 containerd[1442]: time="2025-02-13T15:20:58.204583982Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\""
Feb 13 15:20:58.628949 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Feb 13 15:20:58.639230 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:20:58.726900 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:20:58.730712 (kubelet)[1929]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 15:20:58.772070 kubelet[1929]: E0213 15:20:58.771953 1929 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 15:20:58.775290 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 15:20:58.775469 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 15:20:59.467692 containerd[1442]: time="2025-02-13T15:20:59.467636850Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:20:59.468776 containerd[1442]: time="2025-02-13T15:20:59.468738123Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.10: active requests=0, bytes read=16164936"
Feb 13 15:20:59.469674 containerd[1442]: time="2025-02-13T15:20:59.469649521Z" level=info msg="ImageCreate event name:\"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:20:59.472337 containerd[1442]: time="2025-02-13T15:20:59.472301776Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:20:59.473488 containerd[1442]: time="2025-02-13T15:20:59.473451531Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.10\" with image id \"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\", size \"17568681\" in 1.268824714s"
Feb 13 15:20:59.473528 containerd[1442]: time="2025-02-13T15:20:59.473491237Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\" returns image reference \"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\""
Feb 13 15:20:59.492548 containerd[1442]: time="2025-02-13T15:20:59.492391544Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\""
Feb 13 15:21:00.439109 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1110185083.mount: Deactivated successfully.
Feb 13 15:21:00.794479 containerd[1442]: time="2025-02-13T15:21:00.794351461Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:21:00.795090 containerd[1442]: time="2025-02-13T15:21:00.795045905Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=25663372"
Feb 13 15:21:00.796050 containerd[1442]: time="2025-02-13T15:21:00.796011211Z" level=info msg="ImageCreate event name:\"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:21:00.798059 containerd[1442]: time="2025-02-13T15:21:00.798015901Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:21:00.798686 containerd[1442]: time="2025-02-13T15:21:00.798648729Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"25662389\" in 1.306222407s"
Feb 13 15:21:00.798723 containerd[1442]: time="2025-02-13T15:21:00.798684264Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\""
Feb 13 15:21:00.817496 containerd[1442]: time="2025-02-13T15:21:00.817456688Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Feb 13 15:21:01.451379 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount884292292.mount: Deactivated successfully.
Feb 13 15:21:01.997135 containerd[1442]: time="2025-02-13T15:21:01.997084179Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:21:01.998590 containerd[1442]: time="2025-02-13T15:21:01.998542313Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383"
Feb 13 15:21:01.999567 containerd[1442]: time="2025-02-13T15:21:01.999530880Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:21:02.002654 containerd[1442]: time="2025-02-13T15:21:02.002579920Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:21:02.004504 containerd[1442]: time="2025-02-13T15:21:02.004367694Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.186869182s"
Feb 13 15:21:02.004504 containerd[1442]: time="2025-02-13T15:21:02.004417161Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Feb 13 15:21:02.027791 containerd[1442]: time="2025-02-13T15:21:02.027747536Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Feb 13 15:21:02.463584 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3156626300.mount: Deactivated successfully.
Feb 13 15:21:02.468358 containerd[1442]: time="2025-02-13T15:21:02.468305766Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:21:02.469226 containerd[1442]: time="2025-02-13T15:21:02.469175279Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823"
Feb 13 15:21:02.470476 containerd[1442]: time="2025-02-13T15:21:02.470204451Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:21:02.473379 containerd[1442]: time="2025-02-13T15:21:02.473339553Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:21:02.474368 containerd[1442]: time="2025-02-13T15:21:02.474335600Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 446.549932ms"
Feb 13 15:21:02.474592 containerd[1442]: time="2025-02-13T15:21:02.474461453Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Feb 13 15:21:02.494973 containerd[1442]: time="2025-02-13T15:21:02.494930622Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Feb 13 15:21:03.155052 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1962204197.mount: Deactivated successfully.
Feb 13 15:21:05.162806 containerd[1442]: time="2025-02-13T15:21:05.162742832Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:21:05.163967 containerd[1442]: time="2025-02-13T15:21:05.163920885Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474"
Feb 13 15:21:05.165105 containerd[1442]: time="2025-02-13T15:21:05.165047158Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:21:05.168433 containerd[1442]: time="2025-02-13T15:21:05.168370197Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:21:05.169913 containerd[1442]: time="2025-02-13T15:21:05.169784436Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 2.674806672s"
Feb 13 15:21:05.169913 containerd[1442]: time="2025-02-13T15:21:05.169820517Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\""
Feb 13 15:21:08.879587 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Feb 13 15:21:08.890680 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:21:09.019021 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:21:09.025351 (kubelet)[2152]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 15:21:09.062566 kubelet[2152]: E0213 15:21:09.062512 2152 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 15:21:09.065411 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 15:21:09.065711 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 15:21:10.555192 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:21:10.566280 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:21:10.581197 systemd[1]: Reloading requested from client PID 2168 ('systemctl') (unit session-7.scope)...
Feb 13 15:21:10.581216 systemd[1]: Reloading...
Feb 13 15:21:10.648053 zram_generator::config[2210]: No configuration found.
Feb 13 15:21:10.759893 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:21:10.816339 systemd[1]: Reloading finished in 234 ms.
Feb 13 15:21:10.857698 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Feb 13 15:21:10.857760 systemd[1]: kubelet.service: Failed with result 'signal'.
Feb 13 15:21:10.857974 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:21:10.860319 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:21:10.949129 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:21:10.952757 (kubelet)[2253]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 15:21:10.993184 kubelet[2253]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:21:10.993184 kubelet[2253]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 13 15:21:10.993184 kubelet[2253]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:21:10.996782 kubelet[2253]: I0213 15:21:10.996730 2253 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 15:21:11.927054 kubelet[2253]: I0213 15:21:11.925256 2253 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Feb 13 15:21:11.927054 kubelet[2253]: I0213 15:21:11.925287 2253 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 15:21:11.927054 kubelet[2253]: I0213 15:21:11.925480 2253 server.go:927] "Client rotation is on, will bootstrap in background"
Feb 13 15:21:11.968069 kubelet[2253]: I0213 15:21:11.967907 2253 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 15:21:11.968069 kubelet[2253]: E0213 15:21:11.968001 2253 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.35:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.35:6443: connect: connection refused
Feb 13 15:21:11.978359 kubelet[2253]: I0213 15:21:11.978311 2253 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 13 15:21:11.979666 kubelet[2253]: I0213 15:21:11.979626 2253 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 15:21:11.979859 kubelet[2253]: I0213 15:21:11.979679 2253 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Feb 13 15:21:11.979969 kubelet[2253]: I0213 15:21:11.979924 2253 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 15:21:11.979969 kubelet[2253]: I0213 15:21:11.979934 2253 container_manager_linux.go:301] "Creating device plugin manager"
Feb 13 15:21:11.980223 kubelet[2253]: I0213 15:21:11.980194 2253 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:21:11.981138 kubelet[2253]: I0213 15:21:11.981114 2253 kubelet.go:400] "Attempting to sync node with API server"
Feb 13 15:21:11.981138 kubelet[2253]: I0213 15:21:11.981135 2253 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 15:21:11.981345 kubelet[2253]: I0213 15:21:11.981326 2253 kubelet.go:312] "Adding apiserver pod source"
Feb 13 15:21:11.981458 kubelet[2253]: I0213 15:21:11.981437 2253 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 15:21:11.981952 kubelet[2253]: W0213 15:21:11.981754 2253 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused
Feb 13 15:21:11.981952 kubelet[2253]: E0213 15:21:11.981821 2253 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused
Feb 13 15:21:11.981952 kubelet[2253]: W0213 15:21:11.981879 2253 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.35:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused
Feb 13 15:21:11.981952 kubelet[2253]: E0213 15:21:11.981921 2253 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.35:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused
Feb 13 15:21:11.984186 kubelet[2253]: I0213 15:21:11.984166 2253 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Feb 13 15:21:11.984724 kubelet[2253]: I0213 15:21:11.984709 2253 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 13 15:21:11.985327 kubelet[2253]: W0213 15:21:11.985010 2253 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 13 15:21:11.985973 kubelet[2253]: I0213 15:21:11.985844 2253 server.go:1264] "Started kubelet"
Feb 13 15:21:11.986987 kubelet[2253]: I0213 15:21:11.986956 2253 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 15:21:11.988085 kubelet[2253]: I0213 15:21:11.988064 2253 server.go:455] "Adding debug handlers to kubelet server"
Feb 13 15:21:11.989011 kubelet[2253]: I0213 15:21:11.988953 2253 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 13 15:21:11.989256 kubelet[2253]: I0213 15:21:11.989236 2253 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 13 15:21:11.990594 kubelet[2253]: I0213 15:21:11.990557 2253 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 13 15:21:11.992330 kubelet[2253]: E0213 15:21:11.989252 2253 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.35:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.35:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823cdb82bb0a187 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 15:21:11.985815943 +0000 UTC m=+1.030069342,LastTimestamp:2025-02-13 15:21:11.985815943 +0000 UTC m=+1.030069342,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Feb 13 15:21:11.993163 kubelet[2253]: E0213 15:21:11.992831 2253 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 13 15:21:11.993163 kubelet[2253]: I0213 15:21:11.992945 2253 volume_manager.go:291] "Starting Kubelet Volume Manager"
Feb 13 15:21:11.993163 kubelet[2253]: I0213 15:21:11.993021 2253 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Feb 13 15:21:11.993516 kubelet[2253]: E0213 15:21:11.993447 2253 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.35:6443: connect: connection refused" interval="200ms"
Feb 13 15:21:11.994206 kubelet[2253]: W0213 15:21:11.993914 2253 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused
Feb 13 15:21:11.994206 kubelet[2253]: E0213 15:21:11.993973 2253 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused
Feb 13 15:21:11.994206 kubelet[2253]: I0213 15:21:11.994101 2253 factory.go:221] Registration of the systemd container factory successfully
Feb 13 15:21:11.994206 kubelet[2253]: I0213 15:21:11.994173 2253 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 13 15:21:11.995106 kubelet[2253]: I0213 15:21:11.994311 2253 reconciler.go:26] "Reconciler: start to sync state"
Feb 13 15:21:11.995106 kubelet[2253]: E0213 15:21:11.994847 2253 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 13 15:21:11.995316 kubelet[2253]: I0213 15:21:11.995297 2253 factory.go:221] Registration of the containerd container factory successfully
Feb 13 15:21:12.007453 kubelet[2253]: I0213 15:21:12.007416 2253 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 13 15:21:12.007453 kubelet[2253]: I0213 15:21:12.007435 2253 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 13 15:21:12.007453 kubelet[2253]: I0213 15:21:12.007452 2253 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:21:12.008448 kubelet[2253]: I0213 15:21:12.008358 2253 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 13 15:21:12.009494 kubelet[2253]: I0213 15:21:12.009474 2253 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 13 15:21:12.009638 kubelet[2253]: I0213 15:21:12.009624 2253 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 13 15:21:12.009671 kubelet[2253]: I0213 15:21:12.009644 2253 kubelet.go:2337] "Starting kubelet main sync loop"
Feb 13 15:21:12.009781 kubelet[2253]: E0213 15:21:12.009684 2253 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 13 15:21:12.010237 kubelet[2253]: W0213 15:21:12.010210 2253 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused
Feb 13 15:21:12.010495 kubelet[2253]: E0213 15:21:12.010355 2253 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused
Feb 13 15:21:12.073048 kubelet[2253]: I0213 15:21:12.072968 2253 policy_none.go:49] "None policy: Start"
Feb 13 15:21:12.073904 kubelet[2253]: I0213 15:21:12.073875 2253 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 13 15:21:12.073904 kubelet[2253]: I0213 15:21:12.073906 2253 state_mem.go:35] "Initializing new in-memory state store"
Feb 13 15:21:12.088575 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Feb 13 15:21:12.094238 kubelet[2253]: I0213 15:21:12.094205 2253 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:21:12.094568 kubelet[2253]: E0213 15:21:12.094543 2253 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.35:6443/api/v1/nodes\": dial tcp 10.0.0.35:6443: connect: connection refused" node="localhost" Feb 13 15:21:12.101905 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 15:21:12.105136 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 15:21:12.110049 kubelet[2253]: E0213 15:21:12.109996 2253 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 15:21:12.113786 kubelet[2253]: I0213 15:21:12.113747 2253 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:21:12.114043 kubelet[2253]: I0213 15:21:12.113985 2253 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 15:21:12.114271 kubelet[2253]: I0213 15:21:12.114129 2253 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:21:12.115256 kubelet[2253]: E0213 15:21:12.115205 2253 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 13 15:21:12.194433 kubelet[2253]: E0213 15:21:12.194309 2253 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.35:6443: connect: connection refused" interval="400ms" Feb 13 15:21:12.295821 kubelet[2253]: I0213 15:21:12.295771 2253 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:21:12.296189 kubelet[2253]: 
E0213 15:21:12.296156 2253 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.35:6443/api/v1/nodes\": dial tcp 10.0.0.35:6443: connect: connection refused" node="localhost" Feb 13 15:21:12.310268 kubelet[2253]: I0213 15:21:12.310211 2253 topology_manager.go:215] "Topology Admit Handler" podUID="9553977ee1671afe8ed01e8dfb8f454e" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 13 15:21:12.311506 kubelet[2253]: I0213 15:21:12.311424 2253 topology_manager.go:215] "Topology Admit Handler" podUID="dd3721fb1a67092819e35b40473f4063" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 13 15:21:12.312676 kubelet[2253]: I0213 15:21:12.312308 2253 topology_manager.go:215] "Topology Admit Handler" podUID="8d610d6c43052dbc8df47eb68906a982" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 13 15:21:12.318201 systemd[1]: Created slice kubepods-burstable-poddd3721fb1a67092819e35b40473f4063.slice - libcontainer container kubepods-burstable-poddd3721fb1a67092819e35b40473f4063.slice. Feb 13 15:21:12.346111 systemd[1]: Created slice kubepods-burstable-pod9553977ee1671afe8ed01e8dfb8f454e.slice - libcontainer container kubepods-burstable-pod9553977ee1671afe8ed01e8dfb8f454e.slice. Feb 13 15:21:12.362753 systemd[1]: Created slice kubepods-burstable-pod8d610d6c43052dbc8df47eb68906a982.slice - libcontainer container kubepods-burstable-pod8d610d6c43052dbc8df47eb68906a982.slice. 
Feb 13 15:21:12.395694 kubelet[2253]: I0213 15:21:12.395660 2253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:21:12.395694 kubelet[2253]: I0213 15:21:12.395699 2253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:21:12.395861 kubelet[2253]: I0213 15:21:12.395721 2253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:21:12.395861 kubelet[2253]: I0213 15:21:12.395752 2253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d610d6c43052dbc8df47eb68906a982-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8d610d6c43052dbc8df47eb68906a982\") " pod="kube-system/kube-scheduler-localhost" Feb 13 15:21:12.395861 kubelet[2253]: I0213 15:21:12.395768 2253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9553977ee1671afe8ed01e8dfb8f454e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9553977ee1671afe8ed01e8dfb8f454e\") " pod="kube-system/kube-apiserver-localhost" Feb 13 
15:21:12.395861 kubelet[2253]: I0213 15:21:12.395797 2253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9553977ee1671afe8ed01e8dfb8f454e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9553977ee1671afe8ed01e8dfb8f454e\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:21:12.395861 kubelet[2253]: I0213 15:21:12.395813 2253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:21:12.395970 kubelet[2253]: I0213 15:21:12.395828 2253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9553977ee1671afe8ed01e8dfb8f454e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9553977ee1671afe8ed01e8dfb8f454e\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:21:12.395970 kubelet[2253]: I0213 15:21:12.395844 2253 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:21:12.595543 kubelet[2253]: E0213 15:21:12.595433 2253 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.35:6443: connect: connection refused" interval="800ms" Feb 13 15:21:12.644499 kubelet[2253]: E0213 15:21:12.644461 2253 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:21:12.645316 containerd[1442]: time="2025-02-13T15:21:12.645270621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dd3721fb1a67092819e35b40473f4063,Namespace:kube-system,Attempt:0,}" Feb 13 15:21:12.648414 kubelet[2253]: E0213 15:21:12.648378 2253 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:21:12.648898 containerd[1442]: time="2025-02-13T15:21:12.648861007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9553977ee1671afe8ed01e8dfb8f454e,Namespace:kube-system,Attempt:0,}" Feb 13 15:21:12.665657 kubelet[2253]: E0213 15:21:12.665619 2253 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:21:12.666144 containerd[1442]: time="2025-02-13T15:21:12.666099782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8d610d6c43052dbc8df47eb68906a982,Namespace:kube-system,Attempt:0,}" Feb 13 15:21:12.697480 kubelet[2253]: I0213 15:21:12.697455 2253 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:21:12.697825 kubelet[2253]: E0213 15:21:12.697790 2253 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.35:6443/api/v1/nodes\": dial tcp 10.0.0.35:6443: connect: connection refused" node="localhost" Feb 13 15:21:13.046431 kubelet[2253]: W0213 15:21:13.046354 2253 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 
10.0.0.35:6443: connect: connection refused Feb 13 15:21:13.046431 kubelet[2253]: E0213 15:21:13.046416 2253 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Feb 13 15:21:13.146174 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1900220454.mount: Deactivated successfully. Feb 13 15:21:13.151295 containerd[1442]: time="2025-02-13T15:21:13.151249902Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:21:13.153186 containerd[1442]: time="2025-02-13T15:21:13.153133974Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:21:13.154344 containerd[1442]: time="2025-02-13T15:21:13.154206819Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:21:13.155109 containerd[1442]: time="2025-02-13T15:21:13.155080328Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:21:13.155569 containerd[1442]: time="2025-02-13T15:21:13.155489805Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:21:13.156135 containerd[1442]: time="2025-02-13T15:21:13.156097175Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Feb 13 15:21:13.156733 containerd[1442]: time="2025-02-13T15:21:13.156704025Z" level=info msg="ImageCreate event 
name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:21:13.159816 containerd[1442]: time="2025-02-13T15:21:13.159775258Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:21:13.160651 containerd[1442]: time="2025-02-13T15:21:13.160527206Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 511.591625ms" Feb 13 15:21:13.163809 containerd[1442]: time="2025-02-13T15:21:13.163777801Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 497.593958ms" Feb 13 15:21:13.164534 containerd[1442]: time="2025-02-13T15:21:13.164509615Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 519.161338ms" Feb 13 15:21:13.302852 kubelet[2253]: W0213 15:21:13.301930 2253 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://10.0.0.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Feb 13 15:21:13.302852 kubelet[2253]: E0213 15:21:13.301975 2253 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Feb 13 15:21:13.310757 containerd[1442]: time="2025-02-13T15:21:13.310548296Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:21:13.310757 containerd[1442]: time="2025-02-13T15:21:13.310630192Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:21:13.310757 containerd[1442]: time="2025-02-13T15:21:13.310645722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:21:13.311384 containerd[1442]: time="2025-02-13T15:21:13.311327742Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:21:13.312756 containerd[1442]: time="2025-02-13T15:21:13.312673571Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:21:13.312756 containerd[1442]: time="2025-02-13T15:21:13.312730410Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:21:13.312756 containerd[1442]: time="2025-02-13T15:21:13.312741937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:21:13.312865 containerd[1442]: time="2025-02-13T15:21:13.312817588Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:21:13.315246 containerd[1442]: time="2025-02-13T15:21:13.314762101Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:21:13.315246 containerd[1442]: time="2025-02-13T15:21:13.314813616Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:21:13.315246 containerd[1442]: time="2025-02-13T15:21:13.314826585Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:21:13.315246 containerd[1442]: time="2025-02-13T15:21:13.314898033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:21:13.339239 systemd[1]: Started cri-containerd-10c749ec66c59f4a36cabf94b2cd7b55271584972d20f73b181042878d460584.scope - libcontainer container 10c749ec66c59f4a36cabf94b2cd7b55271584972d20f73b181042878d460584. Feb 13 15:21:13.340403 systemd[1]: Started cri-containerd-a25fbc683344b5cfaaa8ce48b0aad5d34b2f7374669c3fd173e0aa55e7d8641b.scope - libcontainer container a25fbc683344b5cfaaa8ce48b0aad5d34b2f7374669c3fd173e0aa55e7d8641b. Feb 13 15:21:13.344367 systemd[1]: Started cri-containerd-28f1ce1b0783fc2dee396261968afa86ec0b83f85af2fd55456d0ae489729b14.scope - libcontainer container 28f1ce1b0783fc2dee396261968afa86ec0b83f85af2fd55456d0ae489729b14. 
Feb 13 15:21:13.375164 containerd[1442]: time="2025-02-13T15:21:13.375010499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9553977ee1671afe8ed01e8dfb8f454e,Namespace:kube-system,Attempt:0,} returns sandbox id \"10c749ec66c59f4a36cabf94b2cd7b55271584972d20f73b181042878d460584\"" Feb 13 15:21:13.382182 containerd[1442]: time="2025-02-13T15:21:13.382010225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8d610d6c43052dbc8df47eb68906a982,Namespace:kube-system,Attempt:0,} returns sandbox id \"a25fbc683344b5cfaaa8ce48b0aad5d34b2f7374669c3fd173e0aa55e7d8641b\"" Feb 13 15:21:13.383055 kubelet[2253]: E0213 15:21:13.382760 2253 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:21:13.383123 kubelet[2253]: E0213 15:21:13.383087 2253 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:21:13.391935 containerd[1442]: time="2025-02-13T15:21:13.389996177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dd3721fb1a67092819e35b40473f4063,Namespace:kube-system,Attempt:0,} returns sandbox id \"28f1ce1b0783fc2dee396261968afa86ec0b83f85af2fd55456d0ae489729b14\"" Feb 13 15:21:13.391935 containerd[1442]: time="2025-02-13T15:21:13.391465089Z" level=info msg="CreateContainer within sandbox \"a25fbc683344b5cfaaa8ce48b0aad5d34b2f7374669c3fd173e0aa55e7d8641b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 15:21:13.392053 kubelet[2253]: E0213 15:21:13.390707 2253 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:21:13.392291 containerd[1442]: 
time="2025-02-13T15:21:13.392265430Z" level=info msg="CreateContainer within sandbox \"10c749ec66c59f4a36cabf94b2cd7b55271584972d20f73b181042878d460584\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 15:21:13.392577 containerd[1442]: time="2025-02-13T15:21:13.392550062Z" level=info msg="CreateContainer within sandbox \"28f1ce1b0783fc2dee396261968afa86ec0b83f85af2fd55456d0ae489729b14\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 15:21:13.396309 kubelet[2253]: E0213 15:21:13.396266 2253 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.35:6443: connect: connection refused" interval="1.6s" Feb 13 15:21:13.408484 containerd[1442]: time="2025-02-13T15:21:13.408408569Z" level=info msg="CreateContainer within sandbox \"a25fbc683344b5cfaaa8ce48b0aad5d34b2f7374669c3fd173e0aa55e7d8641b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2e7066460a450686eefff42181ba5a104c7081868d0a3e30729ba7258f4622b0\"" Feb 13 15:21:13.409399 containerd[1442]: time="2025-02-13T15:21:13.409370258Z" level=info msg="StartContainer for \"2e7066460a450686eefff42181ba5a104c7081868d0a3e30729ba7258f4622b0\"" Feb 13 15:21:13.410386 containerd[1442]: time="2025-02-13T15:21:13.410351121Z" level=info msg="CreateContainer within sandbox \"28f1ce1b0783fc2dee396261968afa86ec0b83f85af2fd55456d0ae489729b14\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"75bbdaae375a6e0294beda2a95b30465b8d19de4d0dc7276c80f008e41010536\"" Feb 13 15:21:13.410791 containerd[1442]: time="2025-02-13T15:21:13.410761398Z" level=info msg="StartContainer for \"75bbdaae375a6e0294beda2a95b30465b8d19de4d0dc7276c80f008e41010536\"" Feb 13 15:21:13.413192 containerd[1442]: time="2025-02-13T15:21:13.413123152Z" level=info msg="CreateContainer within sandbox 
\"10c749ec66c59f4a36cabf94b2cd7b55271584972d20f73b181042878d460584\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"dba2ae6872795c967624c2b1aebaf3de3a8ad81b0ef41118855c42d57d525097\"" Feb 13 15:21:13.414054 containerd[1442]: time="2025-02-13T15:21:13.413562489Z" level=info msg="StartContainer for \"dba2ae6872795c967624c2b1aebaf3de3a8ad81b0ef41118855c42d57d525097\"" Feb 13 15:21:13.433841 kubelet[2253]: W0213 15:21:13.433743 2253 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.35:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Feb 13 15:21:13.433841 kubelet[2253]: E0213 15:21:13.433814 2253 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.35:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Feb 13 15:21:13.434201 systemd[1]: Started cri-containerd-2e7066460a450686eefff42181ba5a104c7081868d0a3e30729ba7258f4622b0.scope - libcontainer container 2e7066460a450686eefff42181ba5a104c7081868d0a3e30729ba7258f4622b0. Feb 13 15:21:13.438571 systemd[1]: Started cri-containerd-75bbdaae375a6e0294beda2a95b30465b8d19de4d0dc7276c80f008e41010536.scope - libcontainer container 75bbdaae375a6e0294beda2a95b30465b8d19de4d0dc7276c80f008e41010536. Feb 13 15:21:13.439447 systemd[1]: Started cri-containerd-dba2ae6872795c967624c2b1aebaf3de3a8ad81b0ef41118855c42d57d525097.scope - libcontainer container dba2ae6872795c967624c2b1aebaf3de3a8ad81b0ef41118855c42d57d525097. 
Feb 13 15:21:13.487790 kubelet[2253]: W0213 15:21:13.487696 2253 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Feb 13 15:21:13.487790 kubelet[2253]: E0213 15:21:13.487772 2253 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Feb 13 15:21:13.505999 kubelet[2253]: I0213 15:21:13.504272 2253 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:21:13.505999 kubelet[2253]: E0213 15:21:13.504705 2253 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.35:6443/api/v1/nodes\": dial tcp 10.0.0.35:6443: connect: connection refused" node="localhost" Feb 13 15:21:13.506321 containerd[1442]: time="2025-02-13T15:21:13.506288415Z" level=info msg="StartContainer for \"2e7066460a450686eefff42181ba5a104c7081868d0a3e30729ba7258f4622b0\" returns successfully" Feb 13 15:21:13.506421 containerd[1442]: time="2025-02-13T15:21:13.506306627Z" level=info msg="StartContainer for \"dba2ae6872795c967624c2b1aebaf3de3a8ad81b0ef41118855c42d57d525097\" returns successfully" Feb 13 15:21:13.506452 containerd[1442]: time="2025-02-13T15:21:13.506310110Z" level=info msg="StartContainer for \"75bbdaae375a6e0294beda2a95b30465b8d19de4d0dc7276c80f008e41010536\" returns successfully" Feb 13 15:21:14.016219 kubelet[2253]: E0213 15:21:14.016183 2253 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:21:14.017799 kubelet[2253]: E0213 15:21:14.017780 2253 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:21:14.019403 kubelet[2253]: E0213 15:21:14.019382 2253 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:21:15.006692 kubelet[2253]: E0213 15:21:15.006641 2253 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Feb 13 15:21:15.023134 kubelet[2253]: E0213 15:21:15.023096 2253 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:21:15.106942 kubelet[2253]: I0213 15:21:15.106803 2253 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:21:15.114929 kubelet[2253]: I0213 15:21:15.114897 2253 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Feb 13 15:21:15.121708 kubelet[2253]: E0213 15:21:15.121675 2253 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:21:15.222302 kubelet[2253]: E0213 15:21:15.222249 2253 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:21:15.322756 kubelet[2253]: E0213 15:21:15.322624 2253 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:21:15.423554 kubelet[2253]: E0213 15:21:15.423496 2253 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:21:15.524442 kubelet[2253]: E0213 15:21:15.524403 2253 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:21:15.625262 kubelet[2253]: 
E0213 15:21:15.624895 2253 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:21:15.725892 kubelet[2253]: E0213 15:21:15.725841 2253 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:21:15.826427 kubelet[2253]: E0213 15:21:15.826396 2253 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:21:15.926977 kubelet[2253]: E0213 15:21:15.926894 2253 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:21:15.983869 kubelet[2253]: I0213 15:21:15.983816 2253 apiserver.go:52] "Watching apiserver" Feb 13 15:21:15.993447 kubelet[2253]: I0213 15:21:15.993417 2253 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 15:21:16.513998 kubelet[2253]: E0213 15:21:16.513941 2253 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:21:16.515909 kubelet[2253]: E0213 15:21:16.515833 2253 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:21:16.903864 systemd[1]: Reloading requested from client PID 2540 ('systemctl') (unit session-7.scope)... Feb 13 15:21:16.904192 systemd[1]: Reloading... Feb 13 15:21:16.923470 kubelet[2253]: E0213 15:21:16.923431 2253 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:21:16.977099 zram_generator::config[2579]: No configuration found. 
Feb 13 15:21:17.024396 kubelet[2253]: E0213 15:21:17.024356 2253 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:21:17.025320 kubelet[2253]: E0213 15:21:17.025286 2253 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:21:17.025703 kubelet[2253]: E0213 15:21:17.025670 2253 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:21:17.066283 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:21:17.137831 systemd[1]: Reloading finished in 233 ms.
Feb 13 15:21:17.178002 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:21:17.192977 systemd[1]: kubelet.service: Deactivated successfully.
Feb 13 15:21:17.193267 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:21:17.193323 systemd[1]: kubelet.service: Consumed 1.411s CPU time, 116.5M memory peak, 0B memory swap peak.
Feb 13 15:21:17.201395 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:21:17.291631 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:21:17.296202 (kubelet)[2621]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 15:21:17.340385 kubelet[2621]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:21:17.340385 kubelet[2621]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 13 15:21:17.340385 kubelet[2621]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:21:17.340711 kubelet[2621]: I0213 15:21:17.340420 2621 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 15:21:17.345626 kubelet[2621]: I0213 15:21:17.344917 2621 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Feb 13 15:21:17.345626 kubelet[2621]: I0213 15:21:17.344954 2621 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 15:21:17.345626 kubelet[2621]: I0213 15:21:17.345300 2621 server.go:927] "Client rotation is on, will bootstrap in background"
Feb 13 15:21:17.347182 kubelet[2621]: I0213 15:21:17.347150 2621 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 13 15:21:17.348539 kubelet[2621]: I0213 15:21:17.348506 2621 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 15:21:17.353675 kubelet[2621]: I0213 15:21:17.353648 2621 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 13 15:21:17.353885 kubelet[2621]: I0213 15:21:17.353847 2621 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 15:21:17.354074 kubelet[2621]: I0213 15:21:17.353878 2621 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Feb 13 15:21:17.354145 kubelet[2621]: I0213 15:21:17.354095 2621 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 15:21:17.354145 kubelet[2621]: I0213 15:21:17.354108 2621 container_manager_linux.go:301] "Creating device plugin manager"
Feb 13 15:21:17.354203 kubelet[2621]: I0213 15:21:17.354149 2621 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:21:17.354271 kubelet[2621]: I0213 15:21:17.354259 2621 kubelet.go:400] "Attempting to sync node with API server"
Feb 13 15:21:17.354303 kubelet[2621]: I0213 15:21:17.354274 2621 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 15:21:17.354335 kubelet[2621]: I0213 15:21:17.354306 2621 kubelet.go:312] "Adding apiserver pod source"
Feb 13 15:21:17.354335 kubelet[2621]: I0213 15:21:17.354322 2621 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 15:21:17.358872 kubelet[2621]: I0213 15:21:17.358804 2621 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Feb 13 15:21:17.359039 kubelet[2621]: I0213 15:21:17.359001 2621 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 13 15:21:17.359438 kubelet[2621]: I0213 15:21:17.359404 2621 server.go:1264] "Started kubelet"
Feb 13 15:21:17.359918 kubelet[2621]: I0213 15:21:17.359761 2621 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 15:21:17.360310 kubelet[2621]: I0213 15:21:17.360251 2621 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 13 15:21:17.360519 kubelet[2621]: I0213 15:21:17.360497 2621 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 13 15:21:17.360708 kubelet[2621]: I0213 15:21:17.360679 2621 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 13 15:21:17.365039 kubelet[2621]: I0213 15:21:17.363798 2621 server.go:455] "Adding debug handlers to kubelet server"
Feb 13 15:21:17.372573 kubelet[2621]: I0213 15:21:17.367135 2621 volume_manager.go:291] "Starting Kubelet Volume Manager"
Feb 13 15:21:17.372573 kubelet[2621]: I0213 15:21:17.367658 2621 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Feb 13 15:21:17.372573 kubelet[2621]: I0213 15:21:17.367831 2621 reconciler.go:26] "Reconciler: start to sync state"
Feb 13 15:21:17.372573 kubelet[2621]: I0213 15:21:17.368823 2621 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 13 15:21:17.372573 kubelet[2621]: I0213 15:21:17.369737 2621 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 13 15:21:17.372573 kubelet[2621]: I0213 15:21:17.369769 2621 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 13 15:21:17.372573 kubelet[2621]: I0213 15:21:17.369785 2621 kubelet.go:2337] "Starting kubelet main sync loop"
Feb 13 15:21:17.372573 kubelet[2621]: E0213 15:21:17.369837 2621 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 13 15:21:17.380307 kubelet[2621]: I0213 15:21:17.380276 2621 factory.go:221] Registration of the containerd container factory successfully
Feb 13 15:21:17.380307 kubelet[2621]: I0213 15:21:17.380302 2621 factory.go:221] Registration of the systemd container factory successfully
Feb 13 15:21:17.382399 kubelet[2621]: I0213 15:21:17.380396 2621 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 13 15:21:17.410274 kubelet[2621]: I0213 15:21:17.410230 2621 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 13 15:21:17.410274 kubelet[2621]: I0213 15:21:17.410250 2621 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 13 15:21:17.410274 kubelet[2621]: I0213 15:21:17.410270 2621 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:21:17.410430 kubelet[2621]: I0213 15:21:17.410413 2621 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Feb 13 15:21:17.410452 kubelet[2621]: I0213 15:21:17.410424 2621 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Feb 13 15:21:17.410452 kubelet[2621]: I0213 15:21:17.410442 2621 policy_none.go:49] "None policy: Start"
Feb 13 15:21:17.411067 kubelet[2621]: I0213 15:21:17.411047 2621 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 13 15:21:17.411140 kubelet[2621]: I0213 15:21:17.411074 2621 state_mem.go:35] "Initializing new in-memory state store"
Feb 13 15:21:17.411226 kubelet[2621]: I0213 15:21:17.411209 2621 state_mem.go:75] "Updated machine memory state"
Feb 13 15:21:17.415138 kubelet[2621]: I0213 15:21:17.414939 2621 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 13 15:21:17.415227 kubelet[2621]: I0213 15:21:17.415186 2621 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 13 15:21:17.415333 kubelet[2621]: I0213 15:21:17.415316 2621 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 15:21:17.471138 kubelet[2621]: I0213 15:21:17.469934 2621 topology_manager.go:215] "Topology Admit Handler" podUID="9553977ee1671afe8ed01e8dfb8f454e" podNamespace="kube-system" podName="kube-apiserver-localhost"
Feb 13 15:21:17.471138 kubelet[2621]: I0213 15:21:17.470665 2621 topology_manager.go:215] "Topology Admit Handler" podUID="dd3721fb1a67092819e35b40473f4063" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Feb 13 15:21:17.471138 kubelet[2621]: I0213 15:21:17.470714 2621 topology_manager.go:215] "Topology Admit Handler" podUID="8d610d6c43052dbc8df47eb68906a982" podNamespace="kube-system" podName="kube-scheduler-localhost"
Feb 13 15:21:17.471138 kubelet[2621]: I0213 15:21:17.470936 2621 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Feb 13 15:21:17.568660 kubelet[2621]: I0213 15:21:17.568607 2621 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:21:17.568660 kubelet[2621]: I0213 15:21:17.568651 2621 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:21:17.568812 kubelet[2621]: I0213 15:21:17.568680 2621 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:21:17.568812 kubelet[2621]: I0213 15:21:17.568701 2621 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d610d6c43052dbc8df47eb68906a982-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8d610d6c43052dbc8df47eb68906a982\") " pod="kube-system/kube-scheduler-localhost"
Feb 13 15:21:17.568812 kubelet[2621]: I0213 15:21:17.568717 2621 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:21:17.568812 kubelet[2621]: I0213 15:21:17.568732 2621 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9553977ee1671afe8ed01e8dfb8f454e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9553977ee1671afe8ed01e8dfb8f454e\") " pod="kube-system/kube-apiserver-localhost"
Feb 13 15:21:17.568812 kubelet[2621]: I0213 15:21:17.568746 2621 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:21:17.568930 kubelet[2621]: I0213 15:21:17.568769 2621 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9553977ee1671afe8ed01e8dfb8f454e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9553977ee1671afe8ed01e8dfb8f454e\") " pod="kube-system/kube-apiserver-localhost"
Feb 13 15:21:17.568930 kubelet[2621]: I0213 15:21:17.568782 2621 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9553977ee1671afe8ed01e8dfb8f454e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9553977ee1671afe8ed01e8dfb8f454e\") " pod="kube-system/kube-apiserver-localhost"
Feb 13 15:21:17.569130 kubelet[2621]: E0213 15:21:17.569096 2621 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Feb 13 15:21:17.569292 kubelet[2621]: E0213 15:21:17.569261 2621 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:21:17.569352 kubelet[2621]: E0213 15:21:17.569327 2621 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Feb 13 15:21:17.575483 kubelet[2621]: I0213 15:21:17.575451 2621 kubelet_node_status.go:112] "Node was previously registered" node="localhost"
Feb 13 15:21:17.575553 kubelet[2621]: I0213 15:21:17.575534 2621 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Feb 13 15:21:17.871307 kubelet[2621]: E0213 15:21:17.871127 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:21:17.871307 kubelet[2621]: E0213 15:21:17.871249 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:21:17.871493 kubelet[2621]: E0213 15:21:17.871454 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:21:18.355219 kubelet[2621]: I0213 15:21:18.354728 2621 apiserver.go:52] "Watching apiserver"
Feb 13 15:21:18.369076 kubelet[2621]: I0213 15:21:18.368137 2621 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Feb 13 15:21:18.398393 kubelet[2621]: E0213 15:21:18.397901 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:21:18.398864 kubelet[2621]: E0213 15:21:18.398829 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:21:18.416827 kubelet[2621]: E0213 15:21:18.416687 2621 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Feb 13 15:21:18.417209 kubelet[2621]: E0213 15:21:18.417168 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:21:18.421474 kubelet[2621]: I0213 15:21:18.421341 2621 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.421326531 podStartE2EDuration="2.421326531s" podCreationTimestamp="2025-02-13 15:21:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:21:18.420202942 +0000 UTC m=+1.120644551" watchObservedRunningTime="2025-02-13 15:21:18.421326531 +0000 UTC m=+1.121768140"
Feb 13 15:21:18.432226 kubelet[2621]: I0213 15:21:18.432149 2621 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.432135377 podStartE2EDuration="2.432135377s" podCreationTimestamp="2025-02-13 15:21:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:21:18.431145533 +0000 UTC m=+1.131587142" watchObservedRunningTime="2025-02-13 15:21:18.432135377 +0000 UTC m=+1.132576986"
Feb 13 15:21:18.444722 kubelet[2621]: I0213 15:21:18.444650 2621 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.444633449 podStartE2EDuration="2.444633449s" podCreationTimestamp="2025-02-13 15:21:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:21:18.444363677 +0000 UTC m=+1.144805286" watchObservedRunningTime="2025-02-13 15:21:18.444633449 +0000 UTC m=+1.145075058"
Feb 13 15:21:19.399508 kubelet[2621]: E0213 15:21:19.399472 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:21:21.244846 kubelet[2621]: E0213 15:21:21.244769 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:21:22.204064 sudo[1624]: pam_unix(sudo:session): session closed for user root
Feb 13 15:21:22.205916 sshd[1623]: Connection closed by 10.0.0.1 port 44690
Feb 13 15:21:22.206308 sshd-session[1621]: pam_unix(sshd:session): session closed for user core
Feb 13 15:21:22.209733 systemd[1]: sshd@6-10.0.0.35:22-10.0.0.1:44690.service: Deactivated successfully.
Feb 13 15:21:22.211338 systemd[1]: session-7.scope: Deactivated successfully.
Feb 13 15:21:22.211543 systemd[1]: session-7.scope: Consumed 7.690s CPU time, 192.8M memory peak, 0B memory swap peak.
Feb 13 15:21:22.212656 systemd-logind[1425]: Session 7 logged out. Waiting for processes to exit.
Feb 13 15:21:22.213458 systemd-logind[1425]: Removed session 7.
Feb 13 15:21:24.122107 kubelet[2621]: E0213 15:21:24.122069 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:21:24.423108 kubelet[2621]: E0213 15:21:24.420878 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:21:25.738876 kubelet[2621]: E0213 15:21:25.737262 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:21:26.422921 kubelet[2621]: E0213 15:21:26.422629 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:21:31.145631 update_engine[1427]: I20250213 15:21:31.145555 1427 update_attempter.cc:509] Updating boot flags... 
Feb 13 15:21:31.171111 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2715) Feb 13 15:21:31.194274 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2717) Feb 13 15:21:31.224242 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2717) Feb 13 15:21:31.253105 kubelet[2621]: E0213 15:21:31.252858 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:21:32.190041 kubelet[2621]: I0213 15:21:32.189961 2621 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 15:21:32.202905 containerd[1442]: time="2025-02-13T15:21:32.202828333Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 15:21:32.203291 kubelet[2621]: I0213 15:21:32.203265 2621 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 15:21:32.225903 kubelet[2621]: I0213 15:21:32.225847 2621 topology_manager.go:215] "Topology Admit Handler" podUID="2d37b283-fc05-4ac6-9f19-963d08483943" podNamespace="kube-system" podName="kube-proxy-f84wg" Feb 13 15:21:32.233496 systemd[1]: Created slice kubepods-besteffort-pod2d37b283_fc05_4ac6_9f19_963d08483943.slice - libcontainer container kubepods-besteffort-pod2d37b283_fc05_4ac6_9f19_963d08483943.slice. 
Feb 13 15:21:32.368545 kubelet[2621]: I0213 15:21:32.368452 2621 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2d37b283-fc05-4ac6-9f19-963d08483943-xtables-lock\") pod \"kube-proxy-f84wg\" (UID: \"2d37b283-fc05-4ac6-9f19-963d08483943\") " pod="kube-system/kube-proxy-f84wg" Feb 13 15:21:32.368545 kubelet[2621]: I0213 15:21:32.368496 2621 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvm2j\" (UniqueName: \"kubernetes.io/projected/2d37b283-fc05-4ac6-9f19-963d08483943-kube-api-access-tvm2j\") pod \"kube-proxy-f84wg\" (UID: \"2d37b283-fc05-4ac6-9f19-963d08483943\") " pod="kube-system/kube-proxy-f84wg" Feb 13 15:21:32.368545 kubelet[2621]: I0213 15:21:32.368520 2621 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2d37b283-fc05-4ac6-9f19-963d08483943-kube-proxy\") pod \"kube-proxy-f84wg\" (UID: \"2d37b283-fc05-4ac6-9f19-963d08483943\") " pod="kube-system/kube-proxy-f84wg" Feb 13 15:21:32.368962 kubelet[2621]: I0213 15:21:32.368563 2621 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2d37b283-fc05-4ac6-9f19-963d08483943-lib-modules\") pod \"kube-proxy-f84wg\" (UID: \"2d37b283-fc05-4ac6-9f19-963d08483943\") " pod="kube-system/kube-proxy-f84wg" Feb 13 15:21:32.480245 kubelet[2621]: E0213 15:21:32.480133 2621 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Feb 13 15:21:32.480245 kubelet[2621]: E0213 15:21:32.480169 2621 projected.go:200] Error preparing data for projected volume kube-api-access-tvm2j for pod kube-system/kube-proxy-f84wg: configmap "kube-root-ca.crt" not found Feb 13 15:21:32.480245 kubelet[2621]: E0213 15:21:32.480231 2621 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2d37b283-fc05-4ac6-9f19-963d08483943-kube-api-access-tvm2j podName:2d37b283-fc05-4ac6-9f19-963d08483943 nodeName:}" failed. No retries permitted until 2025-02-13 15:21:32.980210147 +0000 UTC m=+15.680651756 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-tvm2j" (UniqueName: "kubernetes.io/projected/2d37b283-fc05-4ac6-9f19-963d08483943-kube-api-access-tvm2j") pod "kube-proxy-f84wg" (UID: "2d37b283-fc05-4ac6-9f19-963d08483943") : configmap "kube-root-ca.crt" not found Feb 13 15:21:33.148718 kubelet[2621]: E0213 15:21:33.148523 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:21:33.155060 containerd[1442]: time="2025-02-13T15:21:33.153366803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f84wg,Uid:2d37b283-fc05-4ac6-9f19-963d08483943,Namespace:kube-system,Attempt:0,}" Feb 13 15:21:33.179905 containerd[1442]: time="2025-02-13T15:21:33.179583875Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:21:33.179905 containerd[1442]: time="2025-02-13T15:21:33.179635444Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:21:33.179905 containerd[1442]: time="2025-02-13T15:21:33.179645766Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:21:33.179905 containerd[1442]: time="2025-02-13T15:21:33.179710178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:21:33.197220 systemd[1]: Started cri-containerd-5d012c50c0a605e04af58e6d066b38e0eff8f68a52c3b90a988e9edf431a8e32.scope - libcontainer container 5d012c50c0a605e04af58e6d066b38e0eff8f68a52c3b90a988e9edf431a8e32. Feb 13 15:21:33.217689 containerd[1442]: time="2025-02-13T15:21:33.217646267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f84wg,Uid:2d37b283-fc05-4ac6-9f19-963d08483943,Namespace:kube-system,Attempt:0,} returns sandbox id \"5d012c50c0a605e04af58e6d066b38e0eff8f68a52c3b90a988e9edf431a8e32\"" Feb 13 15:21:33.220875 kubelet[2621]: E0213 15:21:33.220526 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:21:33.227895 containerd[1442]: time="2025-02-13T15:21:33.227653326Z" level=info msg="CreateContainer within sandbox \"5d012c50c0a605e04af58e6d066b38e0eff8f68a52c3b90a988e9edf431a8e32\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 15:21:33.240155 containerd[1442]: time="2025-02-13T15:21:33.240104360Z" level=info msg="CreateContainer within sandbox \"5d012c50c0a605e04af58e6d066b38e0eff8f68a52c3b90a988e9edf431a8e32\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"74b807b62c5140e195ef9850795fb7e50307888a84be992e28a1e135e3fad4ec\"" Feb 13 15:21:33.242845 containerd[1442]: time="2025-02-13T15:21:33.242616626Z" level=info msg="StartContainer for \"74b807b62c5140e195ef9850795fb7e50307888a84be992e28a1e135e3fad4ec\"" Feb 13 15:21:33.270246 systemd[1]: Started cri-containerd-74b807b62c5140e195ef9850795fb7e50307888a84be992e28a1e135e3fad4ec.scope - libcontainer container 74b807b62c5140e195ef9850795fb7e50307888a84be992e28a1e135e3fad4ec. 
Feb 13 15:21:33.302724 kubelet[2621]: I0213 15:21:33.302668 2621 topology_manager.go:215] "Topology Admit Handler" podUID="5add3f63-4eaf-4ea2-9f94-b4cbdab4824c" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-59qg9" Feb 13 15:21:33.316124 systemd[1]: Created slice kubepods-besteffort-pod5add3f63_4eaf_4ea2_9f94_b4cbdab4824c.slice - libcontainer container kubepods-besteffort-pod5add3f63_4eaf_4ea2_9f94_b4cbdab4824c.slice. Feb 13 15:21:33.355366 containerd[1442]: time="2025-02-13T15:21:33.355317167Z" level=info msg="StartContainer for \"74b807b62c5140e195ef9850795fb7e50307888a84be992e28a1e135e3fad4ec\" returns successfully" Feb 13 15:21:33.434350 kubelet[2621]: E0213 15:21:33.434204 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:21:33.447819 kubelet[2621]: I0213 15:21:33.447500 2621 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-f84wg" podStartSLOduration=1.447484372 podStartE2EDuration="1.447484372s" podCreationTimestamp="2025-02-13 15:21:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:21:33.44726009 +0000 UTC m=+16.147701699" watchObservedRunningTime="2025-02-13 15:21:33.447484372 +0000 UTC m=+16.147925981" Feb 13 15:21:33.478328 kubelet[2621]: I0213 15:21:33.478167 2621 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwss2\" (UniqueName: \"kubernetes.io/projected/5add3f63-4eaf-4ea2-9f94-b4cbdab4824c-kube-api-access-vwss2\") pod \"tigera-operator-7bc55997bb-59qg9\" (UID: \"5add3f63-4eaf-4ea2-9f94-b4cbdab4824c\") " pod="tigera-operator/tigera-operator-7bc55997bb-59qg9" Feb 13 15:21:33.478328 kubelet[2621]: I0213 15:21:33.478211 2621 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5add3f63-4eaf-4ea2-9f94-b4cbdab4824c-var-lib-calico\") pod \"tigera-operator-7bc55997bb-59qg9\" (UID: \"5add3f63-4eaf-4ea2-9f94-b4cbdab4824c\") " pod="tigera-operator/tigera-operator-7bc55997bb-59qg9" Feb 13 15:21:33.621421 containerd[1442]: time="2025-02-13T15:21:33.621082027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-59qg9,Uid:5add3f63-4eaf-4ea2-9f94-b4cbdab4824c,Namespace:tigera-operator,Attempt:0,}" Feb 13 15:21:33.650422 containerd[1442]: time="2025-02-13T15:21:33.650326860Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:21:33.650542 containerd[1442]: time="2025-02-13T15:21:33.650409036Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:21:33.650542 containerd[1442]: time="2025-02-13T15:21:33.650426679Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:21:33.650826 containerd[1442]: time="2025-02-13T15:21:33.650793987Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:21:33.669839 systemd[1]: Started cri-containerd-947cac91847aa449381afbb0c4eea7fd100b5ff4d69982c572d8ddcb1767dc7f.scope - libcontainer container 947cac91847aa449381afbb0c4eea7fd100b5ff4d69982c572d8ddcb1767dc7f. 
Feb 13 15:21:33.705462 containerd[1442]: time="2025-02-13T15:21:33.705359366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-59qg9,Uid:5add3f63-4eaf-4ea2-9f94-b4cbdab4824c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"947cac91847aa449381afbb0c4eea7fd100b5ff4d69982c572d8ddcb1767dc7f\""
Feb 13 15:21:33.711962 containerd[1442]: time="2025-02-13T15:21:33.711911943Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\""
Feb 13 15:21:35.167656 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3751966846.mount: Deactivated successfully.
Feb 13 15:21:35.705438 containerd[1442]: time="2025-02-13T15:21:35.705388678Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:21:35.706436 containerd[1442]: time="2025-02-13T15:21:35.706159803Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19124160"
Feb 13 15:21:35.707369 containerd[1442]: time="2025-02-13T15:21:35.707130842Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:21:35.709466 containerd[1442]: time="2025-02-13T15:21:35.709433738Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:21:35.710367 containerd[1442]: time="2025-02-13T15:21:35.710340326Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 1.998382775s"
Feb 13 15:21:35.710416 containerd[1442]: time="2025-02-13T15:21:35.710373732Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\""
Feb 13 15:21:35.730364 containerd[1442]: time="2025-02-13T15:21:35.730320189Z" level=info msg="CreateContainer within sandbox \"947cac91847aa449381afbb0c4eea7fd100b5ff4d69982c572d8ddcb1767dc7f\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Feb 13 15:21:35.749424 containerd[1442]: time="2025-02-13T15:21:35.749290367Z" level=info msg="CreateContainer within sandbox \"947cac91847aa449381afbb0c4eea7fd100b5ff4d69982c572d8ddcb1767dc7f\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"7025b8dbe2919279fda45f37d739154cd2420294cbaea64228690b5b5296d86a\""
Feb 13 15:21:35.749872 containerd[1442]: time="2025-02-13T15:21:35.749826895Z" level=info msg="StartContainer for \"7025b8dbe2919279fda45f37d739154cd2420294cbaea64228690b5b5296d86a\""
Feb 13 15:21:35.778296 systemd[1]: Started cri-containerd-7025b8dbe2919279fda45f37d739154cd2420294cbaea64228690b5b5296d86a.scope - libcontainer container 7025b8dbe2919279fda45f37d739154cd2420294cbaea64228690b5b5296d86a.
Feb 13 15:21:35.850229 containerd[1442]: time="2025-02-13T15:21:35.850182964Z" level=info msg="StartContainer for \"7025b8dbe2919279fda45f37d739154cd2420294cbaea64228690b5b5296d86a\" returns successfully"
Feb 13 15:21:40.105480 kubelet[2621]: I0213 15:21:40.105397 2621 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-59qg9" podStartSLOduration=5.095574348 podStartE2EDuration="7.105379342s" podCreationTimestamp="2025-02-13 15:21:33 +0000 UTC" firstStartedPulling="2025-02-13 15:21:33.706918815 +0000 UTC m=+16.407360424" lastFinishedPulling="2025-02-13 15:21:35.716723809 +0000 UTC m=+18.417165418" observedRunningTime="2025-02-13 15:21:36.464815046 +0000 UTC m=+19.165256655" watchObservedRunningTime="2025-02-13 15:21:40.105379342 +0000 UTC m=+22.805821031"
Feb 13 15:21:40.106468 kubelet[2621]: I0213 15:21:40.105541 2621 topology_manager.go:215] "Topology Admit Handler" podUID="45a2f2e9-b7ab-4303-8b4b-e473e8b9e7fd" podNamespace="calico-system" podName="calico-typha-695f997c9b-4hmlh"
Feb 13 15:21:40.119628 systemd[1]: Created slice kubepods-besteffort-pod45a2f2e9_b7ab_4303_8b4b_e473e8b9e7fd.slice - libcontainer container kubepods-besteffort-pod45a2f2e9_b7ab_4303_8b4b_e473e8b9e7fd.slice.
Feb 13 15:21:40.182211 kubelet[2621]: I0213 15:21:40.182144 2621 topology_manager.go:215] "Topology Admit Handler" podUID="be3818bd-8c59-4cb5-b6c5-92ae943f6a3a" podNamespace="calico-system" podName="calico-node-czdlp"
Feb 13 15:21:40.189223 systemd[1]: Created slice kubepods-besteffort-podbe3818bd_8c59_4cb5_b6c5_92ae943f6a3a.slice - libcontainer container kubepods-besteffort-podbe3818bd_8c59_4cb5_b6c5_92ae943f6a3a.slice.
Feb 13 15:21:40.221621 kubelet[2621]: I0213 15:21:40.221346 2621 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/45a2f2e9-b7ab-4303-8b4b-e473e8b9e7fd-typha-certs\") pod \"calico-typha-695f997c9b-4hmlh\" (UID: \"45a2f2e9-b7ab-4303-8b4b-e473e8b9e7fd\") " pod="calico-system/calico-typha-695f997c9b-4hmlh"
Feb 13 15:21:40.221621 kubelet[2621]: I0213 15:21:40.221393 2621 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45a2f2e9-b7ab-4303-8b4b-e473e8b9e7fd-tigera-ca-bundle\") pod \"calico-typha-695f997c9b-4hmlh\" (UID: \"45a2f2e9-b7ab-4303-8b4b-e473e8b9e7fd\") " pod="calico-system/calico-typha-695f997c9b-4hmlh"
Feb 13 15:21:40.221621 kubelet[2621]: I0213 15:21:40.221414 2621 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxtrk\" (UniqueName: \"kubernetes.io/projected/45a2f2e9-b7ab-4303-8b4b-e473e8b9e7fd-kube-api-access-lxtrk\") pod \"calico-typha-695f997c9b-4hmlh\" (UID: \"45a2f2e9-b7ab-4303-8b4b-e473e8b9e7fd\") " pod="calico-system/calico-typha-695f997c9b-4hmlh"
Feb 13 15:21:40.322053 kubelet[2621]: I0213 15:21:40.321976 2621 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm67q\" (UniqueName: \"kubernetes.io/projected/be3818bd-8c59-4cb5-b6c5-92ae943f6a3a-kube-api-access-wm67q\") pod \"calico-node-czdlp\" (UID: \"be3818bd-8c59-4cb5-b6c5-92ae943f6a3a\") " pod="calico-system/calico-node-czdlp"
Feb 13 15:21:40.322851 kubelet[2621]: I0213 15:21:40.322330 2621 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/be3818bd-8c59-4cb5-b6c5-92ae943f6a3a-var-lib-calico\") pod \"calico-node-czdlp\" (UID: \"be3818bd-8c59-4cb5-b6c5-92ae943f6a3a\") " pod="calico-system/calico-node-czdlp"
Feb 13 15:21:40.322851 kubelet[2621]: I0213 15:21:40.322417 2621 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/be3818bd-8c59-4cb5-b6c5-92ae943f6a3a-var-run-calico\") pod \"calico-node-czdlp\" (UID: \"be3818bd-8c59-4cb5-b6c5-92ae943f6a3a\") " pod="calico-system/calico-node-czdlp"
Feb 13 15:21:40.322851 kubelet[2621]: I0213 15:21:40.322443 2621 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/be3818bd-8c59-4cb5-b6c5-92ae943f6a3a-flexvol-driver-host\") pod \"calico-node-czdlp\" (UID: \"be3818bd-8c59-4cb5-b6c5-92ae943f6a3a\") " pod="calico-system/calico-node-czdlp"
Feb 13 15:21:40.322851 kubelet[2621]: I0213 15:21:40.322465 2621 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be3818bd-8c59-4cb5-b6c5-92ae943f6a3a-tigera-ca-bundle\") pod \"calico-node-czdlp\" (UID: \"be3818bd-8c59-4cb5-b6c5-92ae943f6a3a\") " pod="calico-system/calico-node-czdlp"
Feb 13 15:21:40.322851 kubelet[2621]: I0213 15:21:40.322480 2621 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/be3818bd-8c59-4cb5-b6c5-92ae943f6a3a-cni-bin-dir\") pod \"calico-node-czdlp\" (UID: \"be3818bd-8c59-4cb5-b6c5-92ae943f6a3a\") " pod="calico-system/calico-node-czdlp"
Feb 13 15:21:40.323044 kubelet[2621]: I0213 15:21:40.322785 2621 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/be3818bd-8c59-4cb5-b6c5-92ae943f6a3a-cni-log-dir\") pod \"calico-node-czdlp\" (UID: \"be3818bd-8c59-4cb5-b6c5-92ae943f6a3a\") " pod="calico-system/calico-node-czdlp"
Feb 13 15:21:40.323044 kubelet[2621]: I0213 15:21:40.322811 2621 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/be3818bd-8c59-4cb5-b6c5-92ae943f6a3a-lib-modules\") pod \"calico-node-czdlp\" (UID: \"be3818bd-8c59-4cb5-b6c5-92ae943f6a3a\") " pod="calico-system/calico-node-czdlp"
Feb 13 15:21:40.323099 kubelet[2621]: I0213 15:21:40.323068 2621 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/be3818bd-8c59-4cb5-b6c5-92ae943f6a3a-xtables-lock\") pod \"calico-node-czdlp\" (UID: \"be3818bd-8c59-4cb5-b6c5-92ae943f6a3a\") " pod="calico-system/calico-node-czdlp"
Feb 13 15:21:40.323125 kubelet[2621]: I0213 15:21:40.323097 2621 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/be3818bd-8c59-4cb5-b6c5-92ae943f6a3a-cni-net-dir\") pod \"calico-node-czdlp\" (UID: \"be3818bd-8c59-4cb5-b6c5-92ae943f6a3a\") " pod="calico-system/calico-node-czdlp"
Feb 13 15:21:40.324115 kubelet[2621]: I0213 15:21:40.323307 2621 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/be3818bd-8c59-4cb5-b6c5-92ae943f6a3a-policysync\") pod \"calico-node-czdlp\" (UID: \"be3818bd-8c59-4cb5-b6c5-92ae943f6a3a\") " pod="calico-system/calico-node-czdlp"
Feb 13 15:21:40.324210 kubelet[2621]: I0213 15:21:40.324128 2621 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/be3818bd-8c59-4cb5-b6c5-92ae943f6a3a-node-certs\") pod \"calico-node-czdlp\" (UID: \"be3818bd-8c59-4cb5-b6c5-92ae943f6a3a\") " pod="calico-system/calico-node-czdlp"
Feb 13 15:21:40.350346 kubelet[2621]: I0213 15:21:40.350039 2621 topology_manager.go:215] "Topology Admit Handler" podUID="14b2995f-bdfb-4265-9dc0-06ae16e4bb6c" podNamespace="calico-system" podName="csi-node-driver-8vvjv"
Feb 13 15:21:40.350953 kubelet[2621]: E0213 15:21:40.350914 2621 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8vvjv" podUID="14b2995f-bdfb-4265-9dc0-06ae16e4bb6c"
Feb 13 15:21:40.426591 kubelet[2621]: I0213 15:21:40.425254 2621 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9k27\" (UniqueName: \"kubernetes.io/projected/14b2995f-bdfb-4265-9dc0-06ae16e4bb6c-kube-api-access-z9k27\") pod \"csi-node-driver-8vvjv\" (UID: \"14b2995f-bdfb-4265-9dc0-06ae16e4bb6c\") " pod="calico-system/csi-node-driver-8vvjv"
Feb 13 15:21:40.426591 kubelet[2621]: I0213 15:21:40.425360 2621 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/14b2995f-bdfb-4265-9dc0-06ae16e4bb6c-varrun\") pod \"csi-node-driver-8vvjv\" (UID: \"14b2995f-bdfb-4265-9dc0-06ae16e4bb6c\") " pod="calico-system/csi-node-driver-8vvjv"
Feb 13 15:21:40.426591 kubelet[2621]: I0213 15:21:40.426004 2621 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/14b2995f-bdfb-4265-9dc0-06ae16e4bb6c-socket-dir\") pod \"csi-node-driver-8vvjv\" (UID: \"14b2995f-bdfb-4265-9dc0-06ae16e4bb6c\") " pod="calico-system/csi-node-driver-8vvjv"
Feb 13 15:21:40.426591 kubelet[2621]: I0213 15:21:40.426057 2621 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/14b2995f-bdfb-4265-9dc0-06ae16e4bb6c-kubelet-dir\") pod \"csi-node-driver-8vvjv\" (UID: \"14b2995f-bdfb-4265-9dc0-06ae16e4bb6c\") " pod="calico-system/csi-node-driver-8vvjv"
Feb 13 15:21:40.426591 kubelet[2621]: I0213 15:21:40.426097 2621 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/14b2995f-bdfb-4265-9dc0-06ae16e4bb6c-registration-dir\") pod \"csi-node-driver-8vvjv\" (UID: \"14b2995f-bdfb-4265-9dc0-06ae16e4bb6c\") " pod="calico-system/csi-node-driver-8vvjv"
Feb 13 15:21:40.430585 kubelet[2621]: E0213 15:21:40.426833 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:21:40.430824 kubelet[2621]: E0213 15:21:40.430167 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:21:40.430899 kubelet[2621]: W0213 15:21:40.430881 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:21:40.430966 kubelet[2621]: E0213 15:21:40.430954 2621 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:21:40.434883 containerd[1442]: time="2025-02-13T15:21:40.433839709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-695f997c9b-4hmlh,Uid:45a2f2e9-b7ab-4303-8b4b-e473e8b9e7fd,Namespace:calico-system,Attempt:0,}"
Feb 13 15:21:40.449697 kubelet[2621]: E0213 15:21:40.449594 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:21:40.452576 kubelet[2621]: W0213 15:21:40.449622 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:21:40.452576 kubelet[2621]: E0213 15:21:40.450961 2621 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:21:40.467344 containerd[1442]: time="2025-02-13T15:21:40.467195654Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:21:40.467344 containerd[1442]: time="2025-02-13T15:21:40.467305948Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:21:40.468410 containerd[1442]: time="2025-02-13T15:21:40.468329829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:21:40.468639 containerd[1442]: time="2025-02-13T15:21:40.468605101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:21:40.488261 systemd[1]: Started cri-containerd-eb630df38d625d6913a077a762dc6810a16178166d0a462b2a2207d75270d04c.scope - libcontainer container eb630df38d625d6913a077a762dc6810a16178166d0a462b2a2207d75270d04c.
Feb 13 15:21:40.494075 kubelet[2621]: E0213 15:21:40.492324 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:21:40.494990 containerd[1442]: time="2025-02-13T15:21:40.494952097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-czdlp,Uid:be3818bd-8c59-4cb5-b6c5-92ae943f6a3a,Namespace:calico-system,Attempt:0,}"
Feb 13 15:21:40.519854 containerd[1442]: time="2025-02-13T15:21:40.519727948Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:21:40.519854 containerd[1442]: time="2025-02-13T15:21:40.519811197Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:21:40.519854 containerd[1442]: time="2025-02-13T15:21:40.519823519Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:21:40.520541 containerd[1442]: time="2025-02-13T15:21:40.520482117Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:21:40.524656 containerd[1442]: time="2025-02-13T15:21:40.524622927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-695f997c9b-4hmlh,Uid:45a2f2e9-b7ab-4303-8b4b-e473e8b9e7fd,Namespace:calico-system,Attempt:0,} returns sandbox id \"eb630df38d625d6913a077a762dc6810a16178166d0a462b2a2207d75270d04c\""
Feb 13 15:21:40.526346 kubelet[2621]: E0213 15:21:40.525438 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:21:40.526435 containerd[1442]: time="2025-02-13T15:21:40.526065817Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\""
Feb 13 15:21:40.526894 kubelet[2621]: E0213 15:21:40.526870 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:21:40.526894 kubelet[2621]: W0213 15:21:40.526887 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:21:40.527367 kubelet[2621]: E0213 15:21:40.526904 2621 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:21:40.527367 kubelet[2621]: E0213 15:21:40.527111 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:21:40.527367 kubelet[2621]: W0213 15:21:40.527121 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:21:40.527367 kubelet[2621]: E0213 15:21:40.527130 2621 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:21:40.527626 kubelet[2621]: E0213 15:21:40.527611 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:21:40.527899 kubelet[2621]: W0213 15:21:40.527625 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:21:40.527899 kubelet[2621]: E0213 15:21:40.527749 2621 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:21:40.528108 kubelet[2621]: E0213 15:21:40.528092 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:21:40.528263 kubelet[2621]: W0213 15:21:40.528168 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:21:40.528263 kubelet[2621]: E0213 15:21:40.528196 2621 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:21:40.528590 kubelet[2621]: E0213 15:21:40.528522 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:21:40.528590 kubelet[2621]: W0213 15:21:40.528535 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:21:40.528765 kubelet[2621]: E0213 15:21:40.528701 2621 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:21:40.529092 kubelet[2621]: E0213 15:21:40.528979 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:21:40.529092 kubelet[2621]: W0213 15:21:40.528992 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:21:40.529092 kubelet[2621]: E0213 15:21:40.529052 2621 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:21:40.529354 kubelet[2621]: E0213 15:21:40.529274 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:21:40.529354 kubelet[2621]: W0213 15:21:40.529300 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:21:40.529354 kubelet[2621]: E0213 15:21:40.529332 2621 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:21:40.530060 kubelet[2621]: E0213 15:21:40.529602 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:21:40.530060 kubelet[2621]: W0213 15:21:40.529966 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:21:40.530060 kubelet[2621]: E0213 15:21:40.530018 2621 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:21:40.530914 kubelet[2621]: E0213 15:21:40.530851 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:21:40.530914 kubelet[2621]: W0213 15:21:40.530914 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:21:40.531293 kubelet[2621]: E0213 15:21:40.531191 2621 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:21:40.531353 kubelet[2621]: E0213 15:21:40.531307 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:21:40.531353 kubelet[2621]: W0213 15:21:40.531316 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:21:40.531476 kubelet[2621]: E0213 15:21:40.531432 2621 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:21:40.531692 kubelet[2621]: E0213 15:21:40.531661 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:21:40.531741 kubelet[2621]: W0213 15:21:40.531692 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:21:40.531898 kubelet[2621]: E0213 15:21:40.531782 2621 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:21:40.532084 kubelet[2621]: E0213 15:21:40.532057 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:21:40.532084 kubelet[2621]: W0213 15:21:40.532071 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:21:40.532166 kubelet[2621]: E0213 15:21:40.532107 2621 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:21:40.532283 kubelet[2621]: E0213 15:21:40.532269 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:21:40.532283 kubelet[2621]: W0213 15:21:40.532282 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:21:40.532567 kubelet[2621]: E0213 15:21:40.532551 2621 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:21:40.532782 kubelet[2621]: E0213 15:21:40.532766 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:21:40.532782 kubelet[2621]: W0213 15:21:40.532780 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:21:40.532871 kubelet[2621]: E0213 15:21:40.532850 2621 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:21:40.534060 kubelet[2621]: E0213 15:21:40.533110 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:21:40.534060 kubelet[2621]: W0213 15:21:40.533125 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:21:40.534060 kubelet[2621]: E0213 15:21:40.533177 2621 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:21:40.534060 kubelet[2621]: E0213 15:21:40.533345 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:21:40.534060 kubelet[2621]: W0213 15:21:40.533357 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:21:40.534060 kubelet[2621]: E0213 15:21:40.533566 2621 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:21:40.534060 kubelet[2621]: E0213 15:21:40.533761 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:21:40.534060 kubelet[2621]: W0213 15:21:40.533771 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:21:40.534060 kubelet[2621]: E0213 15:21:40.533859 2621 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:21:40.534060 kubelet[2621]: E0213 15:21:40.533939 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:21:40.534330 kubelet[2621]: W0213 15:21:40.533954 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:21:40.534330 kubelet[2621]: E0213 15:21:40.534005 2621 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:21:40.534330 kubelet[2621]: E0213 15:21:40.534133 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:21:40.534330 kubelet[2621]: W0213 15:21:40.534140 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:21:40.534330 kubelet[2621]: E0213 15:21:40.534155 2621 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:21:40.534688 kubelet[2621]: E0213 15:21:40.534614 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:21:40.534688 kubelet[2621]: W0213 15:21:40.534628 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:21:40.534688 kubelet[2621]: E0213 15:21:40.534645 2621 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:21:40.534844 kubelet[2621]: E0213 15:21:40.534824 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:21:40.534844 kubelet[2621]: W0213 15:21:40.534837 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:21:40.534844 kubelet[2621]: E0213 15:21:40.534850 2621 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:21:40.535290 kubelet[2621]: E0213 15:21:40.535151 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:21:40.535290 kubelet[2621]: W0213 15:21:40.535167 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:21:40.535290 kubelet[2621]: E0213 15:21:40.535200 2621 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:21:40.535472 kubelet[2621]: E0213 15:21:40.535436 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:21:40.535472 kubelet[2621]: W0213 15:21:40.535451 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:21:40.535538 kubelet[2621]: E0213 15:21:40.535522 2621 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:21:40.535883 kubelet[2621]: E0213 15:21:40.535860 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:21:40.535883 kubelet[2621]: W0213 15:21:40.535874 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:21:40.536008 kubelet[2621]: E0213 15:21:40.535889 2621 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:21:40.536400 kubelet[2621]: E0213 15:21:40.536376 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:21:40.536400 kubelet[2621]: W0213 15:21:40.536394 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:21:40.536492 kubelet[2621]: E0213 15:21:40.536408 2621 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:21:40.544766 kubelet[2621]: E0213 15:21:40.544739 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:21:40.544766 kubelet[2621]: W0213 15:21:40.544759 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:21:40.544874 kubelet[2621]: E0213 15:21:40.544777 2621 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:21:40.553250 systemd[1]: Started cri-containerd-c875a564d6b96d77ac8b4ddf50ea95930a6ff3c37798b08421eeb3727e921e06.scope - libcontainer container c875a564d6b96d77ac8b4ddf50ea95930a6ff3c37798b08421eeb3727e921e06.
Feb 13 15:21:40.574359 containerd[1442]: time="2025-02-13T15:21:40.574236034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-czdlp,Uid:be3818bd-8c59-4cb5-b6c5-92ae943f6a3a,Namespace:calico-system,Attempt:0,} returns sandbox id \"c875a564d6b96d77ac8b4ddf50ea95930a6ff3c37798b08421eeb3727e921e06\""
Feb 13 15:21:40.574903 kubelet[2621]: E0213 15:21:40.574883 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:21:41.678680 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3812848608.mount: Deactivated successfully.
Feb 13 15:21:42.088065 containerd[1442]: time="2025-02-13T15:21:42.087936540Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:21:42.089401 containerd[1442]: time="2025-02-13T15:21:42.089181629Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29231308" Feb 13 15:21:42.090108 containerd[1442]: time="2025-02-13T15:21:42.090079362Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:21:42.092576 containerd[1442]: time="2025-02-13T15:21:42.092527297Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:21:42.093133 containerd[1442]: time="2025-02-13T15:21:42.093099876Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 1.567006096s" Feb 13 15:21:42.093184 containerd[1442]: time="2025-02-13T15:21:42.093135480Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\"" Feb 13 15:21:42.101047 containerd[1442]: time="2025-02-13T15:21:42.100912488Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Feb 13 15:21:42.114770 containerd[1442]: time="2025-02-13T15:21:42.114612713Z" level=info msg="CreateContainer within sandbox \"eb630df38d625d6913a077a762dc6810a16178166d0a462b2a2207d75270d04c\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Feb 13 15:21:42.134060 containerd[1442]: time="2025-02-13T15:21:42.133979366Z" level=info msg="CreateContainer within sandbox \"eb630df38d625d6913a077a762dc6810a16178166d0a462b2a2207d75270d04c\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"1cb581e4b443aa6fc4c0bee114e26c3ee10c99b59670c37c24ec9301b4802afc\"" Feb 13 15:21:42.135320 containerd[1442]: time="2025-02-13T15:21:42.135285662Z" level=info msg="StartContainer for \"1cb581e4b443aa6fc4c0bee114e26c3ee10c99b59670c37c24ec9301b4802afc\"" Feb 13 15:21:42.159208 systemd[1]: Started cri-containerd-1cb581e4b443aa6fc4c0bee114e26c3ee10c99b59670c37c24ec9301b4802afc.scope - libcontainer container 1cb581e4b443aa6fc4c0bee114e26c3ee10c99b59670c37c24ec9301b4802afc. Feb 13 15:21:42.195048 containerd[1442]: time="2025-02-13T15:21:42.194625350Z" level=info msg="StartContainer for \"1cb581e4b443aa6fc4c0bee114e26c3ee10c99b59670c37c24ec9301b4802afc\" returns successfully" Feb 13 15:21:42.370191 kubelet[2621]: E0213 15:21:42.370077 2621 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8vvjv" podUID="14b2995f-bdfb-4265-9dc0-06ae16e4bb6c" Feb 13 15:21:42.463610 kubelet[2621]: E0213 15:21:42.463454 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:21:42.536816 kubelet[2621]: E0213 15:21:42.536764 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:42.536816 kubelet[2621]: W0213 15:21:42.536792 2621 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:42.536816 kubelet[2621]: E0213 15:21:42.536814 2621 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:21:42.537002 kubelet[2621]: E0213 15:21:42.536969 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:42.537002 kubelet[2621]: W0213 15:21:42.536979 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:42.537002 kubelet[2621]: E0213 15:21:42.536987 2621 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:21:42.537164 kubelet[2621]: E0213 15:21:42.537136 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:42.537164 kubelet[2621]: W0213 15:21:42.537154 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:42.537164 kubelet[2621]: E0213 15:21:42.537164 2621 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:21:42.537323 kubelet[2621]: E0213 15:21:42.537310 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:42.537323 kubelet[2621]: W0213 15:21:42.537321 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:42.537382 kubelet[2621]: E0213 15:21:42.537329 2621 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:21:42.537498 kubelet[2621]: E0213 15:21:42.537487 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:42.537498 kubelet[2621]: W0213 15:21:42.537496 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:42.537554 kubelet[2621]: E0213 15:21:42.537504 2621 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:21:42.537655 kubelet[2621]: E0213 15:21:42.537633 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:42.537655 kubelet[2621]: W0213 15:21:42.537650 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:42.537709 kubelet[2621]: E0213 15:21:42.537658 2621 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:21:42.537795 kubelet[2621]: E0213 15:21:42.537784 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:42.537817 kubelet[2621]: W0213 15:21:42.537798 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:42.537817 kubelet[2621]: E0213 15:21:42.537806 2621 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:21:42.537944 kubelet[2621]: E0213 15:21:42.537934 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:42.537967 kubelet[2621]: W0213 15:21:42.537947 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:42.537967 kubelet[2621]: E0213 15:21:42.537955 2621 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:21:42.538112 kubelet[2621]: E0213 15:21:42.538101 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:42.538140 kubelet[2621]: W0213 15:21:42.538112 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:42.538140 kubelet[2621]: E0213 15:21:42.538119 2621 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:21:42.538250 kubelet[2621]: E0213 15:21:42.538241 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:42.538279 kubelet[2621]: W0213 15:21:42.538257 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:42.538279 kubelet[2621]: E0213 15:21:42.538265 2621 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:21:42.538399 kubelet[2621]: E0213 15:21:42.538389 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:42.538431 kubelet[2621]: W0213 15:21:42.538398 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:42.538431 kubelet[2621]: E0213 15:21:42.538410 2621 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:21:42.538560 kubelet[2621]: E0213 15:21:42.538545 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:42.538584 kubelet[2621]: W0213 15:21:42.538559 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:42.538584 kubelet[2621]: E0213 15:21:42.538567 2621 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:21:42.538711 kubelet[2621]: E0213 15:21:42.538701 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:42.538736 kubelet[2621]: W0213 15:21:42.538714 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:42.538736 kubelet[2621]: E0213 15:21:42.538724 2621 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:21:42.538864 kubelet[2621]: E0213 15:21:42.538852 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:42.538889 kubelet[2621]: W0213 15:21:42.538866 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:42.538889 kubelet[2621]: E0213 15:21:42.538875 2621 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:21:42.539009 kubelet[2621]: E0213 15:21:42.538999 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:42.539043 kubelet[2621]: W0213 15:21:42.539012 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:42.539043 kubelet[2621]: E0213 15:21:42.539020 2621 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:21:42.546414 kubelet[2621]: E0213 15:21:42.546386 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:42.546414 kubelet[2621]: W0213 15:21:42.546404 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:42.546499 kubelet[2621]: E0213 15:21:42.546418 2621 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:21:42.546696 kubelet[2621]: E0213 15:21:42.546677 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:42.546696 kubelet[2621]: W0213 15:21:42.546690 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:42.546797 kubelet[2621]: E0213 15:21:42.546703 2621 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:21:42.546911 kubelet[2621]: E0213 15:21:42.546886 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:42.546911 kubelet[2621]: W0213 15:21:42.546901 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:42.546911 kubelet[2621]: E0213 15:21:42.546916 2621 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:21:42.547144 kubelet[2621]: E0213 15:21:42.547133 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:42.547144 kubelet[2621]: W0213 15:21:42.547144 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:42.547208 kubelet[2621]: E0213 15:21:42.547162 2621 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:21:42.547318 kubelet[2621]: E0213 15:21:42.547306 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:42.547318 kubelet[2621]: W0213 15:21:42.547317 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:42.547370 kubelet[2621]: E0213 15:21:42.547328 2621 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:21:42.547953 kubelet[2621]: E0213 15:21:42.547530 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:42.547953 kubelet[2621]: W0213 15:21:42.547542 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:42.547953 kubelet[2621]: E0213 15:21:42.547556 2621 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:21:42.547953 kubelet[2621]: E0213 15:21:42.547807 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:42.547953 kubelet[2621]: W0213 15:21:42.547818 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:42.547953 kubelet[2621]: E0213 15:21:42.547835 2621 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:21:42.548157 kubelet[2621]: E0213 15:21:42.548012 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:42.548157 kubelet[2621]: W0213 15:21:42.548020 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:42.548157 kubelet[2621]: E0213 15:21:42.548040 2621 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:21:42.548241 kubelet[2621]: E0213 15:21:42.548220 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:42.548241 kubelet[2621]: W0213 15:21:42.548230 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:42.548304 kubelet[2621]: E0213 15:21:42.548242 2621 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:21:42.548390 kubelet[2621]: E0213 15:21:42.548379 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:42.548390 kubelet[2621]: W0213 15:21:42.548388 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:42.548461 kubelet[2621]: E0213 15:21:42.548400 2621 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:21:42.548829 kubelet[2621]: E0213 15:21:42.548574 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:42.548829 kubelet[2621]: W0213 15:21:42.548584 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:42.548829 kubelet[2621]: E0213 15:21:42.548593 2621 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:21:42.549665 kubelet[2621]: E0213 15:21:42.548917 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:42.549665 kubelet[2621]: W0213 15:21:42.548929 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:42.549665 kubelet[2621]: E0213 15:21:42.548942 2621 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:21:42.549665 kubelet[2621]: E0213 15:21:42.549122 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:42.549665 kubelet[2621]: W0213 15:21:42.549130 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:42.549665 kubelet[2621]: E0213 15:21:42.549142 2621 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:21:42.549665 kubelet[2621]: E0213 15:21:42.549291 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:42.549665 kubelet[2621]: W0213 15:21:42.549299 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:42.549665 kubelet[2621]: E0213 15:21:42.549306 2621 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:21:42.549665 kubelet[2621]: E0213 15:21:42.549451 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:42.549897 kubelet[2621]: W0213 15:21:42.549458 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:42.549897 kubelet[2621]: E0213 15:21:42.549465 2621 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:21:42.549897 kubelet[2621]: E0213 15:21:42.549626 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:42.549897 kubelet[2621]: W0213 15:21:42.549632 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:42.549897 kubelet[2621]: E0213 15:21:42.549639 2621 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:21:42.549897 kubelet[2621]: E0213 15:21:42.549783 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:42.549897 kubelet[2621]: W0213 15:21:42.549789 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:42.549897 kubelet[2621]: E0213 15:21:42.549797 2621 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:21:42.550066 kubelet[2621]: E0213 15:21:42.550046 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:42.550066 kubelet[2621]: W0213 15:21:42.550054 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:42.550066 kubelet[2621]: E0213 15:21:42.550062 2621 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:21:43.464853 kubelet[2621]: I0213 15:21:43.464826 2621 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 15:21:43.466660 kubelet[2621]: E0213 15:21:43.466236 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:21:43.546807 kubelet[2621]: E0213 15:21:43.546777 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:43.546807 kubelet[2621]: W0213 15:21:43.546801 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:43.546973 kubelet[2621]: E0213 15:21:43.546823 2621 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:21:43.547071 kubelet[2621]: E0213 15:21:43.547059 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:43.547258 kubelet[2621]: W0213 15:21:43.547071 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:43.547258 kubelet[2621]: E0213 15:21:43.547080 2621 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:21:43.547258 kubelet[2621]: E0213 15:21:43.547255 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:43.547351 kubelet[2621]: W0213 15:21:43.547268 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:43.547351 kubelet[2621]: E0213 15:21:43.547279 2621 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:21:43.547447 kubelet[2621]: E0213 15:21:43.547436 2621 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:21:43.547447 kubelet[2621]: W0213 15:21:43.547447 2621 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:21:43.547503 kubelet[2621]: E0213 15:21:43.547455 2621 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:21:43.736282 containerd[1442]: time="2025-02-13T15:21:43.736183779Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:21:43.738808 containerd[1442]: time="2025-02-13T15:21:43.737935870Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5117811" Feb 13 15:21:43.739532 containerd[1442]: time="2025-02-13T15:21:43.739501663Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:21:43.741559 containerd[1442]: time="2025-02-13T15:21:43.741517979Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:21:43.742119 containerd[1442]: time="2025-02-13T15:21:43.742047231Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.641100019s" Feb 13 15:21:43.742119 containerd[1442]: time="2025-02-13T15:21:43.742082194Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Feb 13 15:21:43.744443 containerd[1442]: time="2025-02-13T15:21:43.744420782Z" level=info msg="CreateContainer within sandbox \"c875a564d6b96d77ac8b4ddf50ea95930a6ff3c37798b08421eeb3727e921e06\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 13 15:21:43.756086 containerd[1442]: time="2025-02-13T15:21:43.755993390Z" level=info msg="CreateContainer within sandbox \"c875a564d6b96d77ac8b4ddf50ea95930a6ff3c37798b08421eeb3727e921e06\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d928cb4df6e0bf6e8e4cb9d06a590bd1f576afef23826ac7ff2d2fb0b9897fd2\"" Feb 13 15:21:43.756530 containerd[1442]: time="2025-02-13T15:21:43.756502159Z" level=info msg="StartContainer for \"d928cb4df6e0bf6e8e4cb9d06a590bd1f576afef23826ac7ff2d2fb0b9897fd2\"" Feb 13 15:21:43.796211 systemd[1]: Started cri-containerd-d928cb4df6e0bf6e8e4cb9d06a590bd1f576afef23826ac7ff2d2fb0b9897fd2.scope - libcontainer container d928cb4df6e0bf6e8e4cb9d06a590bd1f576afef23826ac7ff2d2fb0b9897fd2. Feb 13 15:21:43.820514 containerd[1442]: time="2025-02-13T15:21:43.820395546Z" level=info msg="StartContainer for \"d928cb4df6e0bf6e8e4cb9d06a590bd1f576afef23826ac7ff2d2fb0b9897fd2\" returns successfully" Feb 13 15:21:43.898232 systemd[1]: cri-containerd-d928cb4df6e0bf6e8e4cb9d06a590bd1f576afef23826ac7ff2d2fb0b9897fd2.scope: Deactivated successfully. 
Feb 13 15:21:43.925259 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d928cb4df6e0bf6e8e4cb9d06a590bd1f576afef23826ac7ff2d2fb0b9897fd2-rootfs.mount: Deactivated successfully. Feb 13 15:21:43.945721 containerd[1442]: time="2025-02-13T15:21:43.936585389Z" level=info msg="shim disconnected" id=d928cb4df6e0bf6e8e4cb9d06a590bd1f576afef23826ac7ff2d2fb0b9897fd2 namespace=k8s.io Feb 13 15:21:43.945721 containerd[1442]: time="2025-02-13T15:21:43.945722560Z" level=warning msg="cleaning up after shim disconnected" id=d928cb4df6e0bf6e8e4cb9d06a590bd1f576afef23826ac7ff2d2fb0b9897fd2 namespace=k8s.io Feb 13 15:21:43.945942 containerd[1442]: time="2025-02-13T15:21:43.945737441Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:21:44.370452 kubelet[2621]: E0213 15:21:44.370406 2621 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8vvjv" podUID="14b2995f-bdfb-4265-9dc0-06ae16e4bb6c" Feb 13 15:21:44.468848 kubelet[2621]: E0213 15:21:44.468811 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:21:44.471172 containerd[1442]: time="2025-02-13T15:21:44.471113259Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Feb 13 15:21:44.489721 kubelet[2621]: I0213 15:21:44.489304 2621 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-695f997c9b-4hmlh" podStartSLOduration=2.914632145 podStartE2EDuration="4.489284599s" podCreationTimestamp="2025-02-13 15:21:40 +0000 UTC" firstStartedPulling="2025-02-13 15:21:40.525878795 +0000 UTC m=+23.226320404" lastFinishedPulling="2025-02-13 15:21:42.100531249 +0000 UTC m=+24.800972858" 
observedRunningTime="2025-02-13 15:21:42.475567074 +0000 UTC m=+25.176008683" watchObservedRunningTime="2025-02-13 15:21:44.489284599 +0000 UTC m=+27.189726208" Feb 13 15:21:45.555228 systemd[1]: Started sshd@7-10.0.0.35:22-10.0.0.1:33776.service - OpenSSH per-connection server daemon (10.0.0.1:33776). Feb 13 15:21:45.602742 sshd[3333]: Accepted publickey for core from 10.0.0.1 port 33776 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY Feb 13 15:21:45.603706 sshd-session[3333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:21:45.614873 systemd-logind[1425]: New session 8 of user core. Feb 13 15:21:45.630190 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 15:21:45.760456 sshd[3335]: Connection closed by 10.0.0.1 port 33776 Feb 13 15:21:45.760783 sshd-session[3333]: pam_unix(sshd:session): session closed for user core Feb 13 15:21:45.764249 systemd[1]: sshd@7-10.0.0.35:22-10.0.0.1:33776.service: Deactivated successfully. Feb 13 15:21:45.768419 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 15:21:45.769145 systemd-logind[1425]: Session 8 logged out. Waiting for processes to exit. Feb 13 15:21:45.769936 systemd-logind[1425]: Removed session 8. 
Feb 13 15:21:46.370082 kubelet[2621]: E0213 15:21:46.370020 2621 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8vvjv" podUID="14b2995f-bdfb-4265-9dc0-06ae16e4bb6c" Feb 13 15:21:48.371060 kubelet[2621]: E0213 15:21:48.370989 2621 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8vvjv" podUID="14b2995f-bdfb-4265-9dc0-06ae16e4bb6c" Feb 13 15:21:49.401383 containerd[1442]: time="2025-02-13T15:21:49.401330143Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:21:49.403200 containerd[1442]: time="2025-02-13T15:21:49.403155304Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Feb 13 15:21:49.404102 containerd[1442]: time="2025-02-13T15:21:49.404069245Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:21:49.407180 containerd[1442]: time="2025-02-13T15:21:49.407141368Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:21:49.407641 containerd[1442]: time="2025-02-13T15:21:49.407602078Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo 
digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 4.936442055s" Feb 13 15:21:49.407683 containerd[1442]: time="2025-02-13T15:21:49.407639721Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Feb 13 15:21:49.410775 containerd[1442]: time="2025-02-13T15:21:49.410737166Z" level=info msg="CreateContainer within sandbox \"c875a564d6b96d77ac8b4ddf50ea95930a6ff3c37798b08421eeb3727e921e06\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 15:21:49.475317 containerd[1442]: time="2025-02-13T15:21:49.475277716Z" level=info msg="CreateContainer within sandbox \"c875a564d6b96d77ac8b4ddf50ea95930a6ff3c37798b08421eeb3727e921e06\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"82949e3c41ef0738e34d8a3423431fcb4fa092628f7871b93f3d9fbf77ed8d34\"" Feb 13 15:21:49.480576 containerd[1442]: time="2025-02-13T15:21:49.480536224Z" level=info msg="StartContainer for \"82949e3c41ef0738e34d8a3423431fcb4fa092628f7871b93f3d9fbf77ed8d34\"" Feb 13 15:21:49.524459 systemd[1]: Started cri-containerd-82949e3c41ef0738e34d8a3423431fcb4fa092628f7871b93f3d9fbf77ed8d34.scope - libcontainer container 82949e3c41ef0738e34d8a3423431fcb4fa092628f7871b93f3d9fbf77ed8d34. Feb 13 15:21:49.657838 containerd[1442]: time="2025-02-13T15:21:49.657097947Z" level=info msg="StartContainer for \"82949e3c41ef0738e34d8a3423431fcb4fa092628f7871b93f3d9fbf77ed8d34\" returns successfully" Feb 13 15:21:50.147904 systemd[1]: cri-containerd-82949e3c41ef0738e34d8a3423431fcb4fa092628f7871b93f3d9fbf77ed8d34.scope: Deactivated successfully. 
Feb 13 15:21:50.168758 kubelet[2621]: I0213 15:21:50.168719 2621 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 15:21:50.188580 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-82949e3c41ef0738e34d8a3423431fcb4fa092628f7871b93f3d9fbf77ed8d34-rootfs.mount: Deactivated successfully. Feb 13 15:21:50.210274 containerd[1442]: time="2025-02-13T15:21:50.209923332Z" level=info msg="shim disconnected" id=82949e3c41ef0738e34d8a3423431fcb4fa092628f7871b93f3d9fbf77ed8d34 namespace=k8s.io Feb 13 15:21:50.210274 containerd[1442]: time="2025-02-13T15:21:50.209978615Z" level=warning msg="cleaning up after shim disconnected" id=82949e3c41ef0738e34d8a3423431fcb4fa092628f7871b93f3d9fbf77ed8d34 namespace=k8s.io Feb 13 15:21:50.210274 containerd[1442]: time="2025-02-13T15:21:50.209987536Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:21:50.213688 kubelet[2621]: I0213 15:21:50.213272 2621 topology_manager.go:215] "Topology Admit Handler" podUID="c285582e-9191-432e-99ea-cc7fe7db7fbb" podNamespace="kube-system" podName="coredns-7db6d8ff4d-qhltz" Feb 13 15:21:50.215013 kubelet[2621]: I0213 15:21:50.214081 2621 topology_manager.go:215] "Topology Admit Handler" podUID="de1246bc-b473-496c-be26-bb64afe860ad" podNamespace="calico-apiserver" podName="calico-apiserver-564fc96ccb-dqvv5" Feb 13 15:21:50.222263 kubelet[2621]: I0213 15:21:50.222072 2621 topology_manager.go:215] "Topology Admit Handler" podUID="87c71f20-054e-44ba-a99e-c4fefdae6457" podNamespace="calico-system" podName="calico-kube-controllers-658db9fb4b-xcbwf" Feb 13 15:21:50.223851 kubelet[2621]: I0213 15:21:50.223625 2621 topology_manager.go:215] "Topology Admit Handler" podUID="84c3b521-a067-4602-ad8f-cbca4249dad7" podNamespace="calico-apiserver" podName="calico-apiserver-564fc96ccb-fw9ln" Feb 13 15:21:50.225007 kubelet[2621]: I0213 15:21:50.224612 2621 topology_manager.go:215] "Topology Admit Handler" podUID="deed100f-387a-4ac5-9252-5efbd4c9fe2b" 
podNamespace="kube-system" podName="coredns-7db6d8ff4d-2hpnn" Feb 13 15:21:50.231943 systemd[1]: Created slice kubepods-burstable-podc285582e_9191_432e_99ea_cc7fe7db7fbb.slice - libcontainer container kubepods-burstable-podc285582e_9191_432e_99ea_cc7fe7db7fbb.slice. Feb 13 15:21:50.240661 systemd[1]: Created slice kubepods-besteffort-podde1246bc_b473_496c_be26_bb64afe860ad.slice - libcontainer container kubepods-besteffort-podde1246bc_b473_496c_be26_bb64afe860ad.slice. Feb 13 15:21:50.252037 systemd[1]: Created slice kubepods-besteffort-pod87c71f20_054e_44ba_a99e_c4fefdae6457.slice - libcontainer container kubepods-besteffort-pod87c71f20_054e_44ba_a99e_c4fefdae6457.slice. Feb 13 15:21:50.258847 systemd[1]: Created slice kubepods-besteffort-pod84c3b521_a067_4602_ad8f_cbca4249dad7.slice - libcontainer container kubepods-besteffort-pod84c3b521_a067_4602_ad8f_cbca4249dad7.slice. Feb 13 15:21:50.267128 systemd[1]: Created slice kubepods-burstable-poddeed100f_387a_4ac5_9252_5efbd4c9fe2b.slice - libcontainer container kubepods-burstable-poddeed100f_387a_4ac5_9252_5efbd4c9fe2b.slice. 
Feb 13 15:21:50.303947 kubelet[2621]: I0213 15:21:50.303619 2621 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87c71f20-054e-44ba-a99e-c4fefdae6457-tigera-ca-bundle\") pod \"calico-kube-controllers-658db9fb4b-xcbwf\" (UID: \"87c71f20-054e-44ba-a99e-c4fefdae6457\") " pod="calico-system/calico-kube-controllers-658db9fb4b-xcbwf" Feb 13 15:21:50.303947 kubelet[2621]: I0213 15:21:50.303668 2621 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/84c3b521-a067-4602-ad8f-cbca4249dad7-calico-apiserver-certs\") pod \"calico-apiserver-564fc96ccb-fw9ln\" (UID: \"84c3b521-a067-4602-ad8f-cbca4249dad7\") " pod="calico-apiserver/calico-apiserver-564fc96ccb-fw9ln" Feb 13 15:21:50.303947 kubelet[2621]: I0213 15:21:50.303685 2621 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6jwj\" (UniqueName: \"kubernetes.io/projected/deed100f-387a-4ac5-9252-5efbd4c9fe2b-kube-api-access-n6jwj\") pod \"coredns-7db6d8ff4d-2hpnn\" (UID: \"deed100f-387a-4ac5-9252-5efbd4c9fe2b\") " pod="kube-system/coredns-7db6d8ff4d-2hpnn" Feb 13 15:21:50.303947 kubelet[2621]: I0213 15:21:50.303704 2621 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcqmr\" (UniqueName: \"kubernetes.io/projected/de1246bc-b473-496c-be26-bb64afe860ad-kube-api-access-qcqmr\") pod \"calico-apiserver-564fc96ccb-dqvv5\" (UID: \"de1246bc-b473-496c-be26-bb64afe860ad\") " pod="calico-apiserver/calico-apiserver-564fc96ccb-dqvv5" Feb 13 15:21:50.303947 kubelet[2621]: I0213 15:21:50.303722 2621 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxgs5\" (UniqueName: 
\"kubernetes.io/projected/87c71f20-054e-44ba-a99e-c4fefdae6457-kube-api-access-mxgs5\") pod \"calico-kube-controllers-658db9fb4b-xcbwf\" (UID: \"87c71f20-054e-44ba-a99e-c4fefdae6457\") " pod="calico-system/calico-kube-controllers-658db9fb4b-xcbwf" Feb 13 15:21:50.304221 kubelet[2621]: I0213 15:21:50.303739 2621 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/deed100f-387a-4ac5-9252-5efbd4c9fe2b-config-volume\") pod \"coredns-7db6d8ff4d-2hpnn\" (UID: \"deed100f-387a-4ac5-9252-5efbd4c9fe2b\") " pod="kube-system/coredns-7db6d8ff4d-2hpnn" Feb 13 15:21:50.304221 kubelet[2621]: I0213 15:21:50.303761 2621 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpkbv\" (UniqueName: \"kubernetes.io/projected/84c3b521-a067-4602-ad8f-cbca4249dad7-kube-api-access-rpkbv\") pod \"calico-apiserver-564fc96ccb-fw9ln\" (UID: \"84c3b521-a067-4602-ad8f-cbca4249dad7\") " pod="calico-apiserver/calico-apiserver-564fc96ccb-fw9ln" Feb 13 15:21:50.304221 kubelet[2621]: I0213 15:21:50.303778 2621 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxs5p\" (UniqueName: \"kubernetes.io/projected/c285582e-9191-432e-99ea-cc7fe7db7fbb-kube-api-access-jxs5p\") pod \"coredns-7db6d8ff4d-qhltz\" (UID: \"c285582e-9191-432e-99ea-cc7fe7db7fbb\") " pod="kube-system/coredns-7db6d8ff4d-qhltz" Feb 13 15:21:50.304221 kubelet[2621]: I0213 15:21:50.303796 2621 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c285582e-9191-432e-99ea-cc7fe7db7fbb-config-volume\") pod \"coredns-7db6d8ff4d-qhltz\" (UID: \"c285582e-9191-432e-99ea-cc7fe7db7fbb\") " pod="kube-system/coredns-7db6d8ff4d-qhltz" Feb 13 15:21:50.304221 kubelet[2621]: I0213 15:21:50.303813 2621 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/de1246bc-b473-496c-be26-bb64afe860ad-calico-apiserver-certs\") pod \"calico-apiserver-564fc96ccb-dqvv5\" (UID: \"de1246bc-b473-496c-be26-bb64afe860ad\") " pod="calico-apiserver/calico-apiserver-564fc96ccb-dqvv5" Feb 13 15:21:50.376147 systemd[1]: Created slice kubepods-besteffort-pod14b2995f_bdfb_4265_9dc0_06ae16e4bb6c.slice - libcontainer container kubepods-besteffort-pod14b2995f_bdfb_4265_9dc0_06ae16e4bb6c.slice. Feb 13 15:21:50.378567 containerd[1442]: time="2025-02-13T15:21:50.378529610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8vvjv,Uid:14b2995f-bdfb-4265-9dc0-06ae16e4bb6c,Namespace:calico-system,Attempt:0,}" Feb 13 15:21:50.480128 kubelet[2621]: E0213 15:21:50.478900 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:21:50.486463 containerd[1442]: time="2025-02-13T15:21:50.486423414Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Feb 13 15:21:50.537959 kubelet[2621]: E0213 15:21:50.537918 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:21:50.539228 containerd[1442]: time="2025-02-13T15:21:50.538715610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qhltz,Uid:c285582e-9191-432e-99ea-cc7fe7db7fbb,Namespace:kube-system,Attempt:0,}" Feb 13 15:21:50.548812 containerd[1442]: time="2025-02-13T15:21:50.548765452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-564fc96ccb-dqvv5,Uid:de1246bc-b473-496c-be26-bb64afe860ad,Namespace:calico-apiserver,Attempt:0,}" Feb 13 15:21:50.559679 containerd[1442]: time="2025-02-13T15:21:50.559424685Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-658db9fb4b-xcbwf,Uid:87c71f20-054e-44ba-a99e-c4fefdae6457,Namespace:calico-system,Attempt:0,}" Feb 13 15:21:50.571318 kubelet[2621]: E0213 15:21:50.570990 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:21:50.573086 containerd[1442]: time="2025-02-13T15:21:50.573009751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-564fc96ccb-fw9ln,Uid:84c3b521-a067-4602-ad8f-cbca4249dad7,Namespace:calico-apiserver,Attempt:0,}" Feb 13 15:21:50.573373 containerd[1442]: time="2025-02-13T15:21:50.573339608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2hpnn,Uid:deed100f-387a-4ac5-9252-5efbd4c9fe2b,Namespace:kube-system,Attempt:0,}" Feb 13 15:21:50.652846 containerd[1442]: time="2025-02-13T15:21:50.652780214Z" level=error msg="Failed to destroy network for sandbox \"d96ae196292a605b5cd4d41b835a3232ce63bc7c919a65833fa2c6f5c8cba484\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:50.663794 containerd[1442]: time="2025-02-13T15:21:50.663642498Z" level=error msg="encountered an error cleaning up failed sandbox \"d96ae196292a605b5cd4d41b835a3232ce63bc7c919a65833fa2c6f5c8cba484\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:50.663794 containerd[1442]: time="2025-02-13T15:21:50.663743744Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8vvjv,Uid:14b2995f-bdfb-4265-9dc0-06ae16e4bb6c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to 
setup network for sandbox \"d96ae196292a605b5cd4d41b835a3232ce63bc7c919a65833fa2c6f5c8cba484\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:50.666851 kubelet[2621]: E0213 15:21:50.666788 2621 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d96ae196292a605b5cd4d41b835a3232ce63bc7c919a65833fa2c6f5c8cba484\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:50.667206 kubelet[2621]: E0213 15:21:50.666882 2621 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d96ae196292a605b5cd4d41b835a3232ce63bc7c919a65833fa2c6f5c8cba484\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8vvjv" Feb 13 15:21:50.667284 kubelet[2621]: E0213 15:21:50.667253 2621 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d96ae196292a605b5cd4d41b835a3232ce63bc7c919a65833fa2c6f5c8cba484\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8vvjv" Feb 13 15:21:50.667848 kubelet[2621]: E0213 15:21:50.667343 2621 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-8vvjv_calico-system(14b2995f-bdfb-4265-9dc0-06ae16e4bb6c)\" with CreatePodSandboxError: \"Failed to create sandbox for 
pod \\\"csi-node-driver-8vvjv_calico-system(14b2995f-bdfb-4265-9dc0-06ae16e4bb6c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d96ae196292a605b5cd4d41b835a3232ce63bc7c919a65833fa2c6f5c8cba484\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8vvjv" podUID="14b2995f-bdfb-4265-9dc0-06ae16e4bb6c" Feb 13 15:21:50.675286 containerd[1442]: time="2025-02-13T15:21:50.675194178Z" level=error msg="Failed to destroy network for sandbox \"b21e1a8f17e86fe71cf0ca5f7bf255d3e92b462acaa3367bcccf912c971932fc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:50.675607 containerd[1442]: time="2025-02-13T15:21:50.675580038Z" level=error msg="encountered an error cleaning up failed sandbox \"b21e1a8f17e86fe71cf0ca5f7bf255d3e92b462acaa3367bcccf912c971932fc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:50.675699 containerd[1442]: time="2025-02-13T15:21:50.675679084Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-564fc96ccb-dqvv5,Uid:de1246bc-b473-496c-be26-bb64afe860ad,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b21e1a8f17e86fe71cf0ca5f7bf255d3e92b462acaa3367bcccf912c971932fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:50.675889 kubelet[2621]: E0213 15:21:50.675863 2621 remote_runtime.go:193] "RunPodSandbox 
from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b21e1a8f17e86fe71cf0ca5f7bf255d3e92b462acaa3367bcccf912c971932fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:50.675946 kubelet[2621]: E0213 15:21:50.675910 2621 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b21e1a8f17e86fe71cf0ca5f7bf255d3e92b462acaa3367bcccf912c971932fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-564fc96ccb-dqvv5" Feb 13 15:21:50.675946 kubelet[2621]: E0213 15:21:50.675929 2621 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b21e1a8f17e86fe71cf0ca5f7bf255d3e92b462acaa3367bcccf912c971932fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-564fc96ccb-dqvv5" Feb 13 15:21:50.676008 kubelet[2621]: E0213 15:21:50.675971 2621 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-564fc96ccb-dqvv5_calico-apiserver(de1246bc-b473-496c-be26-bb64afe860ad)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-564fc96ccb-dqvv5_calico-apiserver(de1246bc-b473-496c-be26-bb64afe860ad)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b21e1a8f17e86fe71cf0ca5f7bf255d3e92b462acaa3367bcccf912c971932fc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-564fc96ccb-dqvv5" podUID="de1246bc-b473-496c-be26-bb64afe860ad" Feb 13 15:21:50.679952 containerd[1442]: time="2025-02-13T15:21:50.679484281Z" level=error msg="Failed to destroy network for sandbox \"74aa2dc63f90f0fe66425516e5fd4cd00e009a8fcb748a38adc86fdedb44b077\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:50.679952 containerd[1442]: time="2025-02-13T15:21:50.679501082Z" level=error msg="Failed to destroy network for sandbox \"c4baf65475d91bc2c15306315ac291509c501cd40c636754fa7bb756cbcea258\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:50.680101 containerd[1442]: time="2025-02-13T15:21:50.679982867Z" level=error msg="encountered an error cleaning up failed sandbox \"74aa2dc63f90f0fe66425516e5fd4cd00e009a8fcb748a38adc86fdedb44b077\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:50.680170 containerd[1442]: time="2025-02-13T15:21:50.680139875Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qhltz,Uid:c285582e-9191-432e-99ea-cc7fe7db7fbb,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"74aa2dc63f90f0fe66425516e5fd4cd00e009a8fcb748a38adc86fdedb44b077\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:50.680250 
containerd[1442]: time="2025-02-13T15:21:50.680225240Z" level=error msg="encountered an error cleaning up failed sandbox \"c4baf65475d91bc2c15306315ac291509c501cd40c636754fa7bb756cbcea258\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:50.680340 containerd[1442]: time="2025-02-13T15:21:50.680317964Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-658db9fb4b-xcbwf,Uid:87c71f20-054e-44ba-a99e-c4fefdae6457,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c4baf65475d91bc2c15306315ac291509c501cd40c636754fa7bb756cbcea258\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:50.680484 kubelet[2621]: E0213 15:21:50.680412 2621 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74aa2dc63f90f0fe66425516e5fd4cd00e009a8fcb748a38adc86fdedb44b077\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:50.680484 kubelet[2621]: E0213 15:21:50.680445 2621 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4baf65475d91bc2c15306315ac291509c501cd40c636754fa7bb756cbcea258\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:50.680484 kubelet[2621]: E0213 15:21:50.680460 2621 kuberuntime_sandbox.go:72] "Failed to create 
sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74aa2dc63f90f0fe66425516e5fd4cd00e009a8fcb748a38adc86fdedb44b077\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-qhltz" Feb 13 15:21:50.680484 kubelet[2621]: E0213 15:21:50.680480 2621 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74aa2dc63f90f0fe66425516e5fd4cd00e009a8fcb748a38adc86fdedb44b077\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-qhltz" Feb 13 15:21:50.680645 kubelet[2621]: E0213 15:21:50.680501 2621 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4baf65475d91bc2c15306315ac291509c501cd40c636754fa7bb756cbcea258\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-658db9fb4b-xcbwf" Feb 13 15:21:50.680645 kubelet[2621]: E0213 15:21:50.680627 2621 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4baf65475d91bc2c15306315ac291509c501cd40c636754fa7bb756cbcea258\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-658db9fb4b-xcbwf" Feb 13 15:21:50.680768 kubelet[2621]: E0213 15:21:50.680665 2621 pod_workers.go:1298] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-658db9fb4b-xcbwf_calico-system(87c71f20-054e-44ba-a99e-c4fefdae6457)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-658db9fb4b-xcbwf_calico-system(87c71f20-054e-44ba-a99e-c4fefdae6457)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c4baf65475d91bc2c15306315ac291509c501cd40c636754fa7bb756cbcea258\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-658db9fb4b-xcbwf" podUID="87c71f20-054e-44ba-a99e-c4fefdae6457" Feb 13 15:21:50.680768 kubelet[2621]: E0213 15:21:50.680526 2621 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-qhltz_kube-system(c285582e-9191-432e-99ea-cc7fe7db7fbb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-qhltz_kube-system(c285582e-9191-432e-99ea-cc7fe7db7fbb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"74aa2dc63f90f0fe66425516e5fd4cd00e009a8fcb748a38adc86fdedb44b077\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-qhltz" podUID="c285582e-9191-432e-99ea-cc7fe7db7fbb" Feb 13 15:21:50.703782 containerd[1442]: time="2025-02-13T15:21:50.703726860Z" level=error msg="Failed to destroy network for sandbox \"892160d9b3fdcfef5dfbb8a15226ee03d30c00255d1c8fffacad07d070348325\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:50.704139 containerd[1442]: 
time="2025-02-13T15:21:50.704112800Z" level=error msg="encountered an error cleaning up failed sandbox \"892160d9b3fdcfef5dfbb8a15226ee03d30c00255d1c8fffacad07d070348325\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:50.704191 containerd[1442]: time="2025-02-13T15:21:50.704172443Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-564fc96ccb-fw9ln,Uid:84c3b521-a067-4602-ad8f-cbca4249dad7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"892160d9b3fdcfef5dfbb8a15226ee03d30c00255d1c8fffacad07d070348325\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:50.704675 kubelet[2621]: E0213 15:21:50.704637 2621 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"892160d9b3fdcfef5dfbb8a15226ee03d30c00255d1c8fffacad07d070348325\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:50.704740 kubelet[2621]: E0213 15:21:50.704703 2621 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"892160d9b3fdcfef5dfbb8a15226ee03d30c00255d1c8fffacad07d070348325\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-564fc96ccb-fw9ln" Feb 13 15:21:50.704740 kubelet[2621]: E0213 15:21:50.704725 2621 
kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"892160d9b3fdcfef5dfbb8a15226ee03d30c00255d1c8fffacad07d070348325\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-564fc96ccb-fw9ln" Feb 13 15:21:50.704793 kubelet[2621]: E0213 15:21:50.704765 2621 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-564fc96ccb-fw9ln_calico-apiserver(84c3b521-a067-4602-ad8f-cbca4249dad7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-564fc96ccb-fw9ln_calico-apiserver(84c3b521-a067-4602-ad8f-cbca4249dad7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"892160d9b3fdcfef5dfbb8a15226ee03d30c00255d1c8fffacad07d070348325\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-564fc96ccb-fw9ln" podUID="84c3b521-a067-4602-ad8f-cbca4249dad7" Feb 13 15:21:50.714885 containerd[1442]: time="2025-02-13T15:21:50.714842078Z" level=error msg="Failed to destroy network for sandbox \"f77271c961d7b4a7b309d3aeaa83ef8ef8c4ac2d6b4080280f4c6b5d24455417\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:50.715188 containerd[1442]: time="2025-02-13T15:21:50.715163694Z" level=error msg="encountered an error cleaning up failed sandbox \"f77271c961d7b4a7b309d3aeaa83ef8ef8c4ac2d6b4080280f4c6b5d24455417\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:50.715242 containerd[1442]: time="2025-02-13T15:21:50.715221737Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2hpnn,Uid:deed100f-387a-4ac5-9252-5efbd4c9fe2b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f77271c961d7b4a7b309d3aeaa83ef8ef8c4ac2d6b4080280f4c6b5d24455417\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:50.715431 kubelet[2621]: E0213 15:21:50.715397 2621 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f77271c961d7b4a7b309d3aeaa83ef8ef8c4ac2d6b4080280f4c6b5d24455417\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:50.715499 kubelet[2621]: E0213 15:21:50.715449 2621 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f77271c961d7b4a7b309d3aeaa83ef8ef8c4ac2d6b4080280f4c6b5d24455417\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-2hpnn" Feb 13 15:21:50.715499 kubelet[2621]: E0213 15:21:50.715467 2621 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f77271c961d7b4a7b309d3aeaa83ef8ef8c4ac2d6b4080280f4c6b5d24455417\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-2hpnn" Feb 13 15:21:50.715611 kubelet[2621]: E0213 15:21:50.715516 2621 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-2hpnn_kube-system(deed100f-387a-4ac5-9252-5efbd4c9fe2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-2hpnn_kube-system(deed100f-387a-4ac5-9252-5efbd4c9fe2b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f77271c961d7b4a7b309d3aeaa83ef8ef8c4ac2d6b4080280f4c6b5d24455417\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-2hpnn" podUID="deed100f-387a-4ac5-9252-5efbd4c9fe2b" Feb 13 15:21:50.771541 systemd[1]: Started sshd@8-10.0.0.35:22-10.0.0.1:33786.service - OpenSSH per-connection server daemon (10.0.0.1:33786). Feb 13 15:21:50.820765 sshd[3644]: Accepted publickey for core from 10.0.0.1 port 33786 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY Feb 13 15:21:50.822331 sshd-session[3644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:21:50.826074 systemd-logind[1425]: New session 9 of user core. Feb 13 15:21:50.834176 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 15:21:50.944231 sshd[3646]: Connection closed by 10.0.0.1 port 33786 Feb 13 15:21:50.944567 sshd-session[3644]: pam_unix(sshd:session): session closed for user core Feb 13 15:21:50.947697 systemd[1]: sshd@8-10.0.0.35:22-10.0.0.1:33786.service: Deactivated successfully. Feb 13 15:21:50.949345 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 15:21:50.949932 systemd-logind[1425]: Session 9 logged out. Waiting for processes to exit. 
Feb 13 15:21:50.951017 systemd-logind[1425]: Removed session 9. Feb 13 15:21:51.462792 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-74aa2dc63f90f0fe66425516e5fd4cd00e009a8fcb748a38adc86fdedb44b077-shm.mount: Deactivated successfully. Feb 13 15:21:51.462884 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b21e1a8f17e86fe71cf0ca5f7bf255d3e92b462acaa3367bcccf912c971932fc-shm.mount: Deactivated successfully. Feb 13 15:21:51.462937 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d96ae196292a605b5cd4d41b835a3232ce63bc7c919a65833fa2c6f5c8cba484-shm.mount: Deactivated successfully. Feb 13 15:21:51.485752 kubelet[2621]: I0213 15:21:51.485716 2621 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b21e1a8f17e86fe71cf0ca5f7bf255d3e92b462acaa3367bcccf912c971932fc" Feb 13 15:21:51.486831 kubelet[2621]: I0213 15:21:51.486804 2621 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f77271c961d7b4a7b309d3aeaa83ef8ef8c4ac2d6b4080280f4c6b5d24455417" Feb 13 15:21:51.487506 containerd[1442]: time="2025-02-13T15:21:51.487411424Z" level=info msg="StopPodSandbox for \"f77271c961d7b4a7b309d3aeaa83ef8ef8c4ac2d6b4080280f4c6b5d24455417\"" Feb 13 15:21:51.487506 containerd[1442]: time="2025-02-13T15:21:51.487473587Z" level=info msg="StopPodSandbox for \"b21e1a8f17e86fe71cf0ca5f7bf255d3e92b462acaa3367bcccf912c971932fc\"" Feb 13 15:21:51.487758 containerd[1442]: time="2025-02-13T15:21:51.487620234Z" level=info msg="Ensure that sandbox b21e1a8f17e86fe71cf0ca5f7bf255d3e92b462acaa3367bcccf912c971932fc in task-service has been cleanup successfully" Feb 13 15:21:51.488434 containerd[1442]: time="2025-02-13T15:21:51.488265627Z" level=info msg="Ensure that sandbox f77271c961d7b4a7b309d3aeaa83ef8ef8c4ac2d6b4080280f4c6b5d24455417 in task-service has been cleanup successfully" Feb 13 15:21:51.489519 kubelet[2621]: I0213 15:21:51.488800 2621 pod_container_deletor.go:80] "Container not found in 
pod's containers" containerID="892160d9b3fdcfef5dfbb8a15226ee03d30c00255d1c8fffacad07d070348325" Feb 13 15:21:51.489587 containerd[1442]: time="2025-02-13T15:21:51.488810814Z" level=info msg="TearDown network for sandbox \"b21e1a8f17e86fe71cf0ca5f7bf255d3e92b462acaa3367bcccf912c971932fc\" successfully" Feb 13 15:21:51.489587 containerd[1442]: time="2025-02-13T15:21:51.488839576Z" level=info msg="StopPodSandbox for \"b21e1a8f17e86fe71cf0ca5f7bf255d3e92b462acaa3367bcccf912c971932fc\" returns successfully" Feb 13 15:21:51.489587 containerd[1442]: time="2025-02-13T15:21:51.489112150Z" level=info msg="TearDown network for sandbox \"f77271c961d7b4a7b309d3aeaa83ef8ef8c4ac2d6b4080280f4c6b5d24455417\" successfully" Feb 13 15:21:51.489587 containerd[1442]: time="2025-02-13T15:21:51.489161272Z" level=info msg="StopPodSandbox for \"f77271c961d7b4a7b309d3aeaa83ef8ef8c4ac2d6b4080280f4c6b5d24455417\" returns successfully" Feb 13 15:21:51.489587 containerd[1442]: time="2025-02-13T15:21:51.489261477Z" level=info msg="StopPodSandbox for \"892160d9b3fdcfef5dfbb8a15226ee03d30c00255d1c8fffacad07d070348325\"" Feb 13 15:21:51.489587 containerd[1442]: time="2025-02-13T15:21:51.489388204Z" level=info msg="Ensure that sandbox 892160d9b3fdcfef5dfbb8a15226ee03d30c00255d1c8fffacad07d070348325 in task-service has been cleanup successfully" Feb 13 15:21:51.489759 containerd[1442]: time="2025-02-13T15:21:51.489739541Z" level=info msg="TearDown network for sandbox \"892160d9b3fdcfef5dfbb8a15226ee03d30c00255d1c8fffacad07d070348325\" successfully" Feb 13 15:21:51.489825 containerd[1442]: time="2025-02-13T15:21:51.489811345Z" level=info msg="StopPodSandbox for \"892160d9b3fdcfef5dfbb8a15226ee03d30c00255d1c8fffacad07d070348325\" returns successfully" Feb 13 15:21:51.490006 systemd[1]: run-netns-cni\x2dc56c6c0c\x2d8423\x2dc3c5\x2d776d\x2da846034490fa.mount: Deactivated successfully. 
Feb 13 15:21:51.490113 systemd[1]: run-netns-cni\x2dc2781ff2\x2d1262\x2d7f3e\x2d0d72\x2d0e39b47001fd.mount: Deactivated successfully. Feb 13 15:21:51.490484 kubelet[2621]: E0213 15:21:51.490252 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:21:51.491175 containerd[1442]: time="2025-02-13T15:21:51.490779914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-564fc96ccb-fw9ln,Uid:84c3b521-a067-4602-ad8f-cbca4249dad7,Namespace:calico-apiserver,Attempt:1,}" Feb 13 15:21:51.491175 containerd[1442]: time="2025-02-13T15:21:51.491164893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2hpnn,Uid:deed100f-387a-4ac5-9252-5efbd4c9fe2b,Namespace:kube-system,Attempt:1,}" Feb 13 15:21:51.492405 containerd[1442]: time="2025-02-13T15:21:51.492379595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-564fc96ccb-dqvv5,Uid:de1246bc-b473-496c-be26-bb64afe860ad,Namespace:calico-apiserver,Attempt:1,}" Feb 13 15:21:51.493533 kubelet[2621]: I0213 15:21:51.493198 2621 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74aa2dc63f90f0fe66425516e5fd4cd00e009a8fcb748a38adc86fdedb44b077" Feb 13 15:21:51.493740 containerd[1442]: time="2025-02-13T15:21:51.493717982Z" level=info msg="StopPodSandbox for \"74aa2dc63f90f0fe66425516e5fd4cd00e009a8fcb748a38adc86fdedb44b077\"" Feb 13 15:21:51.493897 containerd[1442]: time="2025-02-13T15:21:51.493849189Z" level=info msg="Ensure that sandbox 74aa2dc63f90f0fe66425516e5fd4cd00e009a8fcb748a38adc86fdedb44b077 in task-service has been cleanup successfully" Feb 13 15:21:51.494532 systemd[1]: run-netns-cni\x2d20f11954\x2d486e\x2d8d2b\x2dc2e2\x2ddd8fad673565.mount: Deactivated successfully. 
Feb 13 15:21:51.495697 kubelet[2621]: I0213 15:21:51.494919 2621 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d96ae196292a605b5cd4d41b835a3232ce63bc7c919a65833fa2c6f5c8cba484" Feb 13 15:21:51.495776 containerd[1442]: time="2025-02-13T15:21:51.495341064Z" level=info msg="TearDown network for sandbox \"74aa2dc63f90f0fe66425516e5fd4cd00e009a8fcb748a38adc86fdedb44b077\" successfully" Feb 13 15:21:51.495776 containerd[1442]: time="2025-02-13T15:21:51.495362865Z" level=info msg="StopPodSandbox for \"74aa2dc63f90f0fe66425516e5fd4cd00e009a8fcb748a38adc86fdedb44b077\" returns successfully" Feb 13 15:21:51.495776 containerd[1442]: time="2025-02-13T15:21:51.495448230Z" level=info msg="StopPodSandbox for \"d96ae196292a605b5cd4d41b835a3232ce63bc7c919a65833fa2c6f5c8cba484\"" Feb 13 15:21:51.495776 containerd[1442]: time="2025-02-13T15:21:51.495628639Z" level=info msg="Ensure that sandbox d96ae196292a605b5cd4d41b835a3232ce63bc7c919a65833fa2c6f5c8cba484 in task-service has been cleanup successfully" Feb 13 15:21:51.495995 containerd[1442]: time="2025-02-13T15:21:51.495967936Z" level=info msg="TearDown network for sandbox \"d96ae196292a605b5cd4d41b835a3232ce63bc7c919a65833fa2c6f5c8cba484\" successfully" Feb 13 15:21:51.495995 containerd[1442]: time="2025-02-13T15:21:51.495990537Z" level=info msg="StopPodSandbox for \"d96ae196292a605b5cd4d41b835a3232ce63bc7c919a65833fa2c6f5c8cba484\" returns successfully" Feb 13 15:21:51.496672 kubelet[2621]: E0213 15:21:51.496646 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:21:51.497260 systemd[1]: run-netns-cni\x2d6930ddf0\x2dec8e\x2d5535\x2d2af7\x2d6c36b39370db.mount: Deactivated successfully. Feb 13 15:21:51.497342 systemd[1]: run-netns-cni\x2df6214c23\x2dd747\x2d9f8b\x2dcaf5\x2d74e9ece8240c.mount: Deactivated successfully. 
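The recurring kubelet "Nameserver limits exceeded" warnings above reflect the glibc resolver's three-nameserver limit: kubelet keeps only the first three `nameserver` entries from the host's resolv.conf and logs the rest as omitted, which is why the "applied nameserver line" shows exactly `1.1.1.1 1.0.0.1 8.8.8.8`. A minimal sketch of that trimming behavior (a hypothetical helper, not kubelet's actual `dns.go` code):

```python
MAX_NAMESERVERS = 3  # glibc resolver limit that kubelet enforces


def trim_nameservers(nameservers):
    """Split a resolv.conf nameserver list into the entries kubelet
    applies and the entries it omits with a warning, mirroring the
    'Nameserver limits exceeded' log message above."""
    return nameservers[:MAX_NAMESERVERS], nameservers[MAX_NAMESERVERS:]


# A host resolv.conf with four nameservers triggers the warning:
applied, omitted = trim_nameservers(["1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"])
print("applied nameserver line:", " ".join(applied))
print("omitted:", " ".join(omitted))
```

The applied line matches the one quoted in the kubelet warning; any fourth or later entry is silently dropped from pod resolv.conf.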
Feb 13 15:21:51.498308 containerd[1442]: time="2025-02-13T15:21:51.498265612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8vvjv,Uid:14b2995f-bdfb-4265-9dc0-06ae16e4bb6c,Namespace:calico-system,Attempt:1,}" Feb 13 15:21:51.498378 containerd[1442]: time="2025-02-13T15:21:51.498326655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qhltz,Uid:c285582e-9191-432e-99ea-cc7fe7db7fbb,Namespace:kube-system,Attempt:1,}" Feb 13 15:21:51.498921 kubelet[2621]: I0213 15:21:51.498901 2621 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4baf65475d91bc2c15306315ac291509c501cd40c636754fa7bb756cbcea258" Feb 13 15:21:51.499914 containerd[1442]: time="2025-02-13T15:21:51.499534436Z" level=info msg="StopPodSandbox for \"c4baf65475d91bc2c15306315ac291509c501cd40c636754fa7bb756cbcea258\"" Feb 13 15:21:51.499914 containerd[1442]: time="2025-02-13T15:21:51.499716085Z" level=info msg="Ensure that sandbox c4baf65475d91bc2c15306315ac291509c501cd40c636754fa7bb756cbcea258 in task-service has been cleanup successfully" Feb 13 15:21:51.500115 containerd[1442]: time="2025-02-13T15:21:51.500090944Z" level=info msg="TearDown network for sandbox \"c4baf65475d91bc2c15306315ac291509c501cd40c636754fa7bb756cbcea258\" successfully" Feb 13 15:21:51.500115 containerd[1442]: time="2025-02-13T15:21:51.500111065Z" level=info msg="StopPodSandbox for \"c4baf65475d91bc2c15306315ac291509c501cd40c636754fa7bb756cbcea258\" returns successfully" Feb 13 15:21:51.500678 containerd[1442]: time="2025-02-13T15:21:51.500633211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-658db9fb4b-xcbwf,Uid:87c71f20-054e-44ba-a99e-c4fefdae6457,Namespace:calico-system,Attempt:1,}" Feb 13 15:21:51.626207 containerd[1442]: time="2025-02-13T15:21:51.626141989Z" level=error msg="Failed to destroy network for sandbox \"a3d2c5311b68c754ed4f52b532c1dbe545a75c82f3b62899ff9015ee5110f3ad\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:51.635342 containerd[1442]: time="2025-02-13T15:21:51.635206887Z" level=error msg="encountered an error cleaning up failed sandbox \"a3d2c5311b68c754ed4f52b532c1dbe545a75c82f3b62899ff9015ee5110f3ad\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:51.635342 containerd[1442]: time="2025-02-13T15:21:51.635292571Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-564fc96ccb-fw9ln,Uid:84c3b521-a067-4602-ad8f-cbca4249dad7,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"a3d2c5311b68c754ed4f52b532c1dbe545a75c82f3b62899ff9015ee5110f3ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:51.635719 kubelet[2621]: E0213 15:21:51.635684 2621 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3d2c5311b68c754ed4f52b532c1dbe545a75c82f3b62899ff9015ee5110f3ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:51.635824 kubelet[2621]: E0213 15:21:51.635743 2621 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3d2c5311b68c754ed4f52b532c1dbe545a75c82f3b62899ff9015ee5110f3ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-564fc96ccb-fw9ln" Feb 13 15:21:51.635824 kubelet[2621]: E0213 15:21:51.635764 2621 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3d2c5311b68c754ed4f52b532c1dbe545a75c82f3b62899ff9015ee5110f3ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-564fc96ccb-fw9ln" Feb 13 15:21:51.635824 kubelet[2621]: E0213 15:21:51.635800 2621 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-564fc96ccb-fw9ln_calico-apiserver(84c3b521-a067-4602-ad8f-cbca4249dad7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-564fc96ccb-fw9ln_calico-apiserver(84c3b521-a067-4602-ad8f-cbca4249dad7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a3d2c5311b68c754ed4f52b532c1dbe545a75c82f3b62899ff9015ee5110f3ad\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-564fc96ccb-fw9ln" podUID="84c3b521-a067-4602-ad8f-cbca4249dad7" Feb 13 15:21:51.640400 containerd[1442]: time="2025-02-13T15:21:51.640351227Z" level=error msg="Failed to destroy network for sandbox \"5fc5ff480b0ac02e88a3e59e79abeae223a5c6ef1c9e47fd64658d555c0f5c92\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:51.640900 containerd[1442]: time="2025-02-13T15:21:51.640871293Z" level=error msg="encountered an error cleaning up 
failed sandbox \"5fc5ff480b0ac02e88a3e59e79abeae223a5c6ef1c9e47fd64658d555c0f5c92\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:51.640962 containerd[1442]: time="2025-02-13T15:21:51.640943297Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8vvjv,Uid:14b2995f-bdfb-4265-9dc0-06ae16e4bb6c,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"5fc5ff480b0ac02e88a3e59e79abeae223a5c6ef1c9e47fd64658d555c0f5c92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:51.641360 kubelet[2621]: E0213 15:21:51.641328 2621 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5fc5ff480b0ac02e88a3e59e79abeae223a5c6ef1c9e47fd64658d555c0f5c92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:51.641402 kubelet[2621]: E0213 15:21:51.641380 2621 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5fc5ff480b0ac02e88a3e59e79abeae223a5c6ef1c9e47fd64658d555c0f5c92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8vvjv" Feb 13 15:21:51.641424 kubelet[2621]: E0213 15:21:51.641399 2621 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"5fc5ff480b0ac02e88a3e59e79abeae223a5c6ef1c9e47fd64658d555c0f5c92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8vvjv" Feb 13 15:21:51.641454 kubelet[2621]: E0213 15:21:51.641435 2621 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-8vvjv_calico-system(14b2995f-bdfb-4265-9dc0-06ae16e4bb6c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-8vvjv_calico-system(14b2995f-bdfb-4265-9dc0-06ae16e4bb6c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5fc5ff480b0ac02e88a3e59e79abeae223a5c6ef1c9e47fd64658d555c0f5c92\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8vvjv" podUID="14b2995f-bdfb-4265-9dc0-06ae16e4bb6c" Feb 13 15:21:51.644033 containerd[1442]: time="2025-02-13T15:21:51.643990811Z" level=error msg="Failed to destroy network for sandbox \"ec5f837bdef6209b21d66e9a11f472ff6e4172b2c0d38f2552319bfe446e9725\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:51.644676 containerd[1442]: time="2025-02-13T15:21:51.644641043Z" level=error msg="encountered an error cleaning up failed sandbox \"ec5f837bdef6209b21d66e9a11f472ff6e4172b2c0d38f2552319bfe446e9725\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:51.644926 containerd[1442]: 
time="2025-02-13T15:21:51.644899456Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2hpnn,Uid:deed100f-387a-4ac5-9252-5efbd4c9fe2b,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"ec5f837bdef6209b21d66e9a11f472ff6e4172b2c0d38f2552319bfe446e9725\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:51.645721 kubelet[2621]: E0213 15:21:51.645600 2621 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec5f837bdef6209b21d66e9a11f472ff6e4172b2c0d38f2552319bfe446e9725\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:51.645974 kubelet[2621]: E0213 15:21:51.645942 2621 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec5f837bdef6209b21d66e9a11f472ff6e4172b2c0d38f2552319bfe446e9725\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-2hpnn" Feb 13 15:21:51.646070 kubelet[2621]: E0213 15:21:51.645980 2621 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec5f837bdef6209b21d66e9a11f472ff6e4172b2c0d38f2552319bfe446e9725\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-2hpnn" Feb 13 15:21:51.646446 kubelet[2621]: E0213 15:21:51.646343 
2621 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-2hpnn_kube-system(deed100f-387a-4ac5-9252-5efbd4c9fe2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-2hpnn_kube-system(deed100f-387a-4ac5-9252-5efbd4c9fe2b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ec5f837bdef6209b21d66e9a11f472ff6e4172b2c0d38f2552319bfe446e9725\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-2hpnn" podUID="deed100f-387a-4ac5-9252-5efbd4c9fe2b" Feb 13 15:21:51.659224 containerd[1442]: time="2025-02-13T15:21:51.659174097Z" level=error msg="Failed to destroy network for sandbox \"1290b96754b10ffaddc73a37c85cf3d37997bcbba54c2c7727a2d695f71242d3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:51.659516 containerd[1442]: time="2025-02-13T15:21:51.659488073Z" level=error msg="encountered an error cleaning up failed sandbox \"1290b96754b10ffaddc73a37c85cf3d37997bcbba54c2c7727a2d695f71242d3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:51.659564 containerd[1442]: time="2025-02-13T15:21:51.659546436Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-564fc96ccb-dqvv5,Uid:de1246bc-b473-496c-be26-bb64afe860ad,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"1290b96754b10ffaddc73a37c85cf3d37997bcbba54c2c7727a2d695f71242d3\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:51.659924 kubelet[2621]: E0213 15:21:51.659872 2621 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1290b96754b10ffaddc73a37c85cf3d37997bcbba54c2c7727a2d695f71242d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:51.662572 containerd[1442]: time="2025-02-13T15:21:51.662522266Z" level=error msg="Failed to destroy network for sandbox \"068df4c4fbceb251808403da7d7f70417a6f0acfb26bbe7e26797b860d1eeaef\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:51.664550 containerd[1442]: time="2025-02-13T15:21:51.664434523Z" level=error msg="encountered an error cleaning up failed sandbox \"068df4c4fbceb251808403da7d7f70417a6f0acfb26bbe7e26797b860d1eeaef\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:51.664550 containerd[1442]: time="2025-02-13T15:21:51.664500286Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qhltz,Uid:c285582e-9191-432e-99ea-cc7fe7db7fbb,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"068df4c4fbceb251808403da7d7f70417a6f0acfb26bbe7e26797b860d1eeaef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:51.664959 
kubelet[2621]: E0213 15:21:51.664786 2621 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"068df4c4fbceb251808403da7d7f70417a6f0acfb26bbe7e26797b860d1eeaef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:51.664959 kubelet[2621]: E0213 15:21:51.664845 2621 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"068df4c4fbceb251808403da7d7f70417a6f0acfb26bbe7e26797b860d1eeaef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-qhltz" Feb 13 15:21:51.664959 kubelet[2621]: E0213 15:21:51.664864 2621 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"068df4c4fbceb251808403da7d7f70417a6f0acfb26bbe7e26797b860d1eeaef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-qhltz" Feb 13 15:21:51.665098 kubelet[2621]: E0213 15:21:51.664906 2621 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-qhltz_kube-system(c285582e-9191-432e-99ea-cc7fe7db7fbb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-qhltz_kube-system(c285582e-9191-432e-99ea-cc7fe7db7fbb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"068df4c4fbceb251808403da7d7f70417a6f0acfb26bbe7e26797b860d1eeaef\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-qhltz" podUID="c285582e-9191-432e-99ea-cc7fe7db7fbb" Feb 13 15:21:51.666768 kubelet[2621]: E0213 15:21:51.666639 2621 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1290b96754b10ffaddc73a37c85cf3d37997bcbba54c2c7727a2d695f71242d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-564fc96ccb-dqvv5" Feb 13 15:21:51.666768 kubelet[2621]: E0213 15:21:51.666683 2621 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1290b96754b10ffaddc73a37c85cf3d37997bcbba54c2c7727a2d695f71242d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-564fc96ccb-dqvv5" Feb 13 15:21:51.666768 kubelet[2621]: E0213 15:21:51.666722 2621 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-564fc96ccb-dqvv5_calico-apiserver(de1246bc-b473-496c-be26-bb64afe860ad)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-564fc96ccb-dqvv5_calico-apiserver(de1246bc-b473-496c-be26-bb64afe860ad)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1290b96754b10ffaddc73a37c85cf3d37997bcbba54c2c7727a2d695f71242d3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-564fc96ccb-dqvv5" podUID="de1246bc-b473-496c-be26-bb64afe860ad" Feb 13 15:21:51.668188 containerd[1442]: time="2025-02-13T15:21:51.668151311Z" level=error msg="Failed to destroy network for sandbox \"222fe074a6a0fa30aabf41e153b4175978e6e304dfcf93e3a4c0de8e32655c0a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:51.668939 containerd[1442]: time="2025-02-13T15:21:51.668889668Z" level=error msg="encountered an error cleaning up failed sandbox \"222fe074a6a0fa30aabf41e153b4175978e6e304dfcf93e3a4c0de8e32655c0a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:51.668972 containerd[1442]: time="2025-02-13T15:21:51.668954551Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-658db9fb4b-xcbwf,Uid:87c71f20-054e-44ba-a99e-c4fefdae6457,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"222fe074a6a0fa30aabf41e153b4175978e6e304dfcf93e3a4c0de8e32655c0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:51.669236 kubelet[2621]: E0213 15:21:51.669206 2621 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"222fe074a6a0fa30aabf41e153b4175978e6e304dfcf93e3a4c0de8e32655c0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:51.669285 kubelet[2621]: E0213 
15:21:51.669250 2621 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"222fe074a6a0fa30aabf41e153b4175978e6e304dfcf93e3a4c0de8e32655c0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-658db9fb4b-xcbwf" Feb 13 15:21:51.669285 kubelet[2621]: E0213 15:21:51.669271 2621 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"222fe074a6a0fa30aabf41e153b4175978e6e304dfcf93e3a4c0de8e32655c0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-658db9fb4b-xcbwf" Feb 13 15:21:51.669389 kubelet[2621]: E0213 15:21:51.669308 2621 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-658db9fb4b-xcbwf_calico-system(87c71f20-054e-44ba-a99e-c4fefdae6457)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-658db9fb4b-xcbwf_calico-system(87c71f20-054e-44ba-a99e-c4fefdae6457)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"222fe074a6a0fa30aabf41e153b4175978e6e304dfcf93e3a4c0de8e32655c0a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-658db9fb4b-xcbwf" podUID="87c71f20-054e-44ba-a99e-c4fefdae6457" Feb 13 15:21:52.463608 systemd[1]: run-netns-cni\x2db37de113\x2db941\x2d44a7\x2d7063\x2df6b25dfdaac9.mount: Deactivated successfully. 
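Every sandbox failure in this burst has the same root cause: the Calico CNI plugin stats `/var/lib/calico/nodename`, a file the calico/node container writes at startup, and the file is absent, so both the `add` and `delete` CNI operations fail until calico/node is running again with `/var/lib/calico` mounted. A minimal diagnostic sketch of that check (an illustrative helper, not the plugin's actual code; the kubectl commands in the comment assume typical Calico install names):

```python
import os

CALICO_DIR = "/var/lib/calico"  # host path the CNI plugin stats, per the log


def check_nodename(calico_dir=CALICO_DIR):
    """Mirror the CNI plugin's precondition: calico/node writes
    <calico_dir>/nodename at startup; while it is absent, every
    RunPodSandbox/StopPodSandbox fails exactly as in the log above."""
    path = os.path.join(calico_dir, "nodename")
    if os.path.isfile(path):
        with open(path) as f:
            return "nodename present: " + f.read().strip()
    return "nodename missing: calico/node not running or /var/lib/calico not mounted"


# On a live node, pair this with (names assumed, adjust to your install):
#   kubectl -n calico-system get pods -l k8s-app=calico-node -o wide
#   kubectl -n calico-system logs daemonset/calico-node -c calico-node
print(check_nodename())
```

Once calico/node recreates the file, the kubelet retries seen below (the `Attempt:2` RunPodSandbox calls) can succeed without any manual cleanup.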
Feb 13 15:21:52.650622 kubelet[2621]: I0213 15:21:52.650400 2621 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec5f837bdef6209b21d66e9a11f472ff6e4172b2c0d38f2552319bfe446e9725" Feb 13 15:21:52.652139 containerd[1442]: time="2025-02-13T15:21:52.651677507Z" level=info msg="StopPodSandbox for \"ec5f837bdef6209b21d66e9a11f472ff6e4172b2c0d38f2552319bfe446e9725\"" Feb 13 15:21:52.652139 containerd[1442]: time="2025-02-13T15:21:52.651835834Z" level=info msg="Ensure that sandbox ec5f837bdef6209b21d66e9a11f472ff6e4172b2c0d38f2552319bfe446e9725 in task-service has been cleanup successfully" Feb 13 15:21:52.654143 systemd[1]: run-netns-cni\x2d52f83f38\x2d9cf6\x2d75ed\x2de7d2\x2dd0f5baadffe7.mount: Deactivated successfully. Feb 13 15:21:52.654697 containerd[1442]: time="2025-02-13T15:21:52.654666093Z" level=info msg="TearDown network for sandbox \"ec5f837bdef6209b21d66e9a11f472ff6e4172b2c0d38f2552319bfe446e9725\" successfully" Feb 13 15:21:52.654697 containerd[1442]: time="2025-02-13T15:21:52.654692935Z" level=info msg="StopPodSandbox for \"ec5f837bdef6209b21d66e9a11f472ff6e4172b2c0d38f2552319bfe446e9725\" returns successfully" Feb 13 15:21:52.654803 kubelet[2621]: I0213 15:21:52.654775 2621 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3d2c5311b68c754ed4f52b532c1dbe545a75c82f3b62899ff9015ee5110f3ad" Feb 13 15:21:52.655373 containerd[1442]: time="2025-02-13T15:21:52.655343927Z" level=info msg="StopPodSandbox for \"f77271c961d7b4a7b309d3aeaa83ef8ef8c4ac2d6b4080280f4c6b5d24455417\"" Feb 13 15:21:52.655495 containerd[1442]: time="2025-02-13T15:21:52.655439011Z" level=info msg="TearDown network for sandbox \"f77271c961d7b4a7b309d3aeaa83ef8ef8c4ac2d6b4080280f4c6b5d24455417\" successfully" Feb 13 15:21:52.655495 containerd[1442]: time="2025-02-13T15:21:52.655449172Z" level=info msg="StopPodSandbox for \"f77271c961d7b4a7b309d3aeaa83ef8ef8c4ac2d6b4080280f4c6b5d24455417\" returns successfully" Feb 13 
15:21:52.656355 containerd[1442]: time="2025-02-13T15:21:52.656310454Z" level=info msg="StopPodSandbox for \"a3d2c5311b68c754ed4f52b532c1dbe545a75c82f3b62899ff9015ee5110f3ad\"" Feb 13 15:21:52.656606 kubelet[2621]: E0213 15:21:52.656574 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:21:52.657705 containerd[1442]: time="2025-02-13T15:21:52.657663240Z" level=info msg="Ensure that sandbox a3d2c5311b68c754ed4f52b532c1dbe545a75c82f3b62899ff9015ee5110f3ad in task-service has been cleanup successfully" Feb 13 15:21:52.657991 containerd[1442]: time="2025-02-13T15:21:52.657908093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2hpnn,Uid:deed100f-387a-4ac5-9252-5efbd4c9fe2b,Namespace:kube-system,Attempt:2,}" Feb 13 15:21:52.661010 kubelet[2621]: I0213 15:21:52.660794 2621 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="068df4c4fbceb251808403da7d7f70417a6f0acfb26bbe7e26797b860d1eeaef" Feb 13 15:21:52.664427 containerd[1442]: time="2025-02-13T15:21:52.662305148Z" level=info msg="StopPodSandbox for \"068df4c4fbceb251808403da7d7f70417a6f0acfb26bbe7e26797b860d1eeaef\"" Feb 13 15:21:52.662801 systemd[1]: run-netns-cni\x2d3474cd02\x2d8a38\x2dcfe1\x2d6e86\x2dbe5da157b6b1.mount: Deactivated successfully. 
Feb 13 15:21:52.665145 containerd[1442]: time="2025-02-13T15:21:52.665095685Z" level=info msg="Ensure that sandbox 068df4c4fbceb251808403da7d7f70417a6f0acfb26bbe7e26797b860d1eeaef in task-service has been cleanup successfully"
Feb 13 15:21:52.666116 containerd[1442]: time="2025-02-13T15:21:52.665749838Z" level=info msg="TearDown network for sandbox \"a3d2c5311b68c754ed4f52b532c1dbe545a75c82f3b62899ff9015ee5110f3ad\" successfully"
Feb 13 15:21:52.666116 containerd[1442]: time="2025-02-13T15:21:52.665777759Z" level=info msg="StopPodSandbox for \"a3d2c5311b68c754ed4f52b532c1dbe545a75c82f3b62899ff9015ee5110f3ad\" returns successfully"
Feb 13 15:21:52.667722 containerd[1442]: time="2025-02-13T15:21:52.667679092Z" level=info msg="TearDown network for sandbox \"068df4c4fbceb251808403da7d7f70417a6f0acfb26bbe7e26797b860d1eeaef\" successfully"
Feb 13 15:21:52.667722 containerd[1442]: time="2025-02-13T15:21:52.667708454Z" level=info msg="StopPodSandbox for \"068df4c4fbceb251808403da7d7f70417a6f0acfb26bbe7e26797b860d1eeaef\" returns successfully"
Feb 13 15:21:52.668046 systemd[1]: run-netns-cni\x2d1159664e\x2d3437\x2d0048\x2d2e62\x2d0e7d5dde940c.mount: Deactivated successfully.
Feb 13 15:21:52.668825 containerd[1442]: time="2025-02-13T15:21:52.668315564Z" level=info msg="StopPodSandbox for \"892160d9b3fdcfef5dfbb8a15226ee03d30c00255d1c8fffacad07d070348325\""
Feb 13 15:21:52.668825 containerd[1442]: time="2025-02-13T15:21:52.668439530Z" level=info msg="StopPodSandbox for \"74aa2dc63f90f0fe66425516e5fd4cd00e009a8fcb748a38adc86fdedb44b077\""
Feb 13 15:21:52.668825 containerd[1442]: time="2025-02-13T15:21:52.668693022Z" level=info msg="TearDown network for sandbox \"74aa2dc63f90f0fe66425516e5fd4cd00e009a8fcb748a38adc86fdedb44b077\" successfully"
Feb 13 15:21:52.668825 containerd[1442]: time="2025-02-13T15:21:52.668786747Z" level=info msg="StopPodSandbox for \"74aa2dc63f90f0fe66425516e5fd4cd00e009a8fcb748a38adc86fdedb44b077\" returns successfully"
Feb 13 15:21:52.669092 containerd[1442]: time="2025-02-13T15:21:52.668504413Z" level=info msg="TearDown network for sandbox \"892160d9b3fdcfef5dfbb8a15226ee03d30c00255d1c8fffacad07d070348325\" successfully"
Feb 13 15:21:52.669148 containerd[1442]: time="2025-02-13T15:21:52.669091602Z" level=info msg="StopPodSandbox for \"892160d9b3fdcfef5dfbb8a15226ee03d30c00255d1c8fffacad07d070348325\" returns successfully"
Feb 13 15:21:52.669959 kubelet[2621]: E0213 15:21:52.669934 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:21:52.670409 containerd[1442]: time="2025-02-13T15:21:52.670379985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-564fc96ccb-fw9ln,Uid:84c3b521-a067-4602-ad8f-cbca4249dad7,Namespace:calico-apiserver,Attempt:2,}"
Feb 13 15:21:52.671618 containerd[1442]: time="2025-02-13T15:21:52.671583004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qhltz,Uid:c285582e-9191-432e-99ea-cc7fe7db7fbb,Namespace:kube-system,Attempt:2,}"
Feb 13 15:21:52.673338 kubelet[2621]: I0213 15:21:52.673123 2621 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="222fe074a6a0fa30aabf41e153b4175978e6e304dfcf93e3a4c0de8e32655c0a"
Feb 13 15:21:52.674177 containerd[1442]: time="2025-02-13T15:21:52.674144130Z" level=info msg="StopPodSandbox for \"222fe074a6a0fa30aabf41e153b4175978e6e304dfcf93e3a4c0de8e32655c0a\""
Feb 13 15:21:52.674318 containerd[1442]: time="2025-02-13T15:21:52.674299497Z" level=info msg="Ensure that sandbox 222fe074a6a0fa30aabf41e153b4175978e6e304dfcf93e3a4c0de8e32655c0a in task-service has been cleanup successfully"
Feb 13 15:21:52.677642 containerd[1442]: time="2025-02-13T15:21:52.677577258Z" level=info msg="TearDown network for sandbox \"222fe074a6a0fa30aabf41e153b4175978e6e304dfcf93e3a4c0de8e32655c0a\" successfully"
Feb 13 15:21:52.677642 containerd[1442]: time="2025-02-13T15:21:52.677610380Z" level=info msg="StopPodSandbox for \"222fe074a6a0fa30aabf41e153b4175978e6e304dfcf93e3a4c0de8e32655c0a\" returns successfully"
Feb 13 15:21:52.678199 containerd[1442]: time="2025-02-13T15:21:52.678174808Z" level=info msg="StopPodSandbox for \"c4baf65475d91bc2c15306315ac291509c501cd40c636754fa7bb756cbcea258\""
Feb 13 15:21:52.678274 containerd[1442]: time="2025-02-13T15:21:52.678256532Z" level=info msg="TearDown network for sandbox \"c4baf65475d91bc2c15306315ac291509c501cd40c636754fa7bb756cbcea258\" successfully"
Feb 13 15:21:52.678274 containerd[1442]: time="2025-02-13T15:21:52.678269852Z" level=info msg="StopPodSandbox for \"c4baf65475d91bc2c15306315ac291509c501cd40c636754fa7bb756cbcea258\" returns successfully"
Feb 13 15:21:52.678904 containerd[1442]: time="2025-02-13T15:21:52.678872682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-658db9fb4b-xcbwf,Uid:87c71f20-054e-44ba-a99e-c4fefdae6457,Namespace:calico-system,Attempt:2,}"
Feb 13 15:21:52.679316 kubelet[2621]: I0213 15:21:52.679293 2621 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1290b96754b10ffaddc73a37c85cf3d37997bcbba54c2c7727a2d695f71242d3"
Feb 13 15:21:52.679945 containerd[1442]: time="2025-02-13T15:21:52.679911653Z" level=info msg="StopPodSandbox for \"1290b96754b10ffaddc73a37c85cf3d37997bcbba54c2c7727a2d695f71242d3\""
Feb 13 15:21:52.680128 containerd[1442]: time="2025-02-13T15:21:52.680083101Z" level=info msg="Ensure that sandbox 1290b96754b10ffaddc73a37c85cf3d37997bcbba54c2c7727a2d695f71242d3 in task-service has been cleanup successfully"
Feb 13 15:21:52.682327 containerd[1442]: time="2025-02-13T15:21:52.682291250Z" level=info msg="TearDown network for sandbox \"1290b96754b10ffaddc73a37c85cf3d37997bcbba54c2c7727a2d695f71242d3\" successfully"
Feb 13 15:21:52.682327 containerd[1442]: time="2025-02-13T15:21:52.682322091Z" level=info msg="StopPodSandbox for \"1290b96754b10ffaddc73a37c85cf3d37997bcbba54c2c7727a2d695f71242d3\" returns successfully"
Feb 13 15:21:52.683342 kubelet[2621]: I0213 15:21:52.683171 2621 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5fc5ff480b0ac02e88a3e59e79abeae223a5c6ef1c9e47fd64658d555c0f5c92"
Feb 13 15:21:52.683408 containerd[1442]: time="2025-02-13T15:21:52.683355302Z" level=info msg="StopPodSandbox for \"b21e1a8f17e86fe71cf0ca5f7bf255d3e92b462acaa3367bcccf912c971932fc\""
Feb 13 15:21:52.683450 containerd[1442]: time="2025-02-13T15:21:52.683439466Z" level=info msg="TearDown network for sandbox \"b21e1a8f17e86fe71cf0ca5f7bf255d3e92b462acaa3367bcccf912c971932fc\" successfully"
Feb 13 15:21:52.683473 containerd[1442]: time="2025-02-13T15:21:52.683451147Z" level=info msg="StopPodSandbox for \"b21e1a8f17e86fe71cf0ca5f7bf255d3e92b462acaa3367bcccf912c971932fc\" returns successfully"
Feb 13 15:21:52.684429 containerd[1442]: time="2025-02-13T15:21:52.683927490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-564fc96ccb-dqvv5,Uid:de1246bc-b473-496c-be26-bb64afe860ad,Namespace:calico-apiserver,Attempt:2,}"
Feb 13 15:21:52.684429 containerd[1442]: time="2025-02-13T15:21:52.684071537Z" level=info msg="StopPodSandbox for \"5fc5ff480b0ac02e88a3e59e79abeae223a5c6ef1c9e47fd64658d555c0f5c92\""
Feb 13 15:21:52.684429 containerd[1442]: time="2025-02-13T15:21:52.684239585Z" level=info msg="Ensure that sandbox 5fc5ff480b0ac02e88a3e59e79abeae223a5c6ef1c9e47fd64658d555c0f5c92 in task-service has been cleanup successfully"
Feb 13 15:21:52.685622 containerd[1442]: time="2025-02-13T15:21:52.685589092Z" level=info msg="TearDown network for sandbox \"5fc5ff480b0ac02e88a3e59e79abeae223a5c6ef1c9e47fd64658d555c0f5c92\" successfully"
Feb 13 15:21:52.685757 containerd[1442]: time="2025-02-13T15:21:52.685741459Z" level=info msg="StopPodSandbox for \"5fc5ff480b0ac02e88a3e59e79abeae223a5c6ef1c9e47fd64658d555c0f5c92\" returns successfully"
Feb 13 15:21:52.687620 containerd[1442]: time="2025-02-13T15:21:52.687585870Z" level=info msg="StopPodSandbox for \"d96ae196292a605b5cd4d41b835a3232ce63bc7c919a65833fa2c6f5c8cba484\""
Feb 13 15:21:52.687721 containerd[1442]: time="2025-02-13T15:21:52.687680154Z" level=info msg="TearDown network for sandbox \"d96ae196292a605b5cd4d41b835a3232ce63bc7c919a65833fa2c6f5c8cba484\" successfully"
Feb 13 15:21:52.687721 containerd[1442]: time="2025-02-13T15:21:52.687690475Z" level=info msg="StopPodSandbox for \"d96ae196292a605b5cd4d41b835a3232ce63bc7c919a65833fa2c6f5c8cba484\" returns successfully"
Feb 13 15:21:52.688502 containerd[1442]: time="2025-02-13T15:21:52.688476073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8vvjv,Uid:14b2995f-bdfb-4265-9dc0-06ae16e4bb6c,Namespace:calico-system,Attempt:2,}"
Feb 13 15:21:52.807575 containerd[1442]: time="2025-02-13T15:21:52.807441955Z" level=error msg="Failed to destroy network for sandbox \"5a990fb5682c636e25fb3a50d8fe0024788508fe708016321710033a0552c448\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:21:52.810067 containerd[1442]: time="2025-02-13T15:21:52.808865864Z" level=error msg="encountered an error cleaning up failed sandbox \"5a990fb5682c636e25fb3a50d8fe0024788508fe708016321710033a0552c448\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:21:52.810067 containerd[1442]: time="2025-02-13T15:21:52.808932268Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2hpnn,Uid:deed100f-387a-4ac5-9252-5efbd4c9fe2b,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"5a990fb5682c636e25fb3a50d8fe0024788508fe708016321710033a0552c448\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:21:52.810208 kubelet[2621]: E0213 15:21:52.809181 2621 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a990fb5682c636e25fb3a50d8fe0024788508fe708016321710033a0552c448\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:21:52.810208 kubelet[2621]: E0213 15:21:52.809247 2621 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a990fb5682c636e25fb3a50d8fe0024788508fe708016321710033a0552c448\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-2hpnn"
Feb 13 15:21:52.810208 kubelet[2621]: E0213 15:21:52.809270 2621 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a990fb5682c636e25fb3a50d8fe0024788508fe708016321710033a0552c448\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-2hpnn"
Feb 13 15:21:52.810342 kubelet[2621]: E0213 15:21:52.809317 2621 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-2hpnn_kube-system(deed100f-387a-4ac5-9252-5efbd4c9fe2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-2hpnn_kube-system(deed100f-387a-4ac5-9252-5efbd4c9fe2b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5a990fb5682c636e25fb3a50d8fe0024788508fe708016321710033a0552c448\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-2hpnn" podUID="deed100f-387a-4ac5-9252-5efbd4c9fe2b"
Feb 13 15:21:52.817742 containerd[1442]: time="2025-02-13T15:21:52.817687978Z" level=error msg="Failed to destroy network for sandbox \"74cd138a61eecc2e6903a038baafdc496135da9e7c240b4fd5953c85a00d83ec\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:21:52.818127 containerd[1442]: time="2025-02-13T15:21:52.818017594Z" level=error msg="encountered an error cleaning up failed sandbox \"74cd138a61eecc2e6903a038baafdc496135da9e7c240b4fd5953c85a00d83ec\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:21:52.818195 containerd[1442]: time="2025-02-13T15:21:52.818167361Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-564fc96ccb-fw9ln,Uid:84c3b521-a067-4602-ad8f-cbca4249dad7,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"74cd138a61eecc2e6903a038baafdc496135da9e7c240b4fd5953c85a00d83ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:21:52.818406 kubelet[2621]: E0213 15:21:52.818371 2621 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74cd138a61eecc2e6903a038baafdc496135da9e7c240b4fd5953c85a00d83ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:21:52.818474 kubelet[2621]: E0213 15:21:52.818425 2621 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74cd138a61eecc2e6903a038baafdc496135da9e7c240b4fd5953c85a00d83ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-564fc96ccb-fw9ln"
Feb 13 15:21:52.818474 kubelet[2621]: E0213 15:21:52.818445 2621 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74cd138a61eecc2e6903a038baafdc496135da9e7c240b4fd5953c85a00d83ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-564fc96ccb-fw9ln"
Feb 13 15:21:52.818548 kubelet[2621]: E0213 15:21:52.818485 2621 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-564fc96ccb-fw9ln_calico-apiserver(84c3b521-a067-4602-ad8f-cbca4249dad7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-564fc96ccb-fw9ln_calico-apiserver(84c3b521-a067-4602-ad8f-cbca4249dad7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"74cd138a61eecc2e6903a038baafdc496135da9e7c240b4fd5953c85a00d83ec\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-564fc96ccb-fw9ln" podUID="84c3b521-a067-4602-ad8f-cbca4249dad7"
Feb 13 15:21:52.824158 containerd[1442]: time="2025-02-13T15:21:52.824102733Z" level=error msg="Failed to destroy network for sandbox \"f0b420624c0ef4de8293125d4aa3764bbbc9ad891299e4836ec706ef73b01bc2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:21:52.824632 containerd[1442]: time="2025-02-13T15:21:52.824597077Z" level=error msg="encountered an error cleaning up failed sandbox \"f0b420624c0ef4de8293125d4aa3764bbbc9ad891299e4836ec706ef73b01bc2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:21:52.824693 containerd[1442]: time="2025-02-13T15:21:52.824665440Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qhltz,Uid:c285582e-9191-432e-99ea-cc7fe7db7fbb,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"f0b420624c0ef4de8293125d4aa3764bbbc9ad891299e4836ec706ef73b01bc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:21:52.824885 kubelet[2621]: E0213 15:21:52.824849 2621 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0b420624c0ef4de8293125d4aa3764bbbc9ad891299e4836ec706ef73b01bc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:21:52.824981 kubelet[2621]: E0213 15:21:52.824900 2621 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0b420624c0ef4de8293125d4aa3764bbbc9ad891299e4836ec706ef73b01bc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-qhltz"
Feb 13 15:21:52.824981 kubelet[2621]: E0213 15:21:52.824921 2621 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0b420624c0ef4de8293125d4aa3764bbbc9ad891299e4836ec706ef73b01bc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-qhltz"
Feb 13 15:21:52.824981 kubelet[2621]: E0213 15:21:52.824963 2621 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-qhltz_kube-system(c285582e-9191-432e-99ea-cc7fe7db7fbb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-qhltz_kube-system(c285582e-9191-432e-99ea-cc7fe7db7fbb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f0b420624c0ef4de8293125d4aa3764bbbc9ad891299e4836ec706ef73b01bc2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-qhltz" podUID="c285582e-9191-432e-99ea-cc7fe7db7fbb"
Feb 13 15:21:52.862199 containerd[1442]: time="2025-02-13T15:21:52.862121519Z" level=error msg="Failed to destroy network for sandbox \"2fb37880992bf25b4ffdc0417195ecffe00d1f91c138c1d050e3c108ce682cc8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:21:52.864190 containerd[1442]: time="2025-02-13T15:21:52.864146059Z" level=error msg="encountered an error cleaning up failed sandbox \"2fb37880992bf25b4ffdc0417195ecffe00d1f91c138c1d050e3c108ce682cc8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:21:52.864328 containerd[1442]: time="2025-02-13T15:21:52.864223903Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-658db9fb4b-xcbwf,Uid:87c71f20-054e-44ba-a99e-c4fefdae6457,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"2fb37880992bf25b4ffdc0417195ecffe00d1f91c138c1d050e3c108ce682cc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:21:52.864644 kubelet[2621]: E0213 15:21:52.864525 2621 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fb37880992bf25b4ffdc0417195ecffe00d1f91c138c1d050e3c108ce682cc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:21:52.864644 kubelet[2621]: E0213 15:21:52.864595 2621 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fb37880992bf25b4ffdc0417195ecffe00d1f91c138c1d050e3c108ce682cc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-658db9fb4b-xcbwf"
Feb 13 15:21:52.864644 kubelet[2621]: E0213 15:21:52.864614 2621 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fb37880992bf25b4ffdc0417195ecffe00d1f91c138c1d050e3c108ce682cc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-658db9fb4b-xcbwf"
Feb 13 15:21:52.864822 kubelet[2621]: E0213 15:21:52.864658 2621 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-658db9fb4b-xcbwf_calico-system(87c71f20-054e-44ba-a99e-c4fefdae6457)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-658db9fb4b-xcbwf_calico-system(87c71f20-054e-44ba-a99e-c4fefdae6457)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2fb37880992bf25b4ffdc0417195ecffe00d1f91c138c1d050e3c108ce682cc8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-658db9fb4b-xcbwf" podUID="87c71f20-054e-44ba-a99e-c4fefdae6457"
Feb 13 15:21:52.881230 containerd[1442]: time="2025-02-13T15:21:52.881163774Z" level=error msg="Failed to destroy network for sandbox \"f2a4be37ee45e0a2e04d64278828b4f8471e3f621e93d94fac9f67321674f84b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:21:52.882122 containerd[1442]: time="2025-02-13T15:21:52.881998935Z" level=error msg="encountered an error cleaning up failed sandbox \"f2a4be37ee45e0a2e04d64278828b4f8471e3f621e93d94fac9f67321674f84b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:21:52.882122 containerd[1442]: time="2025-02-13T15:21:52.882089940Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8vvjv,Uid:14b2995f-bdfb-4265-9dc0-06ae16e4bb6c,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"f2a4be37ee45e0a2e04d64278828b4f8471e3f621e93d94fac9f67321674f84b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:21:52.882625 kubelet[2621]: E0213 15:21:52.882587 2621 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2a4be37ee45e0a2e04d64278828b4f8471e3f621e93d94fac9f67321674f84b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:21:52.882706 kubelet[2621]: E0213 15:21:52.882643 2621 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2a4be37ee45e0a2e04d64278828b4f8471e3f621e93d94fac9f67321674f84b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8vvjv"
Feb 13 15:21:52.882706 kubelet[2621]: E0213 15:21:52.882662 2621 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2a4be37ee45e0a2e04d64278828b4f8471e3f621e93d94fac9f67321674f84b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8vvjv"
Feb 13 15:21:52.882759 kubelet[2621]: E0213 15:21:52.882713 2621 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-8vvjv_calico-system(14b2995f-bdfb-4265-9dc0-06ae16e4bb6c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-8vvjv_calico-system(14b2995f-bdfb-4265-9dc0-06ae16e4bb6c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f2a4be37ee45e0a2e04d64278828b4f8471e3f621e93d94fac9f67321674f84b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8vvjv" podUID="14b2995f-bdfb-4265-9dc0-06ae16e4bb6c"
Feb 13 15:21:52.890967 containerd[1442]: time="2025-02-13T15:21:52.890925934Z" level=error msg="Failed to destroy network for sandbox \"76d4658126becba445517a90b6b43c46a1c22581049ca91a8e0b6be5d84a6a5b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:21:52.891583 containerd[1442]: time="2025-02-13T15:21:52.891542764Z" level=error msg="encountered an error cleaning up failed sandbox \"76d4658126becba445517a90b6b43c46a1c22581049ca91a8e0b6be5d84a6a5b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:21:52.891676 containerd[1442]: time="2025-02-13T15:21:52.891630608Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-564fc96ccb-dqvv5,Uid:de1246bc-b473-496c-be26-bb64afe860ad,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"76d4658126becba445517a90b6b43c46a1c22581049ca91a8e0b6be5d84a6a5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:21:52.891873 kubelet[2621]: E0213 15:21:52.891836 2621 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76d4658126becba445517a90b6b43c46a1c22581049ca91a8e0b6be5d84a6a5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:21:52.892043 kubelet[2621]: E0213 15:21:52.891901 2621 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76d4658126becba445517a90b6b43c46a1c22581049ca91a8e0b6be5d84a6a5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-564fc96ccb-dqvv5"
Feb 13 15:21:52.892043 kubelet[2621]: E0213 15:21:52.891921 2621 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76d4658126becba445517a90b6b43c46a1c22581049ca91a8e0b6be5d84a6a5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-564fc96ccb-dqvv5"
Feb 13 15:21:52.892043 kubelet[2621]: E0213 15:21:52.891965 2621 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-564fc96ccb-dqvv5_calico-apiserver(de1246bc-b473-496c-be26-bb64afe860ad)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-564fc96ccb-dqvv5_calico-apiserver(de1246bc-b473-496c-be26-bb64afe860ad)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"76d4658126becba445517a90b6b43c46a1c22581049ca91a8e0b6be5d84a6a5b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-564fc96ccb-dqvv5" podUID="de1246bc-b473-496c-be26-bb64afe860ad"
Feb 13 15:21:53.463837 systemd[1]: run-netns-cni\x2dd06e71dc\x2de495\x2d702f\x2d6b74\x2d29baa18f5257.mount: Deactivated successfully.
Feb 13 15:21:53.463918 systemd[1]: run-netns-cni\x2d8a82b3f1\x2dc5cd\x2d3e60\x2d062f\x2deb162c7bbe06.mount: Deactivated successfully.
Feb 13 15:21:53.463964 systemd[1]: run-netns-cni\x2dfcffe9f1\x2de042\x2dafe3\x2dbdd1\x2df6fc8cd3c54f.mount: Deactivated successfully.
Feb 13 15:21:53.687822 kubelet[2621]: I0213 15:21:53.687181 2621 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a990fb5682c636e25fb3a50d8fe0024788508fe708016321710033a0552c448"
Feb 13 15:21:53.688299 containerd[1442]: time="2025-02-13T15:21:53.688260312Z" level=info msg="StopPodSandbox for \"5a990fb5682c636e25fb3a50d8fe0024788508fe708016321710033a0552c448\""
Feb 13 15:21:53.688506 containerd[1442]: time="2025-02-13T15:21:53.688458121Z" level=info msg="Ensure that sandbox 5a990fb5682c636e25fb3a50d8fe0024788508fe708016321710033a0552c448 in task-service has been cleanup successfully"
Feb 13 15:21:53.689196 containerd[1442]: time="2025-02-13T15:21:53.689161395Z" level=info msg="TearDown network for sandbox \"5a990fb5682c636e25fb3a50d8fe0024788508fe708016321710033a0552c448\" successfully"
Feb 13 15:21:53.689196 containerd[1442]: time="2025-02-13T15:21:53.689193036Z" level=info msg="StopPodSandbox for \"5a990fb5682c636e25fb3a50d8fe0024788508fe708016321710033a0552c448\" returns successfully"
Feb 13 15:21:53.690494 systemd[1]: run-netns-cni\x2d68ee09fc\x2d3208\x2db8c0\x2d415e\x2dd9e448445faa.mount: Deactivated successfully.
Feb 13 15:21:53.693193 containerd[1442]: time="2025-02-13T15:21:53.691342779Z" level=info msg="StopPodSandbox for \"ec5f837bdef6209b21d66e9a11f472ff6e4172b2c0d38f2552319bfe446e9725\""
Feb 13 15:21:53.693193 containerd[1442]: time="2025-02-13T15:21:53.691763559Z" level=info msg="TearDown network for sandbox \"ec5f837bdef6209b21d66e9a11f472ff6e4172b2c0d38f2552319bfe446e9725\" successfully"
Feb 13 15:21:53.693193 containerd[1442]: time="2025-02-13T15:21:53.691783360Z" level=info msg="StopPodSandbox for \"ec5f837bdef6209b21d66e9a11f472ff6e4172b2c0d38f2552319bfe446e9725\" returns successfully"
Feb 13 15:21:53.693193 containerd[1442]: time="2025-02-13T15:21:53.693009019Z" level=info msg="StopPodSandbox for \"f77271c961d7b4a7b309d3aeaa83ef8ef8c4ac2d6b4080280f4c6b5d24455417\""
Feb 13 15:21:53.693193 containerd[1442]: time="2025-02-13T15:21:53.693137665Z" level=info msg="TearDown network for sandbox \"f77271c961d7b4a7b309d3aeaa83ef8ef8c4ac2d6b4080280f4c6b5d24455417\" successfully"
Feb 13 15:21:53.693193 containerd[1442]: time="2025-02-13T15:21:53.693161666Z" level=info msg="StopPodSandbox for \"f77271c961d7b4a7b309d3aeaa83ef8ef8c4ac2d6b4080280f4c6b5d24455417\" returns successfully"
Feb 13 15:21:53.693815 kubelet[2621]: E0213 15:21:53.693377 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:21:53.694192 containerd[1442]: time="2025-02-13T15:21:53.694160474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2hpnn,Uid:deed100f-387a-4ac5-9252-5efbd4c9fe2b,Namespace:kube-system,Attempt:3,}"
Feb 13 15:21:53.695872 kubelet[2621]: I0213 15:21:53.695848 2621 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74cd138a61eecc2e6903a038baafdc496135da9e7c240b4fd5953c85a00d83ec"
Feb 13 15:21:53.696620 containerd[1442]: time="2025-02-13T15:21:53.696581429Z" level=info msg="StopPodSandbox for \"74cd138a61eecc2e6903a038baafdc496135da9e7c240b4fd5953c85a00d83ec\""
Feb 13 15:21:53.696788 containerd[1442]: time="2025-02-13T15:21:53.696767398Z" level=info msg="Ensure that sandbox 74cd138a61eecc2e6903a038baafdc496135da9e7c240b4fd5953c85a00d83ec in task-service has been cleanup successfully"
Feb 13 15:21:53.699415 systemd[1]: run-netns-cni\x2da2142682\x2d7c97\x2d10c5\x2d06d1\x2d9b643d00239b.mount: Deactivated successfully.
Feb 13 15:21:53.701851 containerd[1442]: time="2025-02-13T15:21:53.701817159Z" level=info msg="TearDown network for sandbox \"74cd138a61eecc2e6903a038baafdc496135da9e7c240b4fd5953c85a00d83ec\" successfully"
Feb 13 15:21:53.701851 containerd[1442]: time="2025-02-13T15:21:53.701844400Z" level=info msg="StopPodSandbox for \"74cd138a61eecc2e6903a038baafdc496135da9e7c240b4fd5953c85a00d83ec\" returns successfully"
Feb 13 15:21:53.701944 kubelet[2621]: I0213 15:21:53.701918 2621 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0b420624c0ef4de8293125d4aa3764bbbc9ad891299e4836ec706ef73b01bc2"
Feb 13 15:21:53.702486 containerd[1442]: time="2025-02-13T15:21:53.702350625Z" level=info msg="StopPodSandbox for \"a3d2c5311b68c754ed4f52b532c1dbe545a75c82f3b62899ff9015ee5110f3ad\""
Feb 13 15:21:53.702486 containerd[1442]: time="2025-02-13T15:21:53.702431829Z" level=info msg="TearDown network for sandbox \"a3d2c5311b68c754ed4f52b532c1dbe545a75c82f3b62899ff9015ee5110f3ad\" successfully"
Feb 13 15:21:53.702486 containerd[1442]: time="2025-02-13T15:21:53.702441829Z" level=info msg="StopPodSandbox for \"a3d2c5311b68c754ed4f52b532c1dbe545a75c82f3b62899ff9015ee5110f3ad\" returns successfully"
Feb 13 15:21:53.703475 containerd[1442]: time="2025-02-13T15:21:53.703314111Z" level=info msg="StopPodSandbox for \"892160d9b3fdcfef5dfbb8a15226ee03d30c00255d1c8fffacad07d070348325\""
Feb 13 15:21:53.703475 containerd[1442]: time="2025-02-13T15:21:53.703398635Z" level=info msg="TearDown network for sandbox \"892160d9b3fdcfef5dfbb8a15226ee03d30c00255d1c8fffacad07d070348325\" successfully"
Feb 13 15:21:53.703475 containerd[1442]: time="2025-02-13T15:21:53.703408835Z" level=info msg="StopPodSandbox for \"892160d9b3fdcfef5dfbb8a15226ee03d30c00255d1c8fffacad07d070348325\" returns successfully"
Feb 13 15:21:53.703475 containerd[1442]: time="2025-02-13T15:21:53.703459678Z" level=info msg="StopPodSandbox for \"f0b420624c0ef4de8293125d4aa3764bbbc9ad891299e4836ec706ef73b01bc2\""
Feb 13 15:21:53.703599 containerd[1442]: time="2025-02-13T15:21:53.703572283Z" level=info msg="Ensure that sandbox f0b420624c0ef4de8293125d4aa3764bbbc9ad891299e4836ec706ef73b01bc2 in task-service has been cleanup successfully"
Feb 13 15:21:53.704088 containerd[1442]: time="2025-02-13T15:21:53.704061706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-564fc96ccb-fw9ln,Uid:84c3b521-a067-4602-ad8f-cbca4249dad7,Namespace:calico-apiserver,Attempt:3,}"
Feb 13 15:21:53.704794 containerd[1442]: time="2025-02-13T15:21:53.704765860Z" level=info msg="TearDown network for sandbox \"f0b420624c0ef4de8293125d4aa3764bbbc9ad891299e4836ec706ef73b01bc2\" successfully"
Feb 13 15:21:53.704948 containerd[1442]: time="2025-02-13T15:21:53.704791541Z" level=info msg="StopPodSandbox for \"f0b420624c0ef4de8293125d4aa3764bbbc9ad891299e4836ec706ef73b01bc2\" returns successfully"
Feb 13 15:21:53.705354 containerd[1442]: time="2025-02-13T15:21:53.705154159Z" level=info msg="StopPodSandbox for \"068df4c4fbceb251808403da7d7f70417a6f0acfb26bbe7e26797b860d1eeaef\""
Feb 13 15:21:53.705354 containerd[1442]: time="2025-02-13T15:21:53.705245923Z" level=info msg="TearDown network for sandbox \"068df4c4fbceb251808403da7d7f70417a6f0acfb26bbe7e26797b860d1eeaef\" successfully"
Feb 13 15:21:53.705354 containerd[1442]: time="2025-02-13T15:21:53.705255763Z" level=info msg="StopPodSandbox for \"068df4c4fbceb251808403da7d7f70417a6f0acfb26bbe7e26797b860d1eeaef\" returns successfully"
Feb 13 15:21:53.705575 containerd[1442]: time="2025-02-13T15:21:53.705526176Z" level=info msg="StopPodSandbox for \"74aa2dc63f90f0fe66425516e5fd4cd00e009a8fcb748a38adc86fdedb44b077\""
Feb 13 15:21:53.705629 containerd[1442]: time="2025-02-13T15:21:53.705610300Z" level=info msg="TearDown network for sandbox \"74aa2dc63f90f0fe66425516e5fd4cd00e009a8fcb748a38adc86fdedb44b077\" successfully"
Feb 13 15:21:53.705629 containerd[1442]: time="2025-02-13T15:21:53.705624501Z" level=info msg="StopPodSandbox for \"74aa2dc63f90f0fe66425516e5fd4cd00e009a8fcb748a38adc86fdedb44b077\" returns successfully"
Feb 13 15:21:53.705857 kubelet[2621]: E0213 15:21:53.705828 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:21:53.706083 containerd[1442]: time="2025-02-13T15:21:53.706059722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qhltz,Uid:c285582e-9191-432e-99ea-cc7fe7db7fbb,Namespace:kube-system,Attempt:3,}"
Feb 13 15:21:53.706397 kubelet[2621]: I0213 15:21:53.706372 2621 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2a4be37ee45e0a2e04d64278828b4f8471e3f621e93d94fac9f67321674f84b"
Feb 13 15:21:53.706810 containerd[1442]: time="2025-02-13T15:21:53.706786996Z" level=info msg="StopPodSandbox for \"f2a4be37ee45e0a2e04d64278828b4f8471e3f621e93d94fac9f67321674f84b\""
Feb 13 15:21:53.707352 containerd[1442]: time="2025-02-13T15:21:53.707325982Z" level=info msg="Ensure that sandbox f2a4be37ee45e0a2e04d64278828b4f8471e3f621e93d94fac9f67321674f84b in task-service has been cleanup successfully"
Feb 13 15:21:53.707907 containerd[1442]: time="2025-02-13T15:21:53.707881929Z" level=info msg="TearDown network for sandbox \"f2a4be37ee45e0a2e04d64278828b4f8471e3f621e93d94fac9f67321674f84b\" successfully"
Feb 13 15:21:53.707997 containerd[1442]: time="2025-02-13T15:21:53.707971373Z" level=info msg="StopPodSandbox for
\"f2a4be37ee45e0a2e04d64278828b4f8471e3f621e93d94fac9f67321674f84b\" returns successfully" Feb 13 15:21:53.709836 containerd[1442]: time="2025-02-13T15:21:53.709692735Z" level=info msg="StopPodSandbox for \"5fc5ff480b0ac02e88a3e59e79abeae223a5c6ef1c9e47fd64658d555c0f5c92\"" Feb 13 15:21:53.709836 containerd[1442]: time="2025-02-13T15:21:53.709769259Z" level=info msg="TearDown network for sandbox \"5fc5ff480b0ac02e88a3e59e79abeae223a5c6ef1c9e47fd64658d555c0f5c92\" successfully" Feb 13 15:21:53.709836 containerd[1442]: time="2025-02-13T15:21:53.709779659Z" level=info msg="StopPodSandbox for \"5fc5ff480b0ac02e88a3e59e79abeae223a5c6ef1c9e47fd64658d555c0f5c92\" returns successfully" Feb 13 15:21:53.710651 containerd[1442]: time="2025-02-13T15:21:53.710431130Z" level=info msg="StopPodSandbox for \"d96ae196292a605b5cd4d41b835a3232ce63bc7c919a65833fa2c6f5c8cba484\"" Feb 13 15:21:53.710924 systemd[1]: run-netns-cni\x2d088698b9\x2dcdee\x2dc6e9\x2d399d\x2d0293875f1bbe.mount: Deactivated successfully. Feb 13 15:21:53.712083 containerd[1442]: time="2025-02-13T15:21:53.711018599Z" level=info msg="TearDown network for sandbox \"d96ae196292a605b5cd4d41b835a3232ce63bc7c919a65833fa2c6f5c8cba484\" successfully" Feb 13 15:21:53.712083 containerd[1442]: time="2025-02-13T15:21:53.711049760Z" level=info msg="StopPodSandbox for \"d96ae196292a605b5cd4d41b835a3232ce63bc7c919a65833fa2c6f5c8cba484\" returns successfully" Feb 13 15:21:53.711011 systemd[1]: run-netns-cni\x2d107e9411\x2db365\x2d3825\x2d73a9\x2db5c18865414a.mount: Deactivated successfully. 
Feb 13 15:21:53.714571 containerd[1442]: time="2025-02-13T15:21:53.714303195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8vvjv,Uid:14b2995f-bdfb-4265-9dc0-06ae16e4bb6c,Namespace:calico-system,Attempt:3,}" Feb 13 15:21:53.719110 kubelet[2621]: I0213 15:21:53.718729 2621 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76d4658126becba445517a90b6b43c46a1c22581049ca91a8e0b6be5d84a6a5b" Feb 13 15:21:53.719380 containerd[1442]: time="2025-02-13T15:21:53.719187509Z" level=info msg="StopPodSandbox for \"76d4658126becba445517a90b6b43c46a1c22581049ca91a8e0b6be5d84a6a5b\"" Feb 13 15:21:53.719380 containerd[1442]: time="2025-02-13T15:21:53.719347876Z" level=info msg="Ensure that sandbox 76d4658126becba445517a90b6b43c46a1c22581049ca91a8e0b6be5d84a6a5b in task-service has been cleanup successfully" Feb 13 15:21:53.719886 containerd[1442]: time="2025-02-13T15:21:53.719865621Z" level=info msg="TearDown network for sandbox \"76d4658126becba445517a90b6b43c46a1c22581049ca91a8e0b6be5d84a6a5b\" successfully" Feb 13 15:21:53.720017 containerd[1442]: time="2025-02-13T15:21:53.720001067Z" level=info msg="StopPodSandbox for \"76d4658126becba445517a90b6b43c46a1c22581049ca91a8e0b6be5d84a6a5b\" returns successfully" Feb 13 15:21:53.720873 containerd[1442]: time="2025-02-13T15:21:53.720531533Z" level=info msg="StopPodSandbox for \"1290b96754b10ffaddc73a37c85cf3d37997bcbba54c2c7727a2d695f71242d3\"" Feb 13 15:21:53.720873 containerd[1442]: time="2025-02-13T15:21:53.720611777Z" level=info msg="TearDown network for sandbox \"1290b96754b10ffaddc73a37c85cf3d37997bcbba54c2c7727a2d695f71242d3\" successfully" Feb 13 15:21:53.720873 containerd[1442]: time="2025-02-13T15:21:53.720621857Z" level=info msg="StopPodSandbox for \"1290b96754b10ffaddc73a37c85cf3d37997bcbba54c2c7727a2d695f71242d3\" returns successfully" Feb 13 15:21:53.721390 containerd[1442]: time="2025-02-13T15:21:53.721200965Z" level=info msg="StopPodSandbox for 
\"b21e1a8f17e86fe71cf0ca5f7bf255d3e92b462acaa3367bcccf912c971932fc\"" Feb 13 15:21:53.721643 containerd[1442]: time="2025-02-13T15:21:53.721623665Z" level=info msg="TearDown network for sandbox \"b21e1a8f17e86fe71cf0ca5f7bf255d3e92b462acaa3367bcccf912c971932fc\" successfully" Feb 13 15:21:53.721643 containerd[1442]: time="2025-02-13T15:21:53.721639666Z" level=info msg="StopPodSandbox for \"b21e1a8f17e86fe71cf0ca5f7bf255d3e92b462acaa3367bcccf912c971932fc\" returns successfully" Feb 13 15:21:53.722231 kubelet[2621]: I0213 15:21:53.721865 2621 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2fb37880992bf25b4ffdc0417195ecffe00d1f91c138c1d050e3c108ce682cc8" Feb 13 15:21:53.722468 containerd[1442]: time="2025-02-13T15:21:53.722438064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-564fc96ccb-dqvv5,Uid:de1246bc-b473-496c-be26-bb64afe860ad,Namespace:calico-apiserver,Attempt:3,}" Feb 13 15:21:53.722749 containerd[1442]: time="2025-02-13T15:21:53.722720077Z" level=info msg="StopPodSandbox for \"2fb37880992bf25b4ffdc0417195ecffe00d1f91c138c1d050e3c108ce682cc8\"" Feb 13 15:21:53.723874 containerd[1442]: time="2025-02-13T15:21:53.723703564Z" level=info msg="Ensure that sandbox 2fb37880992bf25b4ffdc0417195ecffe00d1f91c138c1d050e3c108ce682cc8 in task-service has been cleanup successfully" Feb 13 15:21:53.724638 containerd[1442]: time="2025-02-13T15:21:53.724486842Z" level=info msg="TearDown network for sandbox \"2fb37880992bf25b4ffdc0417195ecffe00d1f91c138c1d050e3c108ce682cc8\" successfully" Feb 13 15:21:53.724638 containerd[1442]: time="2025-02-13T15:21:53.724513283Z" level=info msg="StopPodSandbox for \"2fb37880992bf25b4ffdc0417195ecffe00d1f91c138c1d050e3c108ce682cc8\" returns successfully" Feb 13 15:21:53.725476 containerd[1442]: time="2025-02-13T15:21:53.725452728Z" level=info msg="StopPodSandbox for \"222fe074a6a0fa30aabf41e153b4175978e6e304dfcf93e3a4c0de8e32655c0a\"" Feb 13 15:21:53.725954 containerd[1442]: 
time="2025-02-13T15:21:53.725866507Z" level=info msg="TearDown network for sandbox \"222fe074a6a0fa30aabf41e153b4175978e6e304dfcf93e3a4c0de8e32655c0a\" successfully" Feb 13 15:21:53.725954 containerd[1442]: time="2025-02-13T15:21:53.725887588Z" level=info msg="StopPodSandbox for \"222fe074a6a0fa30aabf41e153b4175978e6e304dfcf93e3a4c0de8e32655c0a\" returns successfully" Feb 13 15:21:53.726372 containerd[1442]: time="2025-02-13T15:21:53.726322249Z" level=info msg="StopPodSandbox for \"c4baf65475d91bc2c15306315ac291509c501cd40c636754fa7bb756cbcea258\"" Feb 13 15:21:53.726966 containerd[1442]: time="2025-02-13T15:21:53.726531019Z" level=info msg="TearDown network for sandbox \"c4baf65475d91bc2c15306315ac291509c501cd40c636754fa7bb756cbcea258\" successfully" Feb 13 15:21:53.726966 containerd[1442]: time="2025-02-13T15:21:53.726546820Z" level=info msg="StopPodSandbox for \"c4baf65475d91bc2c15306315ac291509c501cd40c636754fa7bb756cbcea258\" returns successfully" Feb 13 15:21:53.727541 containerd[1442]: time="2025-02-13T15:21:53.727504226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-658db9fb4b-xcbwf,Uid:87c71f20-054e-44ba-a99e-c4fefdae6457,Namespace:calico-system,Attempt:3,}" Feb 13 15:21:53.937256 containerd[1442]: time="2025-02-13T15:21:53.937202718Z" level=error msg="Failed to destroy network for sandbox \"11929aadc81cdf1da7658e28ef4d74baafd42d9d50e9e28282ba41964a92a3a7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:53.937585 containerd[1442]: time="2025-02-13T15:21:53.937551174Z" level=error msg="encountered an error cleaning up failed sandbox \"11929aadc81cdf1da7658e28ef4d74baafd42d9d50e9e28282ba41964a92a3a7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:53.940891 containerd[1442]: time="2025-02-13T15:21:53.940822691Z" level=error msg="Failed to destroy network for sandbox \"0e00bbd7bcb14cbd456352c063bed067e90a3795d0de450c8bec1e4ae2edb640\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:53.941584 containerd[1442]: time="2025-02-13T15:21:53.941378037Z" level=error msg="encountered an error cleaning up failed sandbox \"0e00bbd7bcb14cbd456352c063bed067e90a3795d0de450c8bec1e4ae2edb640\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:53.944550 containerd[1442]: time="2025-02-13T15:21:53.944445384Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-658db9fb4b-xcbwf,Uid:87c71f20-054e-44ba-a99e-c4fefdae6457,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"11929aadc81cdf1da7658e28ef4d74baafd42d9d50e9e28282ba41964a92a3a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:53.944672 containerd[1442]: time="2025-02-13T15:21:53.944542868Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qhltz,Uid:c285582e-9191-432e-99ea-cc7fe7db7fbb,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"0e00bbd7bcb14cbd456352c063bed067e90a3795d0de450c8bec1e4ae2edb640\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Feb 13 15:21:53.944794 kubelet[2621]: E0213 15:21:53.944745 2621 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e00bbd7bcb14cbd456352c063bed067e90a3795d0de450c8bec1e4ae2edb640\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:53.945191 kubelet[2621]: E0213 15:21:53.944800 2621 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e00bbd7bcb14cbd456352c063bed067e90a3795d0de450c8bec1e4ae2edb640\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-qhltz" Feb 13 15:21:53.945191 kubelet[2621]: E0213 15:21:53.944824 2621 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e00bbd7bcb14cbd456352c063bed067e90a3795d0de450c8bec1e4ae2edb640\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-qhltz" Feb 13 15:21:53.945191 kubelet[2621]: E0213 15:21:53.944819 2621 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11929aadc81cdf1da7658e28ef4d74baafd42d9d50e9e28282ba41964a92a3a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:53.945191 kubelet[2621]: E0213 15:21:53.944860 2621 kuberuntime_sandbox.go:72] 
"Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11929aadc81cdf1da7658e28ef4d74baafd42d9d50e9e28282ba41964a92a3a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-658db9fb4b-xcbwf" Feb 13 15:21:53.945315 kubelet[2621]: E0213 15:21:53.944866 2621 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-qhltz_kube-system(c285582e-9191-432e-99ea-cc7fe7db7fbb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-qhltz_kube-system(c285582e-9191-432e-99ea-cc7fe7db7fbb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0e00bbd7bcb14cbd456352c063bed067e90a3795d0de450c8bec1e4ae2edb640\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-qhltz" podUID="c285582e-9191-432e-99ea-cc7fe7db7fbb" Feb 13 15:21:53.945315 kubelet[2621]: E0213 15:21:53.944879 2621 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11929aadc81cdf1da7658e28ef4d74baafd42d9d50e9e28282ba41964a92a3a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-658db9fb4b-xcbwf" Feb 13 15:21:53.945315 kubelet[2621]: E0213 15:21:53.944952 2621 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-658db9fb4b-xcbwf_calico-system(87c71f20-054e-44ba-a99e-c4fefdae6457)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-658db9fb4b-xcbwf_calico-system(87c71f20-054e-44ba-a99e-c4fefdae6457)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"11929aadc81cdf1da7658e28ef4d74baafd42d9d50e9e28282ba41964a92a3a7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-658db9fb4b-xcbwf" podUID="87c71f20-054e-44ba-a99e-c4fefdae6457" Feb 13 15:21:53.958204 containerd[1442]: time="2025-02-13T15:21:53.958133357Z" level=error msg="Failed to destroy network for sandbox \"8db7eb7e96e9c3fedc10436101d0f44d3179d411a6afb2c8436bc1e4908d2830\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:53.958503 containerd[1442]: time="2025-02-13T15:21:53.958449972Z" level=error msg="encountered an error cleaning up failed sandbox \"8db7eb7e96e9c3fedc10436101d0f44d3179d411a6afb2c8436bc1e4908d2830\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:53.958545 containerd[1442]: time="2025-02-13T15:21:53.958512735Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8vvjv,Uid:14b2995f-bdfb-4265-9dc0-06ae16e4bb6c,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"8db7eb7e96e9c3fedc10436101d0f44d3179d411a6afb2c8436bc1e4908d2830\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:53.959151 
kubelet[2621]: E0213 15:21:53.958776 2621 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8db7eb7e96e9c3fedc10436101d0f44d3179d411a6afb2c8436bc1e4908d2830\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:53.959151 kubelet[2621]: E0213 15:21:53.958834 2621 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8db7eb7e96e9c3fedc10436101d0f44d3179d411a6afb2c8436bc1e4908d2830\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8vvjv" Feb 13 15:21:53.959151 kubelet[2621]: E0213 15:21:53.958902 2621 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8db7eb7e96e9c3fedc10436101d0f44d3179d411a6afb2c8436bc1e4908d2830\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8vvjv" Feb 13 15:21:53.959281 kubelet[2621]: E0213 15:21:53.958942 2621 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-8vvjv_calico-system(14b2995f-bdfb-4265-9dc0-06ae16e4bb6c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-8vvjv_calico-system(14b2995f-bdfb-4265-9dc0-06ae16e4bb6c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8db7eb7e96e9c3fedc10436101d0f44d3179d411a6afb2c8436bc1e4908d2830\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8vvjv" podUID="14b2995f-bdfb-4265-9dc0-06ae16e4bb6c" Feb 13 15:21:53.971926 containerd[1442]: time="2025-02-13T15:21:53.971829971Z" level=error msg="Failed to destroy network for sandbox \"d3f7525b6c6753b8ce43153e40018478056e9589dd30a82916f36bcfc6177342\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:53.973463 containerd[1442]: time="2025-02-13T15:21:53.973387605Z" level=error msg="encountered an error cleaning up failed sandbox \"d3f7525b6c6753b8ce43153e40018478056e9589dd30a82916f36bcfc6177342\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:53.973463 containerd[1442]: time="2025-02-13T15:21:53.973452129Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-564fc96ccb-fw9ln,Uid:84c3b521-a067-4602-ad8f-cbca4249dad7,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"d3f7525b6c6753b8ce43153e40018478056e9589dd30a82916f36bcfc6177342\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:53.973658 kubelet[2621]: E0213 15:21:53.973631 2621 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3f7525b6c6753b8ce43153e40018478056e9589dd30a82916f36bcfc6177342\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:53.973724 kubelet[2621]: E0213 15:21:53.973679 2621 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3f7525b6c6753b8ce43153e40018478056e9589dd30a82916f36bcfc6177342\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-564fc96ccb-fw9ln" Feb 13 15:21:53.973724 kubelet[2621]: E0213 15:21:53.973699 2621 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3f7525b6c6753b8ce43153e40018478056e9589dd30a82916f36bcfc6177342\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-564fc96ccb-fw9ln" Feb 13 15:21:53.973778 kubelet[2621]: E0213 15:21:53.973743 2621 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-564fc96ccb-fw9ln_calico-apiserver(84c3b521-a067-4602-ad8f-cbca4249dad7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-564fc96ccb-fw9ln_calico-apiserver(84c3b521-a067-4602-ad8f-cbca4249dad7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d3f7525b6c6753b8ce43153e40018478056e9589dd30a82916f36bcfc6177342\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-564fc96ccb-fw9ln" podUID="84c3b521-a067-4602-ad8f-cbca4249dad7" Feb 13 15:21:53.974272 containerd[1442]: time="2025-02-13T15:21:53.974168283Z" 
level=error msg="Failed to destroy network for sandbox \"1bc7ba7f91f5547b00e7049a50ddcf60d573692ca1d4d6b3c1d4222df799d5a0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:53.974559 containerd[1442]: time="2025-02-13T15:21:53.974533540Z" level=error msg="encountered an error cleaning up failed sandbox \"1bc7ba7f91f5547b00e7049a50ddcf60d573692ca1d4d6b3c1d4222df799d5a0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:53.974665 containerd[1442]: time="2025-02-13T15:21:53.974645026Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-564fc96ccb-dqvv5,Uid:de1246bc-b473-496c-be26-bb64afe860ad,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"1bc7ba7f91f5547b00e7049a50ddcf60d573692ca1d4d6b3c1d4222df799d5a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:53.974977 kubelet[2621]: E0213 15:21:53.974849 2621 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1bc7ba7f91f5547b00e7049a50ddcf60d573692ca1d4d6b3c1d4222df799d5a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:53.974977 kubelet[2621]: E0213 15:21:53.974886 2621 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"1bc7ba7f91f5547b00e7049a50ddcf60d573692ca1d4d6b3c1d4222df799d5a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-564fc96ccb-dqvv5" Feb 13 15:21:53.974977 kubelet[2621]: E0213 15:21:53.974903 2621 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1bc7ba7f91f5547b00e7049a50ddcf60d573692ca1d4d6b3c1d4222df799d5a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-564fc96ccb-dqvv5" Feb 13 15:21:53.975166 kubelet[2621]: E0213 15:21:53.974940 2621 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-564fc96ccb-dqvv5_calico-apiserver(de1246bc-b473-496c-be26-bb64afe860ad)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-564fc96ccb-dqvv5_calico-apiserver(de1246bc-b473-496c-be26-bb64afe860ad)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1bc7ba7f91f5547b00e7049a50ddcf60d573692ca1d4d6b3c1d4222df799d5a0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-564fc96ccb-dqvv5" podUID="de1246bc-b473-496c-be26-bb64afe860ad" Feb 13 15:21:53.975222 containerd[1442]: time="2025-02-13T15:21:53.974942280Z" level=error msg="Failed to destroy network for sandbox \"c9da068fa3f27a54a0b2e673273efa47355288199379af100a264051fdd7317b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Feb 13 15:21:53.975893 containerd[1442]: time="2025-02-13T15:21:53.975792880Z" level=error msg="encountered an error cleaning up failed sandbox \"c9da068fa3f27a54a0b2e673273efa47355288199379af100a264051fdd7317b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:53.976094 containerd[1442]: time="2025-02-13T15:21:53.975948608Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2hpnn,Uid:deed100f-387a-4ac5-9252-5efbd4c9fe2b,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"c9da068fa3f27a54a0b2e673273efa47355288199379af100a264051fdd7317b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:53.976335 kubelet[2621]: E0213 15:21:53.976225 2621 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9da068fa3f27a54a0b2e673273efa47355288199379af100a264051fdd7317b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:53.976335 kubelet[2621]: E0213 15:21:53.976259 2621 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9da068fa3f27a54a0b2e673273efa47355288199379af100a264051fdd7317b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-2hpnn" Feb 13 15:21:53.976335 
kubelet[2621]: E0213 15:21:53.976280 2621 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9da068fa3f27a54a0b2e673273efa47355288199379af100a264051fdd7317b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-2hpnn" Feb 13 15:21:53.976437 kubelet[2621]: E0213 15:21:53.976306 2621 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-2hpnn_kube-system(deed100f-387a-4ac5-9252-5efbd4c9fe2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-2hpnn_kube-system(deed100f-387a-4ac5-9252-5efbd4c9fe2b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c9da068fa3f27a54a0b2e673273efa47355288199379af100a264051fdd7317b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-2hpnn" podUID="deed100f-387a-4ac5-9252-5efbd4c9fe2b" Feb 13 15:21:54.465143 systemd[1]: run-netns-cni\x2d8d2e0054\x2d4069\x2d1594\x2d61af\x2d89f236882321.mount: Deactivated successfully. Feb 13 15:21:54.465239 systemd[1]: run-netns-cni\x2d7df4f838\x2d49ad\x2dd57f\x2d450f\x2d9276e8a7103c.mount: Deactivated successfully. Feb 13 15:21:54.712583 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1769280334.mount: Deactivated successfully. 
Feb 13 15:21:54.727727 kubelet[2621]: I0213 15:21:54.727606 2621 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8db7eb7e96e9c3fedc10436101d0f44d3179d411a6afb2c8436bc1e4908d2830" Feb 13 15:21:54.732252 containerd[1442]: time="2025-02-13T15:21:54.731262953Z" level=info msg="StopPodSandbox for \"8db7eb7e96e9c3fedc10436101d0f44d3179d411a6afb2c8436bc1e4908d2830\"" Feb 13 15:21:54.732252 containerd[1442]: time="2025-02-13T15:21:54.731517325Z" level=info msg="Ensure that sandbox 8db7eb7e96e9c3fedc10436101d0f44d3179d411a6afb2c8436bc1e4908d2830 in task-service has been cleanup successfully" Feb 13 15:21:54.734056 systemd[1]: run-netns-cni\x2d1cfaa9f4\x2d8728\x2d42db\x2d5725\x2d917d4b68de4d.mount: Deactivated successfully. Feb 13 15:21:54.734864 containerd[1442]: time="2025-02-13T15:21:54.734816238Z" level=info msg="TearDown network for sandbox \"8db7eb7e96e9c3fedc10436101d0f44d3179d411a6afb2c8436bc1e4908d2830\" successfully" Feb 13 15:21:54.734864 containerd[1442]: time="2025-02-13T15:21:54.734856840Z" level=info msg="StopPodSandbox for \"8db7eb7e96e9c3fedc10436101d0f44d3179d411a6afb2c8436bc1e4908d2830\" returns successfully" Feb 13 15:21:54.735065 kubelet[2621]: I0213 15:21:54.734990 2621 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9da068fa3f27a54a0b2e673273efa47355288199379af100a264051fdd7317b" Feb 13 15:21:54.735526 containerd[1442]: time="2025-02-13T15:21:54.735242658Z" level=info msg="StopPodSandbox for \"f2a4be37ee45e0a2e04d64278828b4f8471e3f621e93d94fac9f67321674f84b\"" Feb 13 15:21:54.735526 containerd[1442]: time="2025-02-13T15:21:54.735347463Z" level=info msg="TearDown network for sandbox \"f2a4be37ee45e0a2e04d64278828b4f8471e3f621e93d94fac9f67321674f84b\" successfully" Feb 13 15:21:54.735526 containerd[1442]: time="2025-02-13T15:21:54.735360023Z" level=info msg="StopPodSandbox for \"f2a4be37ee45e0a2e04d64278828b4f8471e3f621e93d94fac9f67321674f84b\" returns successfully" Feb 13 
15:21:54.735627 containerd[1442]: time="2025-02-13T15:21:54.735528711Z" level=info msg="StopPodSandbox for \"c9da068fa3f27a54a0b2e673273efa47355288199379af100a264051fdd7317b\"" Feb 13 15:21:54.735951 containerd[1442]: time="2025-02-13T15:21:54.735928490Z" level=info msg="StopPodSandbox for \"5fc5ff480b0ac02e88a3e59e79abeae223a5c6ef1c9e47fd64658d555c0f5c92\"" Feb 13 15:21:54.736127 containerd[1442]: time="2025-02-13T15:21:54.736108218Z" level=info msg="TearDown network for sandbox \"5fc5ff480b0ac02e88a3e59e79abeae223a5c6ef1c9e47fd64658d555c0f5c92\" successfully" Feb 13 15:21:54.736257 containerd[1442]: time="2025-02-13T15:21:54.736194302Z" level=info msg="StopPodSandbox for \"5fc5ff480b0ac02e88a3e59e79abeae223a5c6ef1c9e47fd64658d555c0f5c92\" returns successfully" Feb 13 15:21:54.736525 containerd[1442]: time="2025-02-13T15:21:54.736371430Z" level=info msg="Ensure that sandbox c9da068fa3f27a54a0b2e673273efa47355288199379af100a264051fdd7317b in task-service has been cleanup successfully" Feb 13 15:21:54.737617 containerd[1442]: time="2025-02-13T15:21:54.736697885Z" level=info msg="StopPodSandbox for \"d96ae196292a605b5cd4d41b835a3232ce63bc7c919a65833fa2c6f5c8cba484\"" Feb 13 15:21:54.737617 containerd[1442]: time="2025-02-13T15:21:54.736784809Z" level=info msg="TearDown network for sandbox \"d96ae196292a605b5cd4d41b835a3232ce63bc7c919a65833fa2c6f5c8cba484\" successfully" Feb 13 15:21:54.737617 containerd[1442]: time="2025-02-13T15:21:54.736797570Z" level=info msg="StopPodSandbox for \"d96ae196292a605b5cd4d41b835a3232ce63bc7c919a65833fa2c6f5c8cba484\" returns successfully" Feb 13 15:21:54.738728 containerd[1442]: time="2025-02-13T15:21:54.737859299Z" level=info msg="TearDown network for sandbox \"c9da068fa3f27a54a0b2e673273efa47355288199379af100a264051fdd7317b\" successfully" Feb 13 15:21:54.738728 containerd[1442]: time="2025-02-13T15:21:54.737887541Z" level=info msg="StopPodSandbox for \"c9da068fa3f27a54a0b2e673273efa47355288199379af100a264051fdd7317b\" returns 
successfully" Feb 13 15:21:54.738728 containerd[1442]: time="2025-02-13T15:21:54.738013586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8vvjv,Uid:14b2995f-bdfb-4265-9dc0-06ae16e4bb6c,Namespace:calico-system,Attempt:4,}" Feb 13 15:21:54.739808 containerd[1442]: time="2025-02-13T15:21:54.739006072Z" level=info msg="StopPodSandbox for \"5a990fb5682c636e25fb3a50d8fe0024788508fe708016321710033a0552c448\"" Feb 13 15:21:54.739584 systemd[1]: run-netns-cni\x2dbdde581c\x2d176a\x2da3d3\x2ddb0e\x2d440d11948860.mount: Deactivated successfully. Feb 13 15:21:54.742087 kubelet[2621]: I0213 15:21:54.739275 2621 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3f7525b6c6753b8ce43153e40018478056e9589dd30a82916f36bcfc6177342" Feb 13 15:21:54.742177 containerd[1442]: time="2025-02-13T15:21:54.739734026Z" level=info msg="StopPodSandbox for \"d3f7525b6c6753b8ce43153e40018478056e9589dd30a82916f36bcfc6177342\"" Feb 13 15:21:54.742177 containerd[1442]: time="2025-02-13T15:21:54.740895120Z" level=info msg="Ensure that sandbox d3f7525b6c6753b8ce43153e40018478056e9589dd30a82916f36bcfc6177342 in task-service has been cleanup successfully" Feb 13 15:21:54.742177 containerd[1442]: time="2025-02-13T15:21:54.741152212Z" level=info msg="TearDown network for sandbox \"d3f7525b6c6753b8ce43153e40018478056e9589dd30a82916f36bcfc6177342\" successfully" Feb 13 15:21:54.742177 containerd[1442]: time="2025-02-13T15:21:54.741173053Z" level=info msg="StopPodSandbox for \"d3f7525b6c6753b8ce43153e40018478056e9589dd30a82916f36bcfc6177342\" returns successfully" Feb 13 15:21:54.742294 containerd[1442]: time="2025-02-13T15:21:54.742198181Z" level=info msg="StopPodSandbox for \"74cd138a61eecc2e6903a038baafdc496135da9e7c240b4fd5953c85a00d83ec\"" Feb 13 15:21:54.742294 containerd[1442]: time="2025-02-13T15:21:54.742289665Z" level=info msg="TearDown network for sandbox \"74cd138a61eecc2e6903a038baafdc496135da9e7c240b4fd5953c85a00d83ec\" successfully" 
Feb 13 15:21:54.742352 containerd[1442]: time="2025-02-13T15:21:54.742299465Z" level=info msg="StopPodSandbox for \"74cd138a61eecc2e6903a038baafdc496135da9e7c240b4fd5953c85a00d83ec\" returns successfully" Feb 13 15:21:54.748078 containerd[1442]: time="2025-02-13T15:21:54.742669123Z" level=info msg="StopPodSandbox for \"a3d2c5311b68c754ed4f52b532c1dbe545a75c82f3b62899ff9015ee5110f3ad\"" Feb 13 15:21:54.748078 containerd[1442]: time="2025-02-13T15:21:54.742751726Z" level=info msg="TearDown network for sandbox \"a3d2c5311b68c754ed4f52b532c1dbe545a75c82f3b62899ff9015ee5110f3ad\" successfully" Feb 13 15:21:54.748078 containerd[1442]: time="2025-02-13T15:21:54.742761847Z" level=info msg="StopPodSandbox for \"a3d2c5311b68c754ed4f52b532c1dbe545a75c82f3b62899ff9015ee5110f3ad\" returns successfully" Feb 13 15:21:54.748078 containerd[1442]: time="2025-02-13T15:21:54.744912387Z" level=info msg="StopPodSandbox for \"892160d9b3fdcfef5dfbb8a15226ee03d30c00255d1c8fffacad07d070348325\"" Feb 13 15:21:54.748078 containerd[1442]: time="2025-02-13T15:21:54.745014671Z" level=info msg="TearDown network for sandbox \"892160d9b3fdcfef5dfbb8a15226ee03d30c00255d1c8fffacad07d070348325\" successfully" Feb 13 15:21:54.748078 containerd[1442]: time="2025-02-13T15:21:54.745040713Z" level=info msg="StopPodSandbox for \"892160d9b3fdcfef5dfbb8a15226ee03d30c00255d1c8fffacad07d070348325\" returns successfully" Feb 13 15:21:54.748078 containerd[1442]: time="2025-02-13T15:21:54.745199000Z" level=info msg="StopPodSandbox for \"0e00bbd7bcb14cbd456352c063bed067e90a3795d0de450c8bec1e4ae2edb640\"" Feb 13 15:21:54.748078 containerd[1442]: time="2025-02-13T15:21:54.745480253Z" level=info msg="TearDown network for sandbox \"5a990fb5682c636e25fb3a50d8fe0024788508fe708016321710033a0552c448\" successfully" Feb 13 15:21:54.748078 containerd[1442]: time="2025-02-13T15:21:54.745501094Z" level=info msg="StopPodSandbox for \"5a990fb5682c636e25fb3a50d8fe0024788508fe708016321710033a0552c448\" returns successfully" Feb 13 
15:21:54.748078 containerd[1442]: time="2025-02-13T15:21:54.745610739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-564fc96ccb-fw9ln,Uid:84c3b521-a067-4602-ad8f-cbca4249dad7,Namespace:calico-apiserver,Attempt:4,}" Feb 13 15:21:54.748078 containerd[1442]: time="2025-02-13T15:21:54.745867951Z" level=info msg="StopPodSandbox for \"ec5f837bdef6209b21d66e9a11f472ff6e4172b2c0d38f2552319bfe446e9725\"" Feb 13 15:21:54.748078 containerd[1442]: time="2025-02-13T15:21:54.745945715Z" level=info msg="TearDown network for sandbox \"ec5f837bdef6209b21d66e9a11f472ff6e4172b2c0d38f2552319bfe446e9725\" successfully" Feb 13 15:21:54.748078 containerd[1442]: time="2025-02-13T15:21:54.745955395Z" level=info msg="StopPodSandbox for \"ec5f837bdef6209b21d66e9a11f472ff6e4172b2c0d38f2552319bfe446e9725\" returns successfully" Feb 13 15:21:54.748078 containerd[1442]: time="2025-02-13T15:21:54.746310892Z" level=info msg="StopPodSandbox for \"f77271c961d7b4a7b309d3aeaa83ef8ef8c4ac2d6b4080280f4c6b5d24455417\"" Feb 13 15:21:54.748078 containerd[1442]: time="2025-02-13T15:21:54.746530382Z" level=info msg="TearDown network for sandbox \"f77271c961d7b4a7b309d3aeaa83ef8ef8c4ac2d6b4080280f4c6b5d24455417\" successfully" Feb 13 15:21:54.748078 containerd[1442]: time="2025-02-13T15:21:54.746547143Z" level=info msg="StopPodSandbox for \"f77271c961d7b4a7b309d3aeaa83ef8ef8c4ac2d6b4080280f4c6b5d24455417\" returns successfully" Feb 13 15:21:54.748078 containerd[1442]: time="2025-02-13T15:21:54.747140450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2hpnn,Uid:deed100f-387a-4ac5-9252-5efbd4c9fe2b,Namespace:kube-system,Attempt:4,}" Feb 13 15:21:54.744306 systemd[1]: run-netns-cni\x2d8a924f40\x2daa51\x2d55db\x2d243e\x2d2939b276b90d.mount: Deactivated successfully. 
Feb 13 15:21:54.748669 kubelet[2621]: I0213 15:21:54.742946 2621 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e00bbd7bcb14cbd456352c063bed067e90a3795d0de450c8bec1e4ae2edb640" Feb 13 15:21:54.748669 kubelet[2621]: E0213 15:21:54.746745 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:21:54.748669 kubelet[2621]: I0213 15:21:54.747524 2621 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11929aadc81cdf1da7658e28ef4d74baafd42d9d50e9e28282ba41964a92a3a7" Feb 13 15:21:54.748834 containerd[1442]: time="2025-02-13T15:21:54.748796407Z" level=info msg="StopPodSandbox for \"11929aadc81cdf1da7658e28ef4d74baafd42d9d50e9e28282ba41964a92a3a7\"" Feb 13 15:21:54.749010 containerd[1442]: time="2025-02-13T15:21:54.748981856Z" level=info msg="Ensure that sandbox 11929aadc81cdf1da7658e28ef4d74baafd42d9d50e9e28282ba41964a92a3a7 in task-service has been cleanup successfully" Feb 13 15:21:54.749237 containerd[1442]: time="2025-02-13T15:21:54.749208746Z" level=info msg="TearDown network for sandbox \"11929aadc81cdf1da7658e28ef4d74baafd42d9d50e9e28282ba41964a92a3a7\" successfully" Feb 13 15:21:54.749237 containerd[1442]: time="2025-02-13T15:21:54.749229107Z" level=info msg="StopPodSandbox for \"11929aadc81cdf1da7658e28ef4d74baafd42d9d50e9e28282ba41964a92a3a7\" returns successfully" Feb 13 15:21:54.749857 containerd[1442]: time="2025-02-13T15:21:54.749831855Z" level=info msg="StopPodSandbox for \"2fb37880992bf25b4ffdc0417195ecffe00d1f91c138c1d050e3c108ce682cc8\"" Feb 13 15:21:54.750265 containerd[1442]: time="2025-02-13T15:21:54.750161990Z" level=info msg="TearDown network for sandbox \"2fb37880992bf25b4ffdc0417195ecffe00d1f91c138c1d050e3c108ce682cc8\" successfully" Feb 13 15:21:54.750265 containerd[1442]: time="2025-02-13T15:21:54.750183111Z" level=info msg="StopPodSandbox for 
\"2fb37880992bf25b4ffdc0417195ecffe00d1f91c138c1d050e3c108ce682cc8\" returns successfully" Feb 13 15:21:54.750744 containerd[1442]: time="2025-02-13T15:21:54.750599291Z" level=info msg="StopPodSandbox for \"222fe074a6a0fa30aabf41e153b4175978e6e304dfcf93e3a4c0de8e32655c0a\"" Feb 13 15:21:54.750744 containerd[1442]: time="2025-02-13T15:21:54.750684415Z" level=info msg="TearDown network for sandbox \"222fe074a6a0fa30aabf41e153b4175978e6e304dfcf93e3a4c0de8e32655c0a\" successfully" Feb 13 15:21:54.750744 containerd[1442]: time="2025-02-13T15:21:54.750695535Z" level=info msg="StopPodSandbox for \"222fe074a6a0fa30aabf41e153b4175978e6e304dfcf93e3a4c0de8e32655c0a\" returns successfully" Feb 13 15:21:54.751992 containerd[1442]: time="2025-02-13T15:21:54.751520974Z" level=info msg="StopPodSandbox for \"c4baf65475d91bc2c15306315ac291509c501cd40c636754fa7bb756cbcea258\"" Feb 13 15:21:54.751992 containerd[1442]: time="2025-02-13T15:21:54.751612138Z" level=info msg="TearDown network for sandbox \"c4baf65475d91bc2c15306315ac291509c501cd40c636754fa7bb756cbcea258\" successfully" Feb 13 15:21:54.751992 containerd[1442]: time="2025-02-13T15:21:54.751623658Z" level=info msg="StopPodSandbox for \"c4baf65475d91bc2c15306315ac291509c501cd40c636754fa7bb756cbcea258\" returns successfully" Feb 13 15:21:54.752607 containerd[1442]: time="2025-02-13T15:21:54.752545941Z" level=info msg="Ensure that sandbox 0e00bbd7bcb14cbd456352c063bed067e90a3795d0de450c8bec1e4ae2edb640 in task-service has been cleanup successfully" Feb 13 15:21:54.752991 containerd[1442]: time="2025-02-13T15:21:54.752957640Z" level=info msg="TearDown network for sandbox \"0e00bbd7bcb14cbd456352c063bed067e90a3795d0de450c8bec1e4ae2edb640\" successfully" Feb 13 15:21:54.753052 containerd[1442]: time="2025-02-13T15:21:54.752994002Z" level=info msg="StopPodSandbox for \"0e00bbd7bcb14cbd456352c063bed067e90a3795d0de450c8bec1e4ae2edb640\" returns successfully" Feb 13 15:21:54.753834 containerd[1442]: 
time="2025-02-13T15:21:54.753119208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-658db9fb4b-xcbwf,Uid:87c71f20-054e-44ba-a99e-c4fefdae6457,Namespace:calico-system,Attempt:4,}" Feb 13 15:21:54.753909 kubelet[2621]: I0213 15:21:54.753419 2621 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1bc7ba7f91f5547b00e7049a50ddcf60d573692ca1d4d6b3c1d4222df799d5a0" Feb 13 15:21:54.754006 containerd[1442]: time="2025-02-13T15:21:54.753940806Z" level=info msg="StopPodSandbox for \"1bc7ba7f91f5547b00e7049a50ddcf60d573692ca1d4d6b3c1d4222df799d5a0\"" Feb 13 15:21:54.754270 containerd[1442]: time="2025-02-13T15:21:54.754237860Z" level=info msg="Ensure that sandbox 1bc7ba7f91f5547b00e7049a50ddcf60d573692ca1d4d6b3c1d4222df799d5a0 in task-service has been cleanup successfully" Feb 13 15:21:54.754499 containerd[1442]: time="2025-02-13T15:21:54.754468630Z" level=info msg="TearDown network for sandbox \"1bc7ba7f91f5547b00e7049a50ddcf60d573692ca1d4d6b3c1d4222df799d5a0\" successfully" Feb 13 15:21:54.754499 containerd[1442]: time="2025-02-13T15:21:54.754491872Z" level=info msg="StopPodSandbox for \"1bc7ba7f91f5547b00e7049a50ddcf60d573692ca1d4d6b3c1d4222df799d5a0\" returns successfully" Feb 13 15:21:54.754716 containerd[1442]: time="2025-02-13T15:21:54.754688401Z" level=info msg="StopPodSandbox for \"f0b420624c0ef4de8293125d4aa3764bbbc9ad891299e4836ec706ef73b01bc2\"" Feb 13 15:21:54.754946 containerd[1442]: time="2025-02-13T15:21:54.754927812Z" level=info msg="TearDown network for sandbox \"f0b420624c0ef4de8293125d4aa3764bbbc9ad891299e4836ec706ef73b01bc2\" successfully" Feb 13 15:21:54.755067 containerd[1442]: time="2025-02-13T15:21:54.755049257Z" level=info msg="StopPodSandbox for \"f0b420624c0ef4de8293125d4aa3764bbbc9ad891299e4836ec706ef73b01bc2\" returns successfully" Feb 13 15:21:54.755331 containerd[1442]: time="2025-02-13T15:21:54.754934332Z" level=info msg="StopPodSandbox for 
\"76d4658126becba445517a90b6b43c46a1c22581049ca91a8e0b6be5d84a6a5b\"" Feb 13 15:21:54.755623 containerd[1442]: time="2025-02-13T15:21:54.755450476Z" level=info msg="StopPodSandbox for \"068df4c4fbceb251808403da7d7f70417a6f0acfb26bbe7e26797b860d1eeaef\"" Feb 13 15:21:54.755880 containerd[1442]: time="2025-02-13T15:21:54.755651685Z" level=info msg="TearDown network for sandbox \"068df4c4fbceb251808403da7d7f70417a6f0acfb26bbe7e26797b860d1eeaef\" successfully" Feb 13 15:21:54.755880 containerd[1442]: time="2025-02-13T15:21:54.755662966Z" level=info msg="StopPodSandbox for \"068df4c4fbceb251808403da7d7f70417a6f0acfb26bbe7e26797b860d1eeaef\" returns successfully" Feb 13 15:21:54.756057 containerd[1442]: time="2025-02-13T15:21:54.755937419Z" level=info msg="StopPodSandbox for \"74aa2dc63f90f0fe66425516e5fd4cd00e009a8fcb748a38adc86fdedb44b077\"" Feb 13 15:21:54.756057 containerd[1442]: time="2025-02-13T15:21:54.755977581Z" level=info msg="TearDown network for sandbox \"76d4658126becba445517a90b6b43c46a1c22581049ca91a8e0b6be5d84a6a5b\" successfully" Feb 13 15:21:54.756057 containerd[1442]: time="2025-02-13T15:21:54.755997101Z" level=info msg="StopPodSandbox for \"76d4658126becba445517a90b6b43c46a1c22581049ca91a8e0b6be5d84a6a5b\" returns successfully" Feb 13 15:21:54.756169 containerd[1442]: time="2025-02-13T15:21:54.756106106Z" level=info msg="TearDown network for sandbox \"74aa2dc63f90f0fe66425516e5fd4cd00e009a8fcb748a38adc86fdedb44b077\" successfully" Feb 13 15:21:54.756169 containerd[1442]: time="2025-02-13T15:21:54.756140308Z" level=info msg="StopPodSandbox for \"74aa2dc63f90f0fe66425516e5fd4cd00e009a8fcb748a38adc86fdedb44b077\" returns successfully" Feb 13 15:21:54.757048 containerd[1442]: time="2025-02-13T15:21:54.756671493Z" level=info msg="StopPodSandbox for \"1290b96754b10ffaddc73a37c85cf3d37997bcbba54c2c7727a2d695f71242d3\"" Feb 13 15:21:54.757048 containerd[1442]: time="2025-02-13T15:21:54.756925905Z" level=info msg="TearDown network for sandbox 
\"1290b96754b10ffaddc73a37c85cf3d37997bcbba54c2c7727a2d695f71242d3\" successfully" Feb 13 15:21:54.757048 containerd[1442]: time="2025-02-13T15:21:54.756939705Z" level=info msg="StopPodSandbox for \"1290b96754b10ffaddc73a37c85cf3d37997bcbba54c2c7727a2d695f71242d3\" returns successfully" Feb 13 15:21:54.757494 containerd[1442]: time="2025-02-13T15:21:54.757322803Z" level=info msg="StopPodSandbox for \"b21e1a8f17e86fe71cf0ca5f7bf255d3e92b462acaa3367bcccf912c971932fc\"" Feb 13 15:21:54.757494 containerd[1442]: time="2025-02-13T15:21:54.757414287Z" level=info msg="TearDown network for sandbox \"b21e1a8f17e86fe71cf0ca5f7bf255d3e92b462acaa3367bcccf912c971932fc\" successfully" Feb 13 15:21:54.757494 containerd[1442]: time="2025-02-13T15:21:54.757425208Z" level=info msg="StopPodSandbox for \"b21e1a8f17e86fe71cf0ca5f7bf255d3e92b462acaa3367bcccf912c971932fc\" returns successfully" Feb 13 15:21:54.757637 kubelet[2621]: E0213 15:21:54.757425 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:21:54.760064 containerd[1442]: time="2025-02-13T15:21:54.759829199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-564fc96ccb-dqvv5,Uid:de1246bc-b473-496c-be26-bb64afe860ad,Namespace:calico-apiserver,Attempt:4,}" Feb 13 15:21:54.760234 containerd[1442]: time="2025-02-13T15:21:54.759851080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qhltz,Uid:c285582e-9191-432e-99ea-cc7fe7db7fbb,Namespace:kube-system,Attempt:4,}" Feb 13 15:21:54.922271 containerd[1442]: time="2025-02-13T15:21:54.922220540Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:21:54.950509 containerd[1442]: time="2025-02-13T15:21:54.950389248Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active 
requests=0, bytes read=137671762" Feb 13 15:21:54.964318 containerd[1442]: time="2025-02-13T15:21:54.964259692Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:21:54.994647 containerd[1442]: time="2025-02-13T15:21:54.994471695Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:21:54.998344 containerd[1442]: time="2025-02-13T15:21:54.998287312Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 4.511819176s" Feb 13 15:21:54.998344 containerd[1442]: time="2025-02-13T15:21:54.998340994Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Feb 13 15:21:55.019964 containerd[1442]: time="2025-02-13T15:21:55.019918612Z" level=info msg="CreateContainer within sandbox \"c875a564d6b96d77ac8b4ddf50ea95930a6ff3c37798b08421eeb3727e921e06\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 15:21:55.046252 containerd[1442]: time="2025-02-13T15:21:55.046096235Z" level=info msg="CreateContainer within sandbox \"c875a564d6b96d77ac8b4ddf50ea95930a6ff3c37798b08421eeb3727e921e06\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"4e857b983729e6b0b38481830acb41fbeb933136e69ced02d937990240074848\"" Feb 13 15:21:55.047193 containerd[1442]: time="2025-02-13T15:21:55.047112841Z" level=info msg="StartContainer for 
\"4e857b983729e6b0b38481830acb41fbeb933136e69ced02d937990240074848\"" Feb 13 15:21:55.073399 containerd[1442]: time="2025-02-13T15:21:55.073346345Z" level=error msg="Failed to destroy network for sandbox \"47226915c1750f2dcd4281dff0c1822a4cd03cdd3e0248696294da5d0554aa87\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:55.074133 containerd[1442]: time="2025-02-13T15:21:55.074053737Z" level=error msg="encountered an error cleaning up failed sandbox \"47226915c1750f2dcd4281dff0c1822a4cd03cdd3e0248696294da5d0554aa87\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:55.074313 containerd[1442]: time="2025-02-13T15:21:55.074118220Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-564fc96ccb-fw9ln,Uid:84c3b521-a067-4602-ad8f-cbca4249dad7,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"47226915c1750f2dcd4281dff0c1822a4cd03cdd3e0248696294da5d0554aa87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:55.074778 kubelet[2621]: E0213 15:21:55.074708 2621 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47226915c1750f2dcd4281dff0c1822a4cd03cdd3e0248696294da5d0554aa87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:55.074866 kubelet[2621]: E0213 15:21:55.074805 2621 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47226915c1750f2dcd4281dff0c1822a4cd03cdd3e0248696294da5d0554aa87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-564fc96ccb-fw9ln" Feb 13 15:21:55.074866 kubelet[2621]: E0213 15:21:55.074827 2621 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47226915c1750f2dcd4281dff0c1822a4cd03cdd3e0248696294da5d0554aa87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-564fc96ccb-fw9ln" Feb 13 15:21:55.074914 kubelet[2621]: E0213 15:21:55.074869 2621 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-564fc96ccb-fw9ln_calico-apiserver(84c3b521-a067-4602-ad8f-cbca4249dad7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-564fc96ccb-fw9ln_calico-apiserver(84c3b521-a067-4602-ad8f-cbca4249dad7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"47226915c1750f2dcd4281dff0c1822a4cd03cdd3e0248696294da5d0554aa87\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-564fc96ccb-fw9ln" podUID="84c3b521-a067-4602-ad8f-cbca4249dad7" Feb 13 15:21:55.088326 containerd[1442]: time="2025-02-13T15:21:55.088251859Z" level=error msg="Failed to destroy network for sandbox \"d04a99dec6ccf9c7326d2d50b6de346e356bee2d6457b2d5c9680fea9a5adb7c\"" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:55.089600 containerd[1442]: time="2025-02-13T15:21:55.089518436Z" level=error msg="encountered an error cleaning up failed sandbox \"d04a99dec6ccf9c7326d2d50b6de346e356bee2d6457b2d5c9680fea9a5adb7c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:55.089713 containerd[1442]: time="2025-02-13T15:21:55.089686363Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8vvjv,Uid:14b2995f-bdfb-4265-9dc0-06ae16e4bb6c,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"d04a99dec6ccf9c7326d2d50b6de346e356bee2d6457b2d5c9680fea9a5adb7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:55.090043 kubelet[2621]: E0213 15:21:55.089988 2621 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d04a99dec6ccf9c7326d2d50b6de346e356bee2d6457b2d5c9680fea9a5adb7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:55.090103 kubelet[2621]: E0213 15:21:55.090058 2621 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d04a99dec6ccf9c7326d2d50b6de346e356bee2d6457b2d5c9680fea9a5adb7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8vvjv" Feb 13 15:21:55.090103 kubelet[2621]: E0213 15:21:55.090081 2621 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d04a99dec6ccf9c7326d2d50b6de346e356bee2d6457b2d5c9680fea9a5adb7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8vvjv" Feb 13 15:21:55.090243 kubelet[2621]: E0213 15:21:55.090124 2621 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-8vvjv_calico-system(14b2995f-bdfb-4265-9dc0-06ae16e4bb6c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-8vvjv_calico-system(14b2995f-bdfb-4265-9dc0-06ae16e4bb6c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d04a99dec6ccf9c7326d2d50b6de346e356bee2d6457b2d5c9680fea9a5adb7c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8vvjv" podUID="14b2995f-bdfb-4265-9dc0-06ae16e4bb6c" Feb 13 15:21:55.098761 containerd[1442]: time="2025-02-13T15:21:55.098699690Z" level=error msg="Failed to destroy network for sandbox \"625368530c2bc9339bf5122e7b1e46a08e7aa9ae7da5caeccce496924a7d1331\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:55.099663 containerd[1442]: time="2025-02-13T15:21:55.099617572Z" level=error msg="encountered an error cleaning up failed sandbox 
\"625368530c2bc9339bf5122e7b1e46a08e7aa9ae7da5caeccce496924a7d1331\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:55.099864 containerd[1442]: time="2025-02-13T15:21:55.099841822Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-658db9fb4b-xcbwf,Uid:87c71f20-054e-44ba-a99e-c4fefdae6457,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"625368530c2bc9339bf5122e7b1e46a08e7aa9ae7da5caeccce496924a7d1331\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:55.100405 kubelet[2621]: E0213 15:21:55.100356 2621 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"625368530c2bc9339bf5122e7b1e46a08e7aa9ae7da5caeccce496924a7d1331\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:55.100499 kubelet[2621]: E0213 15:21:55.100423 2621 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"625368530c2bc9339bf5122e7b1e46a08e7aa9ae7da5caeccce496924a7d1331\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-658db9fb4b-xcbwf" Feb 13 15:21:55.100499 kubelet[2621]: E0213 15:21:55.100443 2621 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"625368530c2bc9339bf5122e7b1e46a08e7aa9ae7da5caeccce496924a7d1331\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-658db9fb4b-xcbwf" Feb 13 15:21:55.100599 kubelet[2621]: E0213 15:21:55.100487 2621 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-658db9fb4b-xcbwf_calico-system(87c71f20-054e-44ba-a99e-c4fefdae6457)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-658db9fb4b-xcbwf_calico-system(87c71f20-054e-44ba-a99e-c4fefdae6457)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"625368530c2bc9339bf5122e7b1e46a08e7aa9ae7da5caeccce496924a7d1331\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-658db9fb4b-xcbwf" podUID="87c71f20-054e-44ba-a99e-c4fefdae6457" Feb 13 15:21:55.104587 containerd[1442]: time="2025-02-13T15:21:55.104369506Z" level=error msg="Failed to destroy network for sandbox \"d39bbd1c579b80885befac14b0958dd16f728a35a72084ca6320a77ccb3b7a7a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:55.105040 containerd[1442]: time="2025-02-13T15:21:55.104874449Z" level=error msg="encountered an error cleaning up failed sandbox \"d39bbd1c579b80885befac14b0958dd16f728a35a72084ca6320a77ccb3b7a7a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Feb 13 15:21:55.105040 containerd[1442]: time="2025-02-13T15:21:55.104929652Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-564fc96ccb-dqvv5,Uid:de1246bc-b473-496c-be26-bb64afe860ad,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"d39bbd1c579b80885befac14b0958dd16f728a35a72084ca6320a77ccb3b7a7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:55.105211 kubelet[2621]: E0213 15:21:55.105172 2621 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d39bbd1c579b80885befac14b0958dd16f728a35a72084ca6320a77ccb3b7a7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:55.105380 kubelet[2621]: E0213 15:21:55.105231 2621 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d39bbd1c579b80885befac14b0958dd16f728a35a72084ca6320a77ccb3b7a7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-564fc96ccb-dqvv5" Feb 13 15:21:55.105380 kubelet[2621]: E0213 15:21:55.105251 2621 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d39bbd1c579b80885befac14b0958dd16f728a35a72084ca6320a77ccb3b7a7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-564fc96ccb-dqvv5" Feb 13 15:21:55.105380 kubelet[2621]: E0213 15:21:55.105291 2621 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-564fc96ccb-dqvv5_calico-apiserver(de1246bc-b473-496c-be26-bb64afe860ad)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-564fc96ccb-dqvv5_calico-apiserver(de1246bc-b473-496c-be26-bb64afe860ad)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d39bbd1c579b80885befac14b0958dd16f728a35a72084ca6320a77ccb3b7a7a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-564fc96ccb-dqvv5" podUID="de1246bc-b473-496c-be26-bb64afe860ad" Feb 13 15:21:55.107534 containerd[1442]: time="2025-02-13T15:21:55.107499888Z" level=error msg="Failed to destroy network for sandbox \"b2ce9b06d615cd0b7574b238db0c81b74ba7641fc02663b04423eecedbfe567e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:55.107881 containerd[1442]: time="2025-02-13T15:21:55.107853544Z" level=error msg="encountered an error cleaning up failed sandbox \"b2ce9b06d615cd0b7574b238db0c81b74ba7641fc02663b04423eecedbfe567e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:55.107923 containerd[1442]: time="2025-02-13T15:21:55.107906306Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2hpnn,Uid:deed100f-387a-4ac5-9252-5efbd4c9fe2b,Namespace:kube-system,Attempt:4,} failed, error" error="failed 
to setup network for sandbox \"b2ce9b06d615cd0b7574b238db0c81b74ba7641fc02663b04423eecedbfe567e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:55.108181 kubelet[2621]: E0213 15:21:55.108154 2621 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2ce9b06d615cd0b7574b238db0c81b74ba7641fc02663b04423eecedbfe567e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:55.108219 kubelet[2621]: E0213 15:21:55.108207 2621 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2ce9b06d615cd0b7574b238db0c81b74ba7641fc02663b04423eecedbfe567e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-2hpnn" Feb 13 15:21:55.108243 kubelet[2621]: E0213 15:21:55.108226 2621 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2ce9b06d615cd0b7574b238db0c81b74ba7641fc02663b04423eecedbfe567e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-2hpnn" Feb 13 15:21:55.108434 kubelet[2621]: E0213 15:21:55.108406 2621 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-2hpnn_kube-system(deed100f-387a-4ac5-9252-5efbd4c9fe2b)\" with CreatePodSandboxError: \"Failed to create 
sandbox for pod \\\"coredns-7db6d8ff4d-2hpnn_kube-system(deed100f-387a-4ac5-9252-5efbd4c9fe2b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b2ce9b06d615cd0b7574b238db0c81b74ba7641fc02663b04423eecedbfe567e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-2hpnn" podUID="deed100f-387a-4ac5-9252-5efbd4c9fe2b" Feb 13 15:21:55.114828 containerd[1442]: time="2025-02-13T15:21:55.114709813Z" level=error msg="Failed to destroy network for sandbox \"3f9619cd8018d29949f5b8a88dcba1dde36a35e932ad7eb9d8c70983a1ee1a35\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:55.115157 containerd[1442]: time="2025-02-13T15:21:55.115127552Z" level=error msg="encountered an error cleaning up failed sandbox \"3f9619cd8018d29949f5b8a88dcba1dde36a35e932ad7eb9d8c70983a1ee1a35\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:55.115293 containerd[1442]: time="2025-02-13T15:21:55.115272039Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qhltz,Uid:c285582e-9191-432e-99ea-cc7fe7db7fbb,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"3f9619cd8018d29949f5b8a88dcba1dde36a35e932ad7eb9d8c70983a1ee1a35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:55.115552 kubelet[2621]: E0213 15:21:55.115518 2621 remote_runtime.go:193] "RunPodSandbox 
from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f9619cd8018d29949f5b8a88dcba1dde36a35e932ad7eb9d8c70983a1ee1a35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:21:55.115632 kubelet[2621]: E0213 15:21:55.115572 2621 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f9619cd8018d29949f5b8a88dcba1dde36a35e932ad7eb9d8c70983a1ee1a35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-qhltz" Feb 13 15:21:55.115632 kubelet[2621]: E0213 15:21:55.115593 2621 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f9619cd8018d29949f5b8a88dcba1dde36a35e932ad7eb9d8c70983a1ee1a35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-qhltz" Feb 13 15:21:55.115688 kubelet[2621]: E0213 15:21:55.115634 2621 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-qhltz_kube-system(c285582e-9191-432e-99ea-cc7fe7db7fbb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-qhltz_kube-system(c285582e-9191-432e-99ea-cc7fe7db7fbb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3f9619cd8018d29949f5b8a88dcba1dde36a35e932ad7eb9d8c70983a1ee1a35\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-qhltz" podUID="c285582e-9191-432e-99ea-cc7fe7db7fbb" Feb 13 15:21:55.141208 systemd[1]: Started cri-containerd-4e857b983729e6b0b38481830acb41fbeb933136e69ced02d937990240074848.scope - libcontainer container 4e857b983729e6b0b38481830acb41fbeb933136e69ced02d937990240074848. Feb 13 15:21:55.167704 containerd[1442]: time="2025-02-13T15:21:55.167660245Z" level=info msg="StartContainer for \"4e857b983729e6b0b38481830acb41fbeb933136e69ced02d937990240074848\" returns successfully" Feb 13 15:21:55.227718 kubelet[2621]: I0213 15:21:55.227400 2621 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 15:21:55.229051 kubelet[2621]: E0213 15:21:55.228077 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:21:55.412054 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 13 15:21:55.412174 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Feb 13 15:21:55.467175 systemd[1]: run-netns-cni\x2d18060090\x2d67a3\x2d9389\x2d9f9b\x2dea5daa9663b8.mount: Deactivated successfully. Feb 13 15:21:55.467273 systemd[1]: run-netns-cni\x2da69027e6\x2da8cb\x2dda53\x2dab08\x2dad5fea913875.mount: Deactivated successfully. Feb 13 15:21:55.467331 systemd[1]: run-netns-cni\x2d32ce5b81\x2d540d\x2d4114\x2d84ab\x2da3b109a645ac.mount: Deactivated successfully. 
Feb 13 15:21:55.757957 kubelet[2621]: I0213 15:21:55.757852 2621 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2ce9b06d615cd0b7574b238db0c81b74ba7641fc02663b04423eecedbfe567e" Feb 13 15:21:55.759306 containerd[1442]: time="2025-02-13T15:21:55.758859825Z" level=info msg="StopPodSandbox for \"b2ce9b06d615cd0b7574b238db0c81b74ba7641fc02663b04423eecedbfe567e\"" Feb 13 15:21:55.759306 containerd[1442]: time="2025-02-13T15:21:55.759061915Z" level=info msg="Ensure that sandbox b2ce9b06d615cd0b7574b238db0c81b74ba7641fc02663b04423eecedbfe567e in task-service has been cleanup successfully" Feb 13 15:21:55.759578 containerd[1442]: time="2025-02-13T15:21:55.759464493Z" level=info msg="TearDown network for sandbox \"b2ce9b06d615cd0b7574b238db0c81b74ba7641fc02663b04423eecedbfe567e\" successfully" Feb 13 15:21:55.759578 containerd[1442]: time="2025-02-13T15:21:55.759482494Z" level=info msg="StopPodSandbox for \"b2ce9b06d615cd0b7574b238db0c81b74ba7641fc02663b04423eecedbfe567e\" returns successfully" Feb 13 15:21:55.760566 containerd[1442]: time="2025-02-13T15:21:55.760419256Z" level=info msg="StopPodSandbox for \"c9da068fa3f27a54a0b2e673273efa47355288199379af100a264051fdd7317b\"" Feb 13 15:21:55.760566 containerd[1442]: time="2025-02-13T15:21:55.760509100Z" level=info msg="TearDown network for sandbox \"c9da068fa3f27a54a0b2e673273efa47355288199379af100a264051fdd7317b\" successfully" Feb 13 15:21:55.760566 containerd[1442]: time="2025-02-13T15:21:55.760519380Z" level=info msg="StopPodSandbox for \"c9da068fa3f27a54a0b2e673273efa47355288199379af100a264051fdd7317b\" returns successfully" Feb 13 15:21:55.760962 containerd[1442]: time="2025-02-13T15:21:55.760898237Z" level=info msg="StopPodSandbox for \"5a990fb5682c636e25fb3a50d8fe0024788508fe708016321710033a0552c448\"" Feb 13 15:21:55.761069 containerd[1442]: time="2025-02-13T15:21:55.761051124Z" level=info msg="TearDown network for sandbox 
\"5a990fb5682c636e25fb3a50d8fe0024788508fe708016321710033a0552c448\" successfully" Feb 13 15:21:55.761069 containerd[1442]: time="2025-02-13T15:21:55.761067245Z" level=info msg="StopPodSandbox for \"5a990fb5682c636e25fb3a50d8fe0024788508fe708016321710033a0552c448\" returns successfully" Feb 13 15:21:55.761279 systemd[1]: run-netns-cni\x2d4d53e1a0\x2dd009\x2d424d\x2d382a\x2d98aa0a7fedcc.mount: Deactivated successfully. Feb 13 15:21:55.761356 containerd[1442]: time="2025-02-13T15:21:55.761337217Z" level=info msg="StopPodSandbox for \"ec5f837bdef6209b21d66e9a11f472ff6e4172b2c0d38f2552319bfe446e9725\"" Feb 13 15:21:55.761494 containerd[1442]: time="2025-02-13T15:21:55.761476184Z" level=info msg="TearDown network for sandbox \"ec5f837bdef6209b21d66e9a11f472ff6e4172b2c0d38f2552319bfe446e9725\" successfully" Feb 13 15:21:55.761516 containerd[1442]: time="2025-02-13T15:21:55.761493584Z" level=info msg="StopPodSandbox for \"ec5f837bdef6209b21d66e9a11f472ff6e4172b2c0d38f2552319bfe446e9725\" returns successfully" Feb 13 15:21:55.762530 containerd[1442]: time="2025-02-13T15:21:55.762505110Z" level=info msg="StopPodSandbox for \"f77271c961d7b4a7b309d3aeaa83ef8ef8c4ac2d6b4080280f4c6b5d24455417\"" Feb 13 15:21:55.762609 containerd[1442]: time="2025-02-13T15:21:55.762577833Z" level=info msg="TearDown network for sandbox \"f77271c961d7b4a7b309d3aeaa83ef8ef8c4ac2d6b4080280f4c6b5d24455417\" successfully" Feb 13 15:21:55.762609 containerd[1442]: time="2025-02-13T15:21:55.762587674Z" level=info msg="StopPodSandbox for \"f77271c961d7b4a7b309d3aeaa83ef8ef8c4ac2d6b4080280f4c6b5d24455417\" returns successfully" Feb 13 15:21:55.762799 kubelet[2621]: E0213 15:21:55.762742 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:21:55.763259 kubelet[2621]: I0213 15:21:55.763241 2621 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="47226915c1750f2dcd4281dff0c1822a4cd03cdd3e0248696294da5d0554aa87" Feb 13 15:21:55.763547 containerd[1442]: time="2025-02-13T15:21:55.763508235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2hpnn,Uid:deed100f-387a-4ac5-9252-5efbd4c9fe2b,Namespace:kube-system,Attempt:5,}" Feb 13 15:21:55.764147 containerd[1442]: time="2025-02-13T15:21:55.763712325Z" level=info msg="StopPodSandbox for \"47226915c1750f2dcd4281dff0c1822a4cd03cdd3e0248696294da5d0554aa87\"" Feb 13 15:21:55.764147 containerd[1442]: time="2025-02-13T15:21:55.763861171Z" level=info msg="Ensure that sandbox 47226915c1750f2dcd4281dff0c1822a4cd03cdd3e0248696294da5d0554aa87 in task-service has been cleanup successfully" Feb 13 15:21:55.764544 containerd[1442]: time="2025-02-13T15:21:55.764517641Z" level=info msg="TearDown network for sandbox \"47226915c1750f2dcd4281dff0c1822a4cd03cdd3e0248696294da5d0554aa87\" successfully" Feb 13 15:21:55.764617 containerd[1442]: time="2025-02-13T15:21:55.764604325Z" level=info msg="StopPodSandbox for \"47226915c1750f2dcd4281dff0c1822a4cd03cdd3e0248696294da5d0554aa87\" returns successfully" Feb 13 15:21:55.766257 containerd[1442]: time="2025-02-13T15:21:55.766218518Z" level=info msg="StopPodSandbox for \"d3f7525b6c6753b8ce43153e40018478056e9589dd30a82916f36bcfc6177342\"" Feb 13 15:21:55.766332 containerd[1442]: time="2025-02-13T15:21:55.766312522Z" level=info msg="TearDown network for sandbox \"d3f7525b6c6753b8ce43153e40018478056e9589dd30a82916f36bcfc6177342\" successfully" Feb 13 15:21:55.766332 containerd[1442]: time="2025-02-13T15:21:55.766322882Z" level=info msg="StopPodSandbox for \"d3f7525b6c6753b8ce43153e40018478056e9589dd30a82916f36bcfc6177342\" returns successfully" Feb 13 15:21:55.767081 containerd[1442]: time="2025-02-13T15:21:55.766600775Z" level=info msg="StopPodSandbox for \"74cd138a61eecc2e6903a038baafdc496135da9e7c240b4fd5953c85a00d83ec\"" Feb 13 15:21:55.767081 containerd[1442]: time="2025-02-13T15:21:55.766695139Z" 
level=info msg="TearDown network for sandbox \"74cd138a61eecc2e6903a038baafdc496135da9e7c240b4fd5953c85a00d83ec\" successfully" Feb 13 15:21:55.767081 containerd[1442]: time="2025-02-13T15:21:55.766706700Z" level=info msg="StopPodSandbox for \"74cd138a61eecc2e6903a038baafdc496135da9e7c240b4fd5953c85a00d83ec\" returns successfully" Feb 13 15:21:55.767133 systemd[1]: run-netns-cni\x2d502bca23\x2d16aa\x2ddd5d\x2db15d\x2d7f524de510d4.mount: Deactivated successfully. Feb 13 15:21:55.767895 containerd[1442]: time="2025-02-13T15:21:55.767699665Z" level=info msg="StopPodSandbox for \"a3d2c5311b68c754ed4f52b532c1dbe545a75c82f3b62899ff9015ee5110f3ad\"" Feb 13 15:21:55.767895 containerd[1442]: time="2025-02-13T15:21:55.767775628Z" level=info msg="TearDown network for sandbox \"a3d2c5311b68c754ed4f52b532c1dbe545a75c82f3b62899ff9015ee5110f3ad\" successfully" Feb 13 15:21:55.767895 containerd[1442]: time="2025-02-13T15:21:55.767786389Z" level=info msg="StopPodSandbox for \"a3d2c5311b68c754ed4f52b532c1dbe545a75c82f3b62899ff9015ee5110f3ad\" returns successfully" Feb 13 15:21:55.768410 containerd[1442]: time="2025-02-13T15:21:55.768388296Z" level=info msg="StopPodSandbox for \"892160d9b3fdcfef5dfbb8a15226ee03d30c00255d1c8fffacad07d070348325\"" Feb 13 15:21:55.768483 containerd[1442]: time="2025-02-13T15:21:55.768465619Z" level=info msg="TearDown network for sandbox \"892160d9b3fdcfef5dfbb8a15226ee03d30c00255d1c8fffacad07d070348325\" successfully" Feb 13 15:21:55.768518 containerd[1442]: time="2025-02-13T15:21:55.768490420Z" level=info msg="StopPodSandbox for \"892160d9b3fdcfef5dfbb8a15226ee03d30c00255d1c8fffacad07d070348325\" returns successfully" Feb 13 15:21:55.769793 containerd[1442]: time="2025-02-13T15:21:55.769214613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-564fc96ccb-fw9ln,Uid:84c3b521-a067-4602-ad8f-cbca4249dad7,Namespace:calico-apiserver,Attempt:5,}" Feb 13 15:21:55.770195 kubelet[2621]: I0213 15:21:55.770156 2621 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f9619cd8018d29949f5b8a88dcba1dde36a35e932ad7eb9d8c70983a1ee1a35" Feb 13 15:21:55.771214 containerd[1442]: time="2025-02-13T15:21:55.771186302Z" level=info msg="StopPodSandbox for \"3f9619cd8018d29949f5b8a88dcba1dde36a35e932ad7eb9d8c70983a1ee1a35\"" Feb 13 15:21:55.771374 containerd[1442]: time="2025-02-13T15:21:55.771352070Z" level=info msg="Ensure that sandbox 3f9619cd8018d29949f5b8a88dcba1dde36a35e932ad7eb9d8c70983a1ee1a35 in task-service has been cleanup successfully" Feb 13 15:21:55.772353 containerd[1442]: time="2025-02-13T15:21:55.772325194Z" level=info msg="TearDown network for sandbox \"3f9619cd8018d29949f5b8a88dcba1dde36a35e932ad7eb9d8c70983a1ee1a35\" successfully" Feb 13 15:21:55.772353 containerd[1442]: time="2025-02-13T15:21:55.772352315Z" level=info msg="StopPodSandbox for \"3f9619cd8018d29949f5b8a88dcba1dde36a35e932ad7eb9d8c70983a1ee1a35\" returns successfully" Feb 13 15:21:55.773284 systemd[1]: run-netns-cni\x2d1ad599b8\x2d6f9b\x2d4a72\x2d510f\x2d35f57164ec72.mount: Deactivated successfully. 
Feb 13 15:21:55.774374 containerd[1442]: time="2025-02-13T15:21:55.774329404Z" level=info msg="StopPodSandbox for \"0e00bbd7bcb14cbd456352c063bed067e90a3795d0de450c8bec1e4ae2edb640\"" Feb 13 15:21:55.774443 containerd[1442]: time="2025-02-13T15:21:55.774421768Z" level=info msg="TearDown network for sandbox \"0e00bbd7bcb14cbd456352c063bed067e90a3795d0de450c8bec1e4ae2edb640\" successfully" Feb 13 15:21:55.774932 containerd[1442]: time="2025-02-13T15:21:55.774437889Z" level=info msg="StopPodSandbox for \"0e00bbd7bcb14cbd456352c063bed067e90a3795d0de450c8bec1e4ae2edb640\" returns successfully" Feb 13 15:21:55.775638 containerd[1442]: time="2025-02-13T15:21:55.775355690Z" level=info msg="StopPodSandbox for \"f0b420624c0ef4de8293125d4aa3764bbbc9ad891299e4836ec706ef73b01bc2\"" Feb 13 15:21:55.775638 containerd[1442]: time="2025-02-13T15:21:55.775442734Z" level=info msg="TearDown network for sandbox \"f0b420624c0ef4de8293125d4aa3764bbbc9ad891299e4836ec706ef73b01bc2\" successfully" Feb 13 15:21:55.775638 containerd[1442]: time="2025-02-13T15:21:55.775454135Z" level=info msg="StopPodSandbox for \"f0b420624c0ef4de8293125d4aa3764bbbc9ad891299e4836ec706ef73b01bc2\" returns successfully" Feb 13 15:21:55.776543 containerd[1442]: time="2025-02-13T15:21:55.776208769Z" level=info msg="StopPodSandbox for \"068df4c4fbceb251808403da7d7f70417a6f0acfb26bbe7e26797b860d1eeaef\"" Feb 13 15:21:55.776543 containerd[1442]: time="2025-02-13T15:21:55.776351735Z" level=info msg="TearDown network for sandbox \"068df4c4fbceb251808403da7d7f70417a6f0acfb26bbe7e26797b860d1eeaef\" successfully" Feb 13 15:21:55.776543 containerd[1442]: time="2025-02-13T15:21:55.776364136Z" level=info msg="StopPodSandbox for \"068df4c4fbceb251808403da7d7f70417a6f0acfb26bbe7e26797b860d1eeaef\" returns successfully" Feb 13 15:21:55.776717 containerd[1442]: time="2025-02-13T15:21:55.776690471Z" level=info msg="StopPodSandbox for \"74aa2dc63f90f0fe66425516e5fd4cd00e009a8fcb748a38adc86fdedb44b077\"" Feb 13 15:21:55.776746 
kubelet[2621]: I0213 15:21:55.776733 2621 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d04a99dec6ccf9c7326d2d50b6de346e356bee2d6457b2d5c9680fea9a5adb7c" Feb 13 15:21:55.776899 containerd[1442]: time="2025-02-13T15:21:55.776876799Z" level=info msg="TearDown network for sandbox \"74aa2dc63f90f0fe66425516e5fd4cd00e009a8fcb748a38adc86fdedb44b077\" successfully" Feb 13 15:21:55.776899 containerd[1442]: time="2025-02-13T15:21:55.776895440Z" level=info msg="StopPodSandbox for \"74aa2dc63f90f0fe66425516e5fd4cd00e009a8fcb748a38adc86fdedb44b077\" returns successfully" Feb 13 15:21:55.778288 kubelet[2621]: E0213 15:21:55.777146 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:21:55.778386 containerd[1442]: time="2025-02-13T15:21:55.778059613Z" level=info msg="StopPodSandbox for \"d04a99dec6ccf9c7326d2d50b6de346e356bee2d6457b2d5c9680fea9a5adb7c\"" Feb 13 15:21:55.778386 containerd[1442]: time="2025-02-13T15:21:55.778211019Z" level=info msg="Ensure that sandbox d04a99dec6ccf9c7326d2d50b6de346e356bee2d6457b2d5c9680fea9a5adb7c in task-service has been cleanup successfully" Feb 13 15:21:55.778644 containerd[1442]: time="2025-02-13T15:21:55.778451750Z" level=info msg="TearDown network for sandbox \"d04a99dec6ccf9c7326d2d50b6de346e356bee2d6457b2d5c9680fea9a5adb7c\" successfully" Feb 13 15:21:55.778644 containerd[1442]: time="2025-02-13T15:21:55.778474951Z" level=info msg="StopPodSandbox for \"d04a99dec6ccf9c7326d2d50b6de346e356bee2d6457b2d5c9680fea9a5adb7c\" returns successfully" Feb 13 15:21:55.778644 containerd[1442]: time="2025-02-13T15:21:55.778567035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qhltz,Uid:c285582e-9191-432e-99ea-cc7fe7db7fbb,Namespace:kube-system,Attempt:5,}" Feb 13 15:21:55.779943 containerd[1442]: time="2025-02-13T15:21:55.779893735Z" level=info 
msg="StopPodSandbox for \"8db7eb7e96e9c3fedc10436101d0f44d3179d411a6afb2c8436bc1e4908d2830\"" Feb 13 15:21:55.781394 containerd[1442]: time="2025-02-13T15:21:55.780333755Z" level=info msg="TearDown network for sandbox \"8db7eb7e96e9c3fedc10436101d0f44d3179d411a6afb2c8436bc1e4908d2830\" successfully" Feb 13 15:21:55.781394 containerd[1442]: time="2025-02-13T15:21:55.780352476Z" level=info msg="StopPodSandbox for \"8db7eb7e96e9c3fedc10436101d0f44d3179d411a6afb2c8436bc1e4908d2830\" returns successfully" Feb 13 15:21:55.781394 containerd[1442]: time="2025-02-13T15:21:55.780677371Z" level=info msg="StopPodSandbox for \"f2a4be37ee45e0a2e04d64278828b4f8471e3f621e93d94fac9f67321674f84b\"" Feb 13 15:21:55.780135 systemd[1]: run-netns-cni\x2d4249f4e3\x2d931b\x2d5dc9\x2d41b2\x2d317088286e75.mount: Deactivated successfully. Feb 13 15:21:55.781624 containerd[1442]: time="2025-02-13T15:21:55.781572371Z" level=info msg="TearDown network for sandbox \"f2a4be37ee45e0a2e04d64278828b4f8471e3f621e93d94fac9f67321674f84b\" successfully" Feb 13 15:21:55.781624 containerd[1442]: time="2025-02-13T15:21:55.781597132Z" level=info msg="StopPodSandbox for \"f2a4be37ee45e0a2e04d64278828b4f8471e3f621e93d94fac9f67321674f84b\" returns successfully" Feb 13 15:21:55.782146 containerd[1442]: time="2025-02-13T15:21:55.782084554Z" level=info msg="StopPodSandbox for \"5fc5ff480b0ac02e88a3e59e79abeae223a5c6ef1c9e47fd64658d555c0f5c92\"" Feb 13 15:21:55.782209 containerd[1442]: time="2025-02-13T15:21:55.782175038Z" level=info msg="TearDown network for sandbox \"5fc5ff480b0ac02e88a3e59e79abeae223a5c6ef1c9e47fd64658d555c0f5c92\" successfully" Feb 13 15:21:55.782209 containerd[1442]: time="2025-02-13T15:21:55.782185319Z" level=info msg="StopPodSandbox for \"5fc5ff480b0ac02e88a3e59e79abeae223a5c6ef1c9e47fd64658d555c0f5c92\" returns successfully" Feb 13 15:21:55.783349 kubelet[2621]: I0213 15:21:55.783302 2621 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="625368530c2bc9339bf5122e7b1e46a08e7aa9ae7da5caeccce496924a7d1331" Feb 13 15:21:55.784083 containerd[1442]: time="2025-02-13T15:21:55.783783151Z" level=info msg="StopPodSandbox for \"625368530c2bc9339bf5122e7b1e46a08e7aa9ae7da5caeccce496924a7d1331\"" Feb 13 15:21:55.784083 containerd[1442]: time="2025-02-13T15:21:55.783938998Z" level=info msg="Ensure that sandbox 625368530c2bc9339bf5122e7b1e46a08e7aa9ae7da5caeccce496924a7d1331 in task-service has been cleanup successfully" Feb 13 15:21:55.784083 containerd[1442]: time="2025-02-13T15:21:55.783955159Z" level=info msg="StopPodSandbox for \"d96ae196292a605b5cd4d41b835a3232ce63bc7c919a65833fa2c6f5c8cba484\"" Feb 13 15:21:55.784083 containerd[1442]: time="2025-02-13T15:21:55.784092365Z" level=info msg="TearDown network for sandbox \"d96ae196292a605b5cd4d41b835a3232ce63bc7c919a65833fa2c6f5c8cba484\" successfully" Feb 13 15:21:55.784237 containerd[1442]: time="2025-02-13T15:21:55.784103726Z" level=info msg="StopPodSandbox for \"d96ae196292a605b5cd4d41b835a3232ce63bc7c919a65833fa2c6f5c8cba484\" returns successfully" Feb 13 15:21:55.784318 containerd[1442]: time="2025-02-13T15:21:55.784295494Z" level=info msg="TearDown network for sandbox \"625368530c2bc9339bf5122e7b1e46a08e7aa9ae7da5caeccce496924a7d1331\" successfully" Feb 13 15:21:55.784377 containerd[1442]: time="2025-02-13T15:21:55.784363937Z" level=info msg="StopPodSandbox for \"625368530c2bc9339bf5122e7b1e46a08e7aa9ae7da5caeccce496924a7d1331\" returns successfully" Feb 13 15:21:55.785387 containerd[1442]: time="2025-02-13T15:21:55.785363502Z" level=info msg="StopPodSandbox for \"11929aadc81cdf1da7658e28ef4d74baafd42d9d50e9e28282ba41964a92a3a7\"" Feb 13 15:21:55.785521 containerd[1442]: time="2025-02-13T15:21:55.785489628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8vvjv,Uid:14b2995f-bdfb-4265-9dc0-06ae16e4bb6c,Namespace:calico-system,Attempt:5,}" Feb 13 15:21:55.785602 containerd[1442]: time="2025-02-13T15:21:55.785584752Z" 
level=info msg="TearDown network for sandbox \"11929aadc81cdf1da7658e28ef4d74baafd42d9d50e9e28282ba41964a92a3a7\" successfully" Feb 13 15:21:55.785657 containerd[1442]: time="2025-02-13T15:21:55.785645315Z" level=info msg="StopPodSandbox for \"11929aadc81cdf1da7658e28ef4d74baafd42d9d50e9e28282ba41964a92a3a7\" returns successfully" Feb 13 15:21:55.786130 containerd[1442]: time="2025-02-13T15:21:55.786098456Z" level=info msg="StopPodSandbox for \"2fb37880992bf25b4ffdc0417195ecffe00d1f91c138c1d050e3c108ce682cc8\"" Feb 13 15:21:55.786786 containerd[1442]: time="2025-02-13T15:21:55.786176739Z" level=info msg="TearDown network for sandbox \"2fb37880992bf25b4ffdc0417195ecffe00d1f91c138c1d050e3c108ce682cc8\" successfully" Feb 13 15:21:55.786786 containerd[1442]: time="2025-02-13T15:21:55.786192580Z" level=info msg="StopPodSandbox for \"2fb37880992bf25b4ffdc0417195ecffe00d1f91c138c1d050e3c108ce682cc8\" returns successfully" Feb 13 15:21:55.787109 containerd[1442]: time="2025-02-13T15:21:55.787075460Z" level=info msg="StopPodSandbox for \"222fe074a6a0fa30aabf41e153b4175978e6e304dfcf93e3a4c0de8e32655c0a\"" Feb 13 15:21:55.787216 containerd[1442]: time="2025-02-13T15:21:55.787198625Z" level=info msg="TearDown network for sandbox \"222fe074a6a0fa30aabf41e153b4175978e6e304dfcf93e3a4c0de8e32655c0a\" successfully" Feb 13 15:21:55.787254 containerd[1442]: time="2025-02-13T15:21:55.787214666Z" level=info msg="StopPodSandbox for \"222fe074a6a0fa30aabf41e153b4175978e6e304dfcf93e3a4c0de8e32655c0a\" returns successfully" Feb 13 15:21:55.787819 containerd[1442]: time="2025-02-13T15:21:55.787783612Z" level=info msg="StopPodSandbox for \"c4baf65475d91bc2c15306315ac291509c501cd40c636754fa7bb756cbcea258\"" Feb 13 15:21:55.787922 containerd[1442]: time="2025-02-13T15:21:55.787900977Z" level=info msg="TearDown network for sandbox \"c4baf65475d91bc2c15306315ac291509c501cd40c636754fa7bb756cbcea258\" successfully" Feb 13 15:21:55.787922 containerd[1442]: time="2025-02-13T15:21:55.787916858Z" 
level=info msg="StopPodSandbox for \"c4baf65475d91bc2c15306315ac291509c501cd40c636754fa7bb756cbcea258\" returns successfully" Feb 13 15:21:55.788546 containerd[1442]: time="2025-02-13T15:21:55.788520565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-658db9fb4b-xcbwf,Uid:87c71f20-054e-44ba-a99e-c4fefdae6457,Namespace:calico-system,Attempt:5,}" Feb 13 15:21:55.788846 kubelet[2621]: I0213 15:21:55.788813 2621 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d39bbd1c579b80885befac14b0958dd16f728a35a72084ca6320a77ccb3b7a7a" Feb 13 15:21:55.789891 containerd[1442]: time="2025-02-13T15:21:55.789839865Z" level=info msg="StopPodSandbox for \"d39bbd1c579b80885befac14b0958dd16f728a35a72084ca6320a77ccb3b7a7a\"" Feb 13 15:21:55.790197 containerd[1442]: time="2025-02-13T15:21:55.790165039Z" level=info msg="Ensure that sandbox d39bbd1c579b80885befac14b0958dd16f728a35a72084ca6320a77ccb3b7a7a in task-service has been cleanup successfully" Feb 13 15:21:55.790420 containerd[1442]: time="2025-02-13T15:21:55.790387849Z" level=info msg="TearDown network for sandbox \"d39bbd1c579b80885befac14b0958dd16f728a35a72084ca6320a77ccb3b7a7a\" successfully" Feb 13 15:21:55.790420 containerd[1442]: time="2025-02-13T15:21:55.790408570Z" level=info msg="StopPodSandbox for \"d39bbd1c579b80885befac14b0958dd16f728a35a72084ca6320a77ccb3b7a7a\" returns successfully" Feb 13 15:21:55.790745 containerd[1442]: time="2025-02-13T15:21:55.790709864Z" level=info msg="StopPodSandbox for \"1bc7ba7f91f5547b00e7049a50ddcf60d573692ca1d4d6b3c1d4222df799d5a0\"" Feb 13 15:21:55.790815 containerd[1442]: time="2025-02-13T15:21:55.790799588Z" level=info msg="TearDown network for sandbox \"1bc7ba7f91f5547b00e7049a50ddcf60d573692ca1d4d6b3c1d4222df799d5a0\" successfully" Feb 13 15:21:55.790848 containerd[1442]: time="2025-02-13T15:21:55.790814109Z" level=info msg="StopPodSandbox for \"1bc7ba7f91f5547b00e7049a50ddcf60d573692ca1d4d6b3c1d4222df799d5a0\" 
returns successfully" Feb 13 15:21:55.791594 containerd[1442]: time="2025-02-13T15:21:55.791558742Z" level=info msg="StopPodSandbox for \"76d4658126becba445517a90b6b43c46a1c22581049ca91a8e0b6be5d84a6a5b\"" Feb 13 15:21:55.791669 containerd[1442]: time="2025-02-13T15:21:55.791645826Z" level=info msg="TearDown network for sandbox \"76d4658126becba445517a90b6b43c46a1c22581049ca91a8e0b6be5d84a6a5b\" successfully" Feb 13 15:21:55.791669 containerd[1442]: time="2025-02-13T15:21:55.791659747Z" level=info msg="StopPodSandbox for \"76d4658126becba445517a90b6b43c46a1c22581049ca91a8e0b6be5d84a6a5b\" returns successfully" Feb 13 15:21:55.792019 containerd[1442]: time="2025-02-13T15:21:55.791981601Z" level=info msg="StopPodSandbox for \"1290b96754b10ffaddc73a37c85cf3d37997bcbba54c2c7727a2d695f71242d3\"" Feb 13 15:21:55.792180 containerd[1442]: time="2025-02-13T15:21:55.792147369Z" level=info msg="TearDown network for sandbox \"1290b96754b10ffaddc73a37c85cf3d37997bcbba54c2c7727a2d695f71242d3\" successfully" Feb 13 15:21:55.792180 containerd[1442]: time="2025-02-13T15:21:55.792171250Z" level=info msg="StopPodSandbox for \"1290b96754b10ffaddc73a37c85cf3d37997bcbba54c2c7727a2d695f71242d3\" returns successfully" Feb 13 15:21:55.792478 containerd[1442]: time="2025-02-13T15:21:55.792449302Z" level=info msg="StopPodSandbox for \"b21e1a8f17e86fe71cf0ca5f7bf255d3e92b462acaa3367bcccf912c971932fc\"" Feb 13 15:21:55.792545 containerd[1442]: time="2025-02-13T15:21:55.792530786Z" level=info msg="TearDown network for sandbox \"b21e1a8f17e86fe71cf0ca5f7bf255d3e92b462acaa3367bcccf912c971932fc\" successfully" Feb 13 15:21:55.792574 containerd[1442]: time="2025-02-13T15:21:55.792545107Z" level=info msg="StopPodSandbox for \"b21e1a8f17e86fe71cf0ca5f7bf255d3e92b462acaa3367bcccf912c971932fc\" returns successfully" Feb 13 15:21:55.793700 containerd[1442]: time="2025-02-13T15:21:55.793583314Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-564fc96ccb-dqvv5,Uid:de1246bc-b473-496c-be26-bb64afe860ad,Namespace:calico-apiserver,Attempt:5,}" Feb 13 15:21:55.795284 kubelet[2621]: E0213 15:21:55.795251 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:21:55.795769 kubelet[2621]: E0213 15:21:55.795736 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:21:55.823000 kubelet[2621]: I0213 15:21:55.822941 2621 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-czdlp" podStartSLOduration=1.397915173 podStartE2EDuration="15.822924239s" podCreationTimestamp="2025-02-13 15:21:40 +0000 UTC" firstStartedPulling="2025-02-13 15:21:40.575573633 +0000 UTC m=+23.276015242" lastFinishedPulling="2025-02-13 15:21:55.000582699 +0000 UTC m=+37.701024308" observedRunningTime="2025-02-13 15:21:55.815086005 +0000 UTC m=+38.515527654" watchObservedRunningTime="2025-02-13 15:21:55.822924239 +0000 UTC m=+38.523365848" Feb 13 15:21:55.962339 systemd[1]: Started sshd@9-10.0.0.35:22-10.0.0.1:38326.service - OpenSSH per-connection server daemon (10.0.0.1:38326). Feb 13 15:21:56.023974 sshd[4695]: Accepted publickey for core from 10.0.0.1 port 38326 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY Feb 13 15:21:56.022863 sshd-session[4695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:21:56.056647 systemd-logind[1425]: New session 10 of user core. Feb 13 15:21:56.086213 systemd[1]: Started session-10.scope - Session 10 of User core. 
Feb 13 15:21:56.291151 sshd[4732]: Connection closed by 10.0.0.1 port 38326 Feb 13 15:21:56.291857 sshd-session[4695]: pam_unix(sshd:session): session closed for user core Feb 13 15:21:56.302012 systemd[1]: sshd@9-10.0.0.35:22-10.0.0.1:38326.service: Deactivated successfully. Feb 13 15:21:56.303838 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 15:21:56.305546 systemd-logind[1425]: Session 10 logged out. Waiting for processes to exit. Feb 13 15:21:56.318361 systemd[1]: Started sshd@10-10.0.0.35:22-10.0.0.1:38336.service - OpenSSH per-connection server daemon (10.0.0.1:38336). Feb 13 15:21:56.320571 systemd-logind[1425]: Removed session 10. Feb 13 15:21:56.339976 systemd-networkd[1387]: calic02573eb110: Link UP Feb 13 15:21:56.340318 systemd-networkd[1387]: calic02573eb110: Gained carrier Feb 13 15:21:56.359797 containerd[1442]: 2025-02-13 15:21:55.851 [INFO][4626] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:21:56.359797 containerd[1442]: 2025-02-13 15:21:55.948 [INFO][4626] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--564fc96ccb--fw9ln-eth0 calico-apiserver-564fc96ccb- calico-apiserver 84c3b521-a067-4602-ad8f-cbca4249dad7 808 0 2025-02-13 15:21:39 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:564fc96ccb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-564fc96ccb-fw9ln eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic02573eb110 [] []}} ContainerID="dfbeac76c82119712525f408302707fe2ccd0f49476eebc04b2ad698626a738d" Namespace="calico-apiserver" Pod="calico-apiserver-564fc96ccb-fw9ln" WorkloadEndpoint="localhost-k8s-calico--apiserver--564fc96ccb--fw9ln-" Feb 13 15:21:56.359797 containerd[1442]: 
2025-02-13 15:21:55.948 [INFO][4626] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="dfbeac76c82119712525f408302707fe2ccd0f49476eebc04b2ad698626a738d" Namespace="calico-apiserver" Pod="calico-apiserver-564fc96ccb-fw9ln" WorkloadEndpoint="localhost-k8s-calico--apiserver--564fc96ccb--fw9ln-eth0" Feb 13 15:21:56.359797 containerd[1442]: 2025-02-13 15:21:56.152 [INFO][4697] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dfbeac76c82119712525f408302707fe2ccd0f49476eebc04b2ad698626a738d" HandleID="k8s-pod-network.dfbeac76c82119712525f408302707fe2ccd0f49476eebc04b2ad698626a738d" Workload="localhost-k8s-calico--apiserver--564fc96ccb--fw9ln-eth0" Feb 13 15:21:56.359797 containerd[1442]: 2025-02-13 15:21:56.177 [INFO][4697] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dfbeac76c82119712525f408302707fe2ccd0f49476eebc04b2ad698626a738d" HandleID="k8s-pod-network.dfbeac76c82119712525f408302707fe2ccd0f49476eebc04b2ad698626a738d" Workload="localhost-k8s-calico--apiserver--564fc96ccb--fw9ln-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003b2360), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-564fc96ccb-fw9ln", "timestamp":"2025-02-13 15:21:56.152721067 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:21:56.359797 containerd[1442]: 2025-02-13 15:21:56.177 [INFO][4697] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:21:56.359797 containerd[1442]: 2025-02-13 15:21:56.177 [INFO][4697] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 15:21:56.359797 containerd[1442]: 2025-02-13 15:21:56.177 [INFO][4697] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 15:21:56.359797 containerd[1442]: 2025-02-13 15:21:56.183 [INFO][4697] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.dfbeac76c82119712525f408302707fe2ccd0f49476eebc04b2ad698626a738d" host="localhost" Feb 13 15:21:56.359797 containerd[1442]: 2025-02-13 15:21:56.202 [INFO][4697] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 15:21:56.359797 containerd[1442]: 2025-02-13 15:21:56.208 [INFO][4697] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 15:21:56.359797 containerd[1442]: 2025-02-13 15:21:56.213 [INFO][4697] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 15:21:56.359797 containerd[1442]: 2025-02-13 15:21:56.215 [INFO][4697] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 15:21:56.359797 containerd[1442]: 2025-02-13 15:21:56.219 [INFO][4697] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.dfbeac76c82119712525f408302707fe2ccd0f49476eebc04b2ad698626a738d" host="localhost" Feb 13 15:21:56.359797 containerd[1442]: 2025-02-13 15:21:56.220 [INFO][4697] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.dfbeac76c82119712525f408302707fe2ccd0f49476eebc04b2ad698626a738d Feb 13 15:21:56.359797 containerd[1442]: 2025-02-13 15:21:56.287 [INFO][4697] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.dfbeac76c82119712525f408302707fe2ccd0f49476eebc04b2ad698626a738d" host="localhost" Feb 13 15:21:56.359797 containerd[1442]: 2025-02-13 15:21:56.318 [INFO][4697] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.dfbeac76c82119712525f408302707fe2ccd0f49476eebc04b2ad698626a738d" host="localhost" Feb 13 15:21:56.359797 containerd[1442]: 2025-02-13 15:21:56.318 [INFO][4697] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.dfbeac76c82119712525f408302707fe2ccd0f49476eebc04b2ad698626a738d" host="localhost" Feb 13 15:21:56.359797 containerd[1442]: 2025-02-13 15:21:56.318 [INFO][4697] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 15:21:56.359797 containerd[1442]: 2025-02-13 15:21:56.318 [INFO][4697] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="dfbeac76c82119712525f408302707fe2ccd0f49476eebc04b2ad698626a738d" HandleID="k8s-pod-network.dfbeac76c82119712525f408302707fe2ccd0f49476eebc04b2ad698626a738d" Workload="localhost-k8s-calico--apiserver--564fc96ccb--fw9ln-eth0" Feb 13 15:21:56.361358 containerd[1442]: 2025-02-13 15:21:56.323 [INFO][4626] cni-plugin/k8s.go 386: Populated endpoint ContainerID="dfbeac76c82119712525f408302707fe2ccd0f49476eebc04b2ad698626a738d" Namespace="calico-apiserver" Pod="calico-apiserver-564fc96ccb-fw9ln" WorkloadEndpoint="localhost-k8s-calico--apiserver--564fc96ccb--fw9ln-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--564fc96ccb--fw9ln-eth0", GenerateName:"calico-apiserver-564fc96ccb-", Namespace:"calico-apiserver", SelfLink:"", UID:"84c3b521-a067-4602-ad8f-cbca4249dad7", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 21, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"564fc96ccb", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-564fc96ccb-fw9ln", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic02573eb110", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:21:56.361358 containerd[1442]: 2025-02-13 15:21:56.323 [INFO][4626] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="dfbeac76c82119712525f408302707fe2ccd0f49476eebc04b2ad698626a738d" Namespace="calico-apiserver" Pod="calico-apiserver-564fc96ccb-fw9ln" WorkloadEndpoint="localhost-k8s-calico--apiserver--564fc96ccb--fw9ln-eth0" Feb 13 15:21:56.361358 containerd[1442]: 2025-02-13 15:21:56.323 [INFO][4626] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic02573eb110 ContainerID="dfbeac76c82119712525f408302707fe2ccd0f49476eebc04b2ad698626a738d" Namespace="calico-apiserver" Pod="calico-apiserver-564fc96ccb-fw9ln" WorkloadEndpoint="localhost-k8s-calico--apiserver--564fc96ccb--fw9ln-eth0" Feb 13 15:21:56.361358 containerd[1442]: 2025-02-13 15:21:56.339 [INFO][4626] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dfbeac76c82119712525f408302707fe2ccd0f49476eebc04b2ad698626a738d" Namespace="calico-apiserver" Pod="calico-apiserver-564fc96ccb-fw9ln" WorkloadEndpoint="localhost-k8s-calico--apiserver--564fc96ccb--fw9ln-eth0" Feb 13 15:21:56.361358 containerd[1442]: 2025-02-13 15:21:56.340 [INFO][4626] cni-plugin/k8s.go 414: Added Mac, interface name, and active 
container ID to endpoint ContainerID="dfbeac76c82119712525f408302707fe2ccd0f49476eebc04b2ad698626a738d" Namespace="calico-apiserver" Pod="calico-apiserver-564fc96ccb-fw9ln" WorkloadEndpoint="localhost-k8s-calico--apiserver--564fc96ccb--fw9ln-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--564fc96ccb--fw9ln-eth0", GenerateName:"calico-apiserver-564fc96ccb-", Namespace:"calico-apiserver", SelfLink:"", UID:"84c3b521-a067-4602-ad8f-cbca4249dad7", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 21, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"564fc96ccb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dfbeac76c82119712525f408302707fe2ccd0f49476eebc04b2ad698626a738d", Pod:"calico-apiserver-564fc96ccb-fw9ln", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic02573eb110", MAC:"4e:72:5d:c2:76:e1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:21:56.361358 containerd[1442]: 2025-02-13 15:21:56.357 [INFO][4626] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="dfbeac76c82119712525f408302707fe2ccd0f49476eebc04b2ad698626a738d" Namespace="calico-apiserver" Pod="calico-apiserver-564fc96ccb-fw9ln" WorkloadEndpoint="localhost-k8s-calico--apiserver--564fc96ccb--fw9ln-eth0" Feb 13 15:21:56.371767 sshd[4757]: Accepted publickey for core from 10.0.0.1 port 38336 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY Feb 13 15:21:56.374473 sshd-session[4757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:21:56.383221 systemd-logind[1425]: New session 11 of user core. Feb 13 15:21:56.386238 systemd-networkd[1387]: cali5b0bc200a28: Link UP Feb 13 15:21:56.386955 systemd-networkd[1387]: cali5b0bc200a28: Gained carrier Feb 13 15:21:56.388243 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 15:21:56.397198 containerd[1442]: time="2025-02-13T15:21:56.396888354Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:21:56.397198 containerd[1442]: time="2025-02-13T15:21:56.396964037Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:21:56.397198 containerd[1442]: time="2025-02-13T15:21:56.396983238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:21:56.397198 containerd[1442]: time="2025-02-13T15:21:56.397080362Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:21:56.405047 containerd[1442]: 2025-02-13 15:21:55.827 [INFO][4612] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:21:56.405047 containerd[1442]: 2025-02-13 15:21:55.946 [INFO][4612] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--2hpnn-eth0 coredns-7db6d8ff4d- kube-system deed100f-387a-4ac5-9252-5efbd4c9fe2b 806 0 2025-02-13 15:21:33 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-2hpnn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5b0bc200a28 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="4c5f1d60cd8d51b1f48381761277058ae0f51b776e0e6208747f9d0d93c7735a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2hpnn" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--2hpnn-" Feb 13 15:21:56.405047 containerd[1442]: 2025-02-13 15:21:55.946 [INFO][4612] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4c5f1d60cd8d51b1f48381761277058ae0f51b776e0e6208747f9d0d93c7735a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2hpnn" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--2hpnn-eth0" Feb 13 15:21:56.405047 containerd[1442]: 2025-02-13 15:21:56.154 [INFO][4698] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4c5f1d60cd8d51b1f48381761277058ae0f51b776e0e6208747f9d0d93c7735a" HandleID="k8s-pod-network.4c5f1d60cd8d51b1f48381761277058ae0f51b776e0e6208747f9d0d93c7735a" Workload="localhost-k8s-coredns--7db6d8ff4d--2hpnn-eth0" Feb 13 15:21:56.405047 containerd[1442]: 2025-02-13 15:21:56.177 [INFO][4698] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="4c5f1d60cd8d51b1f48381761277058ae0f51b776e0e6208747f9d0d93c7735a" HandleID="k8s-pod-network.4c5f1d60cd8d51b1f48381761277058ae0f51b776e0e6208747f9d0d93c7735a" Workload="localhost-k8s-coredns--7db6d8ff4d--2hpnn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000411000), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-2hpnn", "timestamp":"2025-02-13 15:21:56.154525946 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:21:56.405047 containerd[1442]: 2025-02-13 15:21:56.177 [INFO][4698] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:21:56.405047 containerd[1442]: 2025-02-13 15:21:56.318 [INFO][4698] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 15:21:56.405047 containerd[1442]: 2025-02-13 15:21:56.318 [INFO][4698] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 15:21:56.405047 containerd[1442]: 2025-02-13 15:21:56.321 [INFO][4698] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4c5f1d60cd8d51b1f48381761277058ae0f51b776e0e6208747f9d0d93c7735a" host="localhost" Feb 13 15:21:56.405047 containerd[1442]: 2025-02-13 15:21:56.332 [INFO][4698] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 15:21:56.405047 containerd[1442]: 2025-02-13 15:21:56.346 [INFO][4698] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 15:21:56.405047 containerd[1442]: 2025-02-13 15:21:56.350 [INFO][4698] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 15:21:56.405047 containerd[1442]: 2025-02-13 15:21:56.353 [INFO][4698] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 
host="localhost" Feb 13 15:21:56.405047 containerd[1442]: 2025-02-13 15:21:56.353 [INFO][4698] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4c5f1d60cd8d51b1f48381761277058ae0f51b776e0e6208747f9d0d93c7735a" host="localhost" Feb 13 15:21:56.405047 containerd[1442]: 2025-02-13 15:21:56.356 [INFO][4698] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4c5f1d60cd8d51b1f48381761277058ae0f51b776e0e6208747f9d0d93c7735a Feb 13 15:21:56.405047 containerd[1442]: 2025-02-13 15:21:56.361 [INFO][4698] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4c5f1d60cd8d51b1f48381761277058ae0f51b776e0e6208747f9d0d93c7735a" host="localhost" Feb 13 15:21:56.405047 containerd[1442]: 2025-02-13 15:21:56.369 [INFO][4698] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.4c5f1d60cd8d51b1f48381761277058ae0f51b776e0e6208747f9d0d93c7735a" host="localhost" Feb 13 15:21:56.405047 containerd[1442]: 2025-02-13 15:21:56.369 [INFO][4698] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.4c5f1d60cd8d51b1f48381761277058ae0f51b776e0e6208747f9d0d93c7735a" host="localhost" Feb 13 15:21:56.405047 containerd[1442]: 2025-02-13 15:21:56.369 [INFO][4698] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 15:21:56.405047 containerd[1442]: 2025-02-13 15:21:56.369 [INFO][4698] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="4c5f1d60cd8d51b1f48381761277058ae0f51b776e0e6208747f9d0d93c7735a" HandleID="k8s-pod-network.4c5f1d60cd8d51b1f48381761277058ae0f51b776e0e6208747f9d0d93c7735a" Workload="localhost-k8s-coredns--7db6d8ff4d--2hpnn-eth0" Feb 13 15:21:56.405939 containerd[1442]: 2025-02-13 15:21:56.375 [INFO][4612] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4c5f1d60cd8d51b1f48381761277058ae0f51b776e0e6208747f9d0d93c7735a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2hpnn" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--2hpnn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--2hpnn-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"deed100f-387a-4ac5-9252-5efbd4c9fe2b", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 21, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-2hpnn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5b0bc200a28", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:21:56.405939 containerd[1442]: 2025-02-13 15:21:56.375 [INFO][4612] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="4c5f1d60cd8d51b1f48381761277058ae0f51b776e0e6208747f9d0d93c7735a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2hpnn" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--2hpnn-eth0" Feb 13 15:21:56.405939 containerd[1442]: 2025-02-13 15:21:56.377 [INFO][4612] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5b0bc200a28 ContainerID="4c5f1d60cd8d51b1f48381761277058ae0f51b776e0e6208747f9d0d93c7735a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2hpnn" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--2hpnn-eth0" Feb 13 15:21:56.405939 containerd[1442]: 2025-02-13 15:21:56.387 [INFO][4612] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4c5f1d60cd8d51b1f48381761277058ae0f51b776e0e6208747f9d0d93c7735a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2hpnn" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--2hpnn-eth0" Feb 13 15:21:56.405939 containerd[1442]: 2025-02-13 15:21:56.387 [INFO][4612] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4c5f1d60cd8d51b1f48381761277058ae0f51b776e0e6208747f9d0d93c7735a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2hpnn" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--2hpnn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--2hpnn-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"deed100f-387a-4ac5-9252-5efbd4c9fe2b", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 21, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4c5f1d60cd8d51b1f48381761277058ae0f51b776e0e6208747f9d0d93c7735a", Pod:"coredns-7db6d8ff4d-2hpnn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5b0bc200a28", MAC:"8e:fa:2b:0b:47:b9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:21:56.405939 containerd[1442]: 2025-02-13 15:21:56.402 [INFO][4612] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4c5f1d60cd8d51b1f48381761277058ae0f51b776e0e6208747f9d0d93c7735a" Namespace="kube-system" 
Pod="coredns-7db6d8ff4d-2hpnn" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--2hpnn-eth0" Feb 13 15:21:56.422232 systemd[1]: Started cri-containerd-dfbeac76c82119712525f408302707fe2ccd0f49476eebc04b2ad698626a738d.scope - libcontainer container dfbeac76c82119712525f408302707fe2ccd0f49476eebc04b2ad698626a738d. Feb 13 15:21:56.428644 systemd-networkd[1387]: cali9a7bb031216: Link UP Feb 13 15:21:56.430657 systemd-networkd[1387]: cali9a7bb031216: Gained carrier Feb 13 15:21:56.444291 systemd-resolved[1314]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:21:56.447317 containerd[1442]: time="2025-02-13T15:21:56.447164322Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:21:56.447872 containerd[1442]: time="2025-02-13T15:21:56.447800950Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:21:56.447872 containerd[1442]: time="2025-02-13T15:21:56.447824431Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:21:56.447970 containerd[1442]: time="2025-02-13T15:21:56.447920596Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:21:56.454391 containerd[1442]: 2025-02-13 15:21:55.910 [INFO][4639] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:21:56.454391 containerd[1442]: 2025-02-13 15:21:55.946 [INFO][4639] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--qhltz-eth0 coredns-7db6d8ff4d- kube-system c285582e-9191-432e-99ea-cc7fe7db7fbb 801 0 2025-02-13 15:21:33 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-qhltz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9a7bb031216 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="c58b971505a10b36598c63d5d8286d81d4e48bd42944efa91ddc3b029bc4742a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-qhltz" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--qhltz-" Feb 13 15:21:56.454391 containerd[1442]: 2025-02-13 15:21:55.946 [INFO][4639] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c58b971505a10b36598c63d5d8286d81d4e48bd42944efa91ddc3b029bc4742a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-qhltz" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--qhltz-eth0" Feb 13 15:21:56.454391 containerd[1442]: 2025-02-13 15:21:56.154 [INFO][4696] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c58b971505a10b36598c63d5d8286d81d4e48bd42944efa91ddc3b029bc4742a" HandleID="k8s-pod-network.c58b971505a10b36598c63d5d8286d81d4e48bd42944efa91ddc3b029bc4742a" Workload="localhost-k8s-coredns--7db6d8ff4d--qhltz-eth0" Feb 13 15:21:56.454391 containerd[1442]: 2025-02-13 15:21:56.183 [INFO][4696] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="c58b971505a10b36598c63d5d8286d81d4e48bd42944efa91ddc3b029bc4742a" HandleID="k8s-pod-network.c58b971505a10b36598c63d5d8286d81d4e48bd42944efa91ddc3b029bc4742a" Workload="localhost-k8s-coredns--7db6d8ff4d--qhltz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000261460), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-qhltz", "timestamp":"2025-02-13 15:21:56.154773877 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:21:56.454391 containerd[1442]: 2025-02-13 15:21:56.183 [INFO][4696] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:21:56.454391 containerd[1442]: 2025-02-13 15:21:56.370 [INFO][4696] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 15:21:56.454391 containerd[1442]: 2025-02-13 15:21:56.371 [INFO][4696] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 15:21:56.454391 containerd[1442]: 2025-02-13 15:21:56.375 [INFO][4696] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c58b971505a10b36598c63d5d8286d81d4e48bd42944efa91ddc3b029bc4742a" host="localhost" Feb 13 15:21:56.454391 containerd[1442]: 2025-02-13 15:21:56.383 [INFO][4696] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 15:21:56.454391 containerd[1442]: 2025-02-13 15:21:56.395 [INFO][4696] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 15:21:56.454391 containerd[1442]: 2025-02-13 15:21:56.399 [INFO][4696] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 15:21:56.454391 containerd[1442]: 2025-02-13 15:21:56.402 [INFO][4696] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 
host="localhost" Feb 13 15:21:56.454391 containerd[1442]: 2025-02-13 15:21:56.402 [INFO][4696] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c58b971505a10b36598c63d5d8286d81d4e48bd42944efa91ddc3b029bc4742a" host="localhost" Feb 13 15:21:56.454391 containerd[1442]: 2025-02-13 15:21:56.404 [INFO][4696] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c58b971505a10b36598c63d5d8286d81d4e48bd42944efa91ddc3b029bc4742a Feb 13 15:21:56.454391 containerd[1442]: 2025-02-13 15:21:56.410 [INFO][4696] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c58b971505a10b36598c63d5d8286d81d4e48bd42944efa91ddc3b029bc4742a" host="localhost" Feb 13 15:21:56.454391 containerd[1442]: 2025-02-13 15:21:56.420 [INFO][4696] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.c58b971505a10b36598c63d5d8286d81d4e48bd42944efa91ddc3b029bc4742a" host="localhost" Feb 13 15:21:56.454391 containerd[1442]: 2025-02-13 15:21:56.420 [INFO][4696] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.c58b971505a10b36598c63d5d8286d81d4e48bd42944efa91ddc3b029bc4742a" host="localhost" Feb 13 15:21:56.454391 containerd[1442]: 2025-02-13 15:21:56.420 [INFO][4696] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 15:21:56.454391 containerd[1442]: 2025-02-13 15:21:56.420 [INFO][4696] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="c58b971505a10b36598c63d5d8286d81d4e48bd42944efa91ddc3b029bc4742a" HandleID="k8s-pod-network.c58b971505a10b36598c63d5d8286d81d4e48bd42944efa91ddc3b029bc4742a" Workload="localhost-k8s-coredns--7db6d8ff4d--qhltz-eth0" Feb 13 15:21:56.455982 containerd[1442]: 2025-02-13 15:21:56.423 [INFO][4639] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c58b971505a10b36598c63d5d8286d81d4e48bd42944efa91ddc3b029bc4742a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-qhltz" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--qhltz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--qhltz-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"c285582e-9191-432e-99ea-cc7fe7db7fbb", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 21, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-qhltz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9a7bb031216", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:21:56.455982 containerd[1442]: 2025-02-13 15:21:56.424 [INFO][4639] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="c58b971505a10b36598c63d5d8286d81d4e48bd42944efa91ddc3b029bc4742a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-qhltz" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--qhltz-eth0" Feb 13 15:21:56.455982 containerd[1442]: 2025-02-13 15:21:56.424 [INFO][4639] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9a7bb031216 ContainerID="c58b971505a10b36598c63d5d8286d81d4e48bd42944efa91ddc3b029bc4742a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-qhltz" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--qhltz-eth0" Feb 13 15:21:56.455982 containerd[1442]: 2025-02-13 15:21:56.430 [INFO][4639] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c58b971505a10b36598c63d5d8286d81d4e48bd42944efa91ddc3b029bc4742a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-qhltz" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--qhltz-eth0" Feb 13 15:21:56.455982 containerd[1442]: 2025-02-13 15:21:56.432 [INFO][4639] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c58b971505a10b36598c63d5d8286d81d4e48bd42944efa91ddc3b029bc4742a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-qhltz" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--qhltz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--qhltz-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"c285582e-9191-432e-99ea-cc7fe7db7fbb", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 21, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c58b971505a10b36598c63d5d8286d81d4e48bd42944efa91ddc3b029bc4742a", Pod:"coredns-7db6d8ff4d-qhltz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9a7bb031216", MAC:"86:ea:58:bf:35:4a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:21:56.455982 containerd[1442]: 2025-02-13 15:21:56.451 [INFO][4639] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c58b971505a10b36598c63d5d8286d81d4e48bd42944efa91ddc3b029bc4742a" Namespace="kube-system" 
Pod="coredns-7db6d8ff4d-qhltz" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--qhltz-eth0" Feb 13 15:21:56.478688 systemd[1]: run-netns-cni\x2deddb19e0\x2dd52c\x2d7fa8\x2d46cf\x2d1f479b34f0ac.mount: Deactivated successfully. Feb 13 15:21:56.478790 systemd[1]: run-netns-cni\x2d97f99b22\x2d990b\x2d49d9\x2da011\x2dc93a51c23343.mount: Deactivated successfully. Feb 13 15:21:56.493518 systemd-networkd[1387]: calic9ab272115f: Link UP Feb 13 15:21:56.501147 systemd-networkd[1387]: calic9ab272115f: Gained carrier Feb 13 15:21:56.527129 containerd[1442]: 2025-02-13 15:21:55.928 [INFO][4650] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:21:56.527129 containerd[1442]: 2025-02-13 15:21:55.950 [INFO][4650] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--8vvjv-eth0 csi-node-driver- calico-system 14b2995f-bdfb-4265-9dc0-06ae16e4bb6c 659 0 2025-02-13 15:21:40 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-8vvjv eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calic9ab272115f [] []}} ContainerID="af0074bc2d6e147684c54bf17e756ce2de30b50f02cad34c010800ba270f2644" Namespace="calico-system" Pod="csi-node-driver-8vvjv" WorkloadEndpoint="localhost-k8s-csi--node--driver--8vvjv-" Feb 13 15:21:56.527129 containerd[1442]: 2025-02-13 15:21:55.950 [INFO][4650] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="af0074bc2d6e147684c54bf17e756ce2de30b50f02cad34c010800ba270f2644" Namespace="calico-system" Pod="csi-node-driver-8vvjv" WorkloadEndpoint="localhost-k8s-csi--node--driver--8vvjv-eth0" Feb 13 15:21:56.527129 containerd[1442]: 2025-02-13 
15:21:56.152 [INFO][4699] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="af0074bc2d6e147684c54bf17e756ce2de30b50f02cad34c010800ba270f2644" HandleID="k8s-pod-network.af0074bc2d6e147684c54bf17e756ce2de30b50f02cad34c010800ba270f2644" Workload="localhost-k8s-csi--node--driver--8vvjv-eth0" Feb 13 15:21:56.527129 containerd[1442]: 2025-02-13 15:21:56.183 [INFO][4699] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="af0074bc2d6e147684c54bf17e756ce2de30b50f02cad34c010800ba270f2644" HandleID="k8s-pod-network.af0074bc2d6e147684c54bf17e756ce2de30b50f02cad34c010800ba270f2644" Workload="localhost-k8s-csi--node--driver--8vvjv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400027e2f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-8vvjv", "timestamp":"2025-02-13 15:21:56.152865113 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:21:56.527129 containerd[1442]: 2025-02-13 15:21:56.183 [INFO][4699] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:21:56.527129 containerd[1442]: 2025-02-13 15:21:56.420 [INFO][4699] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 15:21:56.527129 containerd[1442]: 2025-02-13 15:21:56.420 [INFO][4699] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 15:21:56.527129 containerd[1442]: 2025-02-13 15:21:56.424 [INFO][4699] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.af0074bc2d6e147684c54bf17e756ce2de30b50f02cad34c010800ba270f2644" host="localhost" Feb 13 15:21:56.527129 containerd[1442]: 2025-02-13 15:21:56.433 [INFO][4699] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 15:21:56.527129 containerd[1442]: 2025-02-13 15:21:56.439 [INFO][4699] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 15:21:56.527129 containerd[1442]: 2025-02-13 15:21:56.442 [INFO][4699] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 15:21:56.527129 containerd[1442]: 2025-02-13 15:21:56.446 [INFO][4699] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 15:21:56.527129 containerd[1442]: 2025-02-13 15:21:56.446 [INFO][4699] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.af0074bc2d6e147684c54bf17e756ce2de30b50f02cad34c010800ba270f2644" host="localhost" Feb 13 15:21:56.527129 containerd[1442]: 2025-02-13 15:21:56.451 [INFO][4699] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.af0074bc2d6e147684c54bf17e756ce2de30b50f02cad34c010800ba270f2644 Feb 13 15:21:56.527129 containerd[1442]: 2025-02-13 15:21:56.458 [INFO][4699] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.af0074bc2d6e147684c54bf17e756ce2de30b50f02cad34c010800ba270f2644" host="localhost" Feb 13 15:21:56.527129 containerd[1442]: 2025-02-13 15:21:56.470 [INFO][4699] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.af0074bc2d6e147684c54bf17e756ce2de30b50f02cad34c010800ba270f2644" host="localhost" Feb 13 15:21:56.527129 containerd[1442]: 2025-02-13 15:21:56.470 [INFO][4699] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.af0074bc2d6e147684c54bf17e756ce2de30b50f02cad34c010800ba270f2644" host="localhost" Feb 13 15:21:56.527129 containerd[1442]: 2025-02-13 15:21:56.471 [INFO][4699] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 15:21:56.527129 containerd[1442]: 2025-02-13 15:21:56.471 [INFO][4699] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="af0074bc2d6e147684c54bf17e756ce2de30b50f02cad34c010800ba270f2644" HandleID="k8s-pod-network.af0074bc2d6e147684c54bf17e756ce2de30b50f02cad34c010800ba270f2644" Workload="localhost-k8s-csi--node--driver--8vvjv-eth0" Feb 13 15:21:56.527858 containerd[1442]: 2025-02-13 15:21:56.477 [INFO][4650] cni-plugin/k8s.go 386: Populated endpoint ContainerID="af0074bc2d6e147684c54bf17e756ce2de30b50f02cad34c010800ba270f2644" Namespace="calico-system" Pod="csi-node-driver-8vvjv" WorkloadEndpoint="localhost-k8s-csi--node--driver--8vvjv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--8vvjv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"14b2995f-bdfb-4265-9dc0-06ae16e4bb6c", ResourceVersion:"659", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 21, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-8vvjv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic9ab272115f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:21:56.527858 containerd[1442]: 2025-02-13 15:21:56.477 [INFO][4650] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="af0074bc2d6e147684c54bf17e756ce2de30b50f02cad34c010800ba270f2644" Namespace="calico-system" Pod="csi-node-driver-8vvjv" WorkloadEndpoint="localhost-k8s-csi--node--driver--8vvjv-eth0" Feb 13 15:21:56.527858 containerd[1442]: 2025-02-13 15:21:56.477 [INFO][4650] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic9ab272115f ContainerID="af0074bc2d6e147684c54bf17e756ce2de30b50f02cad34c010800ba270f2644" Namespace="calico-system" Pod="csi-node-driver-8vvjv" WorkloadEndpoint="localhost-k8s-csi--node--driver--8vvjv-eth0" Feb 13 15:21:56.527858 containerd[1442]: 2025-02-13 15:21:56.495 [INFO][4650] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="af0074bc2d6e147684c54bf17e756ce2de30b50f02cad34c010800ba270f2644" Namespace="calico-system" Pod="csi-node-driver-8vvjv" WorkloadEndpoint="localhost-k8s-csi--node--driver--8vvjv-eth0" Feb 13 15:21:56.527858 containerd[1442]: 2025-02-13 15:21:56.502 [INFO][4650] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="af0074bc2d6e147684c54bf17e756ce2de30b50f02cad34c010800ba270f2644" Namespace="calico-system" 
Pod="csi-node-driver-8vvjv" WorkloadEndpoint="localhost-k8s-csi--node--driver--8vvjv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--8vvjv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"14b2995f-bdfb-4265-9dc0-06ae16e4bb6c", ResourceVersion:"659", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 21, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"af0074bc2d6e147684c54bf17e756ce2de30b50f02cad34c010800ba270f2644", Pod:"csi-node-driver-8vvjv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic9ab272115f", MAC:"06:a6:b8:1f:28:09", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:21:56.527858 containerd[1442]: 2025-02-13 15:21:56.521 [INFO][4650] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="af0074bc2d6e147684c54bf17e756ce2de30b50f02cad34c010800ba270f2644" Namespace="calico-system" Pod="csi-node-driver-8vvjv" WorkloadEndpoint="localhost-k8s-csi--node--driver--8vvjv-eth0" Feb 13 15:21:56.527858 containerd[1442]: 
time="2025-02-13T15:21:56.527505052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-564fc96ccb-fw9ln,Uid:84c3b521-a067-4602-ad8f-cbca4249dad7,Namespace:calico-apiserver,Attempt:5,} returns sandbox id \"dfbeac76c82119712525f408302707fe2ccd0f49476eebc04b2ad698626a738d\"" Feb 13 15:21:56.539362 containerd[1442]: time="2025-02-13T15:21:56.539327331Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 15:21:56.544860 systemd[1]: Started cri-containerd-4c5f1d60cd8d51b1f48381761277058ae0f51b776e0e6208747f9d0d93c7735a.scope - libcontainer container 4c5f1d60cd8d51b1f48381761277058ae0f51b776e0e6208747f9d0d93c7735a. Feb 13 15:21:56.569335 systemd-networkd[1387]: calib98f1d9ae3d: Link UP Feb 13 15:21:56.569538 systemd-networkd[1387]: calib98f1d9ae3d: Gained carrier Feb 13 15:21:56.581300 systemd-resolved[1314]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:21:56.590709 containerd[1442]: 2025-02-13 15:21:55.918 [INFO][4661] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:21:56.590709 containerd[1442]: 2025-02-13 15:21:55.950 [INFO][4661] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--658db9fb4b--xcbwf-eth0 calico-kube-controllers-658db9fb4b- calico-system 87c71f20-054e-44ba-a99e-c4fefdae6457 807 0 2025-02-13 15:21:40 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:658db9fb4b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-658db9fb4b-xcbwf eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calib98f1d9ae3d [] []}} ContainerID="95c4500cba1198431997c04afbbc15140b076db1498bc3e627ca57b9a3ce4373" 
Namespace="calico-system" Pod="calico-kube-controllers-658db9fb4b-xcbwf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--658db9fb4b--xcbwf-" Feb 13 15:21:56.590709 containerd[1442]: 2025-02-13 15:21:55.951 [INFO][4661] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="95c4500cba1198431997c04afbbc15140b076db1498bc3e627ca57b9a3ce4373" Namespace="calico-system" Pod="calico-kube-controllers-658db9fb4b-xcbwf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--658db9fb4b--xcbwf-eth0" Feb 13 15:21:56.590709 containerd[1442]: 2025-02-13 15:21:56.154 [INFO][4711] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="95c4500cba1198431997c04afbbc15140b076db1498bc3e627ca57b9a3ce4373" HandleID="k8s-pod-network.95c4500cba1198431997c04afbbc15140b076db1498bc3e627ca57b9a3ce4373" Workload="localhost-k8s-calico--kube--controllers--658db9fb4b--xcbwf-eth0" Feb 13 15:21:56.590709 containerd[1442]: 2025-02-13 15:21:56.184 [INFO][4711] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="95c4500cba1198431997c04afbbc15140b076db1498bc3e627ca57b9a3ce4373" HandleID="k8s-pod-network.95c4500cba1198431997c04afbbc15140b076db1498bc3e627ca57b9a3ce4373" Workload="localhost-k8s-calico--kube--controllers--658db9fb4b--xcbwf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000483cb0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-658db9fb4b-xcbwf", "timestamp":"2025-02-13 15:21:56.154133969 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:21:56.590709 containerd[1442]: 2025-02-13 15:21:56.184 [INFO][4711] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Feb 13 15:21:56.590709 containerd[1442]: 2025-02-13 15:21:56.470 [INFO][4711] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 15:21:56.590709 containerd[1442]: 2025-02-13 15:21:56.470 [INFO][4711] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 15:21:56.590709 containerd[1442]: 2025-02-13 15:21:56.482 [INFO][4711] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.95c4500cba1198431997c04afbbc15140b076db1498bc3e627ca57b9a3ce4373" host="localhost" Feb 13 15:21:56.590709 containerd[1442]: 2025-02-13 15:21:56.505 [INFO][4711] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 15:21:56.590709 containerd[1442]: 2025-02-13 15:21:56.525 [INFO][4711] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 15:21:56.590709 containerd[1442]: 2025-02-13 15:21:56.529 [INFO][4711] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 15:21:56.590709 containerd[1442]: 2025-02-13 15:21:56.537 [INFO][4711] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 15:21:56.590709 containerd[1442]: 2025-02-13 15:21:56.537 [INFO][4711] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.95c4500cba1198431997c04afbbc15140b076db1498bc3e627ca57b9a3ce4373" host="localhost" Feb 13 15:21:56.590709 containerd[1442]: 2025-02-13 15:21:56.540 [INFO][4711] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.95c4500cba1198431997c04afbbc15140b076db1498bc3e627ca57b9a3ce4373 Feb 13 15:21:56.590709 containerd[1442]: 2025-02-13 15:21:56.553 [INFO][4711] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.95c4500cba1198431997c04afbbc15140b076db1498bc3e627ca57b9a3ce4373" host="localhost" Feb 13 15:21:56.590709 containerd[1442]: 2025-02-13 15:21:56.561 [INFO][4711] 
ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.95c4500cba1198431997c04afbbc15140b076db1498bc3e627ca57b9a3ce4373" host="localhost" Feb 13 15:21:56.590709 containerd[1442]: 2025-02-13 15:21:56.561 [INFO][4711] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.95c4500cba1198431997c04afbbc15140b076db1498bc3e627ca57b9a3ce4373" host="localhost" Feb 13 15:21:56.590709 containerd[1442]: 2025-02-13 15:21:56.561 [INFO][4711] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 15:21:56.590709 containerd[1442]: 2025-02-13 15:21:56.561 [INFO][4711] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="95c4500cba1198431997c04afbbc15140b076db1498bc3e627ca57b9a3ce4373" HandleID="k8s-pod-network.95c4500cba1198431997c04afbbc15140b076db1498bc3e627ca57b9a3ce4373" Workload="localhost-k8s-calico--kube--controllers--658db9fb4b--xcbwf-eth0" Feb 13 15:21:56.591955 containerd[1442]: 2025-02-13 15:21:56.565 [INFO][4661] cni-plugin/k8s.go 386: Populated endpoint ContainerID="95c4500cba1198431997c04afbbc15140b076db1498bc3e627ca57b9a3ce4373" Namespace="calico-system" Pod="calico-kube-controllers-658db9fb4b-xcbwf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--658db9fb4b--xcbwf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--658db9fb4b--xcbwf-eth0", GenerateName:"calico-kube-controllers-658db9fb4b-", Namespace:"calico-system", SelfLink:"", UID:"87c71f20-054e-44ba-a99e-c4fefdae6457", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 21, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", 
"pod-template-hash":"658db9fb4b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-658db9fb4b-xcbwf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib98f1d9ae3d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:21:56.591955 containerd[1442]: 2025-02-13 15:21:56.565 [INFO][4661] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="95c4500cba1198431997c04afbbc15140b076db1498bc3e627ca57b9a3ce4373" Namespace="calico-system" Pod="calico-kube-controllers-658db9fb4b-xcbwf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--658db9fb4b--xcbwf-eth0" Feb 13 15:21:56.591955 containerd[1442]: 2025-02-13 15:21:56.565 [INFO][4661] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib98f1d9ae3d ContainerID="95c4500cba1198431997c04afbbc15140b076db1498bc3e627ca57b9a3ce4373" Namespace="calico-system" Pod="calico-kube-controllers-658db9fb4b-xcbwf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--658db9fb4b--xcbwf-eth0" Feb 13 15:21:56.591955 containerd[1442]: 2025-02-13 15:21:56.569 [INFO][4661] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="95c4500cba1198431997c04afbbc15140b076db1498bc3e627ca57b9a3ce4373" Namespace="calico-system" Pod="calico-kube-controllers-658db9fb4b-xcbwf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--658db9fb4b--xcbwf-eth0" Feb 
13 15:21:56.591955 containerd[1442]: 2025-02-13 15:21:56.570 [INFO][4661] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="95c4500cba1198431997c04afbbc15140b076db1498bc3e627ca57b9a3ce4373" Namespace="calico-system" Pod="calico-kube-controllers-658db9fb4b-xcbwf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--658db9fb4b--xcbwf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--658db9fb4b--xcbwf-eth0", GenerateName:"calico-kube-controllers-658db9fb4b-", Namespace:"calico-system", SelfLink:"", UID:"87c71f20-054e-44ba-a99e-c4fefdae6457", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 21, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"658db9fb4b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"95c4500cba1198431997c04afbbc15140b076db1498bc3e627ca57b9a3ce4373", Pod:"calico-kube-controllers-658db9fb4b-xcbwf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib98f1d9ae3d", MAC:"da:8b:34:df:0d:f7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:21:56.591955 
containerd[1442]: 2025-02-13 15:21:56.584 [INFO][4661] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="95c4500cba1198431997c04afbbc15140b076db1498bc3e627ca57b9a3ce4373" Namespace="calico-system" Pod="calico-kube-controllers-658db9fb4b-xcbwf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--658db9fb4b--xcbwf-eth0" Feb 13 15:21:56.597105 containerd[1442]: time="2025-02-13T15:21:56.596796336Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:21:56.597105 containerd[1442]: time="2025-02-13T15:21:56.596858619Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:21:56.597105 containerd[1442]: time="2025-02-13T15:21:56.596873419Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:21:56.597105 containerd[1442]: time="2025-02-13T15:21:56.596970704Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:21:56.628094 containerd[1442]: time="2025-02-13T15:21:56.624950333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2hpnn,Uid:deed100f-387a-4ac5-9252-5efbd4c9fe2b,Namespace:kube-system,Attempt:5,} returns sandbox id \"4c5f1d60cd8d51b1f48381761277058ae0f51b776e0e6208747f9d0d93c7735a\"" Feb 13 15:21:56.635462 systemd-networkd[1387]: calibe18846580d: Link UP Feb 13 15:21:56.636301 systemd-networkd[1387]: calibe18846580d: Gained carrier Feb 13 15:21:56.637533 kubelet[2621]: E0213 15:21:56.637191 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:21:56.643229 systemd[1]: Started cri-containerd-af0074bc2d6e147684c54bf17e756ce2de30b50f02cad34c010800ba270f2644.scope - libcontainer container af0074bc2d6e147684c54bf17e756ce2de30b50f02cad34c010800ba270f2644. Feb 13 15:21:56.646373 sshd[4790]: Connection closed by 10.0.0.1 port 38336 Feb 13 15:21:56.647194 sshd-session[4757]: pam_unix(sshd:session): session closed for user core Feb 13 15:21:56.651839 containerd[1442]: time="2025-02-13T15:21:56.651497139Z" level=info msg="CreateContainer within sandbox \"4c5f1d60cd8d51b1f48381761277058ae0f51b776e0e6208747f9d0d93c7735a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:21:56.655106 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 15:21:56.658872 containerd[1442]: time="2025-02-13T15:21:56.652627229Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:21:56.658872 containerd[1442]: time="2025-02-13T15:21:56.652678111Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:21:56.658872 containerd[1442]: time="2025-02-13T15:21:56.652689472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:21:56.658872 containerd[1442]: time="2025-02-13T15:21:56.652787196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:21:56.661328 systemd-logind[1425]: Session 11 logged out. Waiting for processes to exit. Feb 13 15:21:56.666504 systemd[1]: sshd@10-10.0.0.35:22-10.0.0.1:38336.service: Deactivated successfully. Feb 13 15:21:56.685580 systemd[1]: Started sshd@11-10.0.0.35:22-10.0.0.1:38344.service - OpenSSH per-connection server daemon (10.0.0.1:38344). Feb 13 15:21:56.688155 systemd-logind[1425]: Removed session 11. Feb 13 15:21:56.690956 containerd[1442]: 2025-02-13 15:21:55.918 [INFO][4676] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:21:56.690956 containerd[1442]: 2025-02-13 15:21:55.947 [INFO][4676] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--564fc96ccb--dqvv5-eth0 calico-apiserver-564fc96ccb- calico-apiserver de1246bc-b473-496c-be26-bb64afe860ad 805 0 2025-02-13 15:21:39 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:564fc96ccb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-564fc96ccb-dqvv5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibe18846580d [] []}} ContainerID="07b24dba0ee0aaa95470af4ff08db6d70796e7ac1b1083f503c3f982f5e5d1c1" Namespace="calico-apiserver" Pod="calico-apiserver-564fc96ccb-dqvv5" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--564fc96ccb--dqvv5-" Feb 13 15:21:56.690956 containerd[1442]: 2025-02-13 15:21:55.947 [INFO][4676] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="07b24dba0ee0aaa95470af4ff08db6d70796e7ac1b1083f503c3f982f5e5d1c1" Namespace="calico-apiserver" Pod="calico-apiserver-564fc96ccb-dqvv5" WorkloadEndpoint="localhost-k8s-calico--apiserver--564fc96ccb--dqvv5-eth0" Feb 13 15:21:56.690956 containerd[1442]: 2025-02-13 15:21:56.152 [INFO][4709] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="07b24dba0ee0aaa95470af4ff08db6d70796e7ac1b1083f503c3f982f5e5d1c1" HandleID="k8s-pod-network.07b24dba0ee0aaa95470af4ff08db6d70796e7ac1b1083f503c3f982f5e5d1c1" Workload="localhost-k8s-calico--apiserver--564fc96ccb--dqvv5-eth0" Feb 13 15:21:56.690956 containerd[1442]: 2025-02-13 15:21:56.188 [INFO][4709] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="07b24dba0ee0aaa95470af4ff08db6d70796e7ac1b1083f503c3f982f5e5d1c1" HandleID="k8s-pod-network.07b24dba0ee0aaa95470af4ff08db6d70796e7ac1b1083f503c3f982f5e5d1c1" Workload="localhost-k8s-calico--apiserver--564fc96ccb--dqvv5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003803b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-564fc96ccb-dqvv5", "timestamp":"2025-02-13 15:21:56.152726747 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:21:56.690956 containerd[1442]: 2025-02-13 15:21:56.188 [INFO][4709] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:21:56.690956 containerd[1442]: 2025-02-13 15:21:56.561 [INFO][4709] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 15:21:56.690956 containerd[1442]: 2025-02-13 15:21:56.561 [INFO][4709] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 15:21:56.690956 containerd[1442]: 2025-02-13 15:21:56.563 [INFO][4709] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.07b24dba0ee0aaa95470af4ff08db6d70796e7ac1b1083f503c3f982f5e5d1c1" host="localhost" Feb 13 15:21:56.690956 containerd[1442]: 2025-02-13 15:21:56.579 [INFO][4709] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 15:21:56.690956 containerd[1442]: 2025-02-13 15:21:56.587 [INFO][4709] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 15:21:56.690956 containerd[1442]: 2025-02-13 15:21:56.592 [INFO][4709] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 15:21:56.690956 containerd[1442]: 2025-02-13 15:21:56.598 [INFO][4709] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 15:21:56.690956 containerd[1442]: 2025-02-13 15:21:56.598 [INFO][4709] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.07b24dba0ee0aaa95470af4ff08db6d70796e7ac1b1083f503c3f982f5e5d1c1" host="localhost" Feb 13 15:21:56.690956 containerd[1442]: 2025-02-13 15:21:56.601 [INFO][4709] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.07b24dba0ee0aaa95470af4ff08db6d70796e7ac1b1083f503c3f982f5e5d1c1 Feb 13 15:21:56.690956 containerd[1442]: 2025-02-13 15:21:56.604 [INFO][4709] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.07b24dba0ee0aaa95470af4ff08db6d70796e7ac1b1083f503c3f982f5e5d1c1" host="localhost" Feb 13 15:21:56.690956 containerd[1442]: 2025-02-13 15:21:56.615 [INFO][4709] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.07b24dba0ee0aaa95470af4ff08db6d70796e7ac1b1083f503c3f982f5e5d1c1" host="localhost" Feb 13 15:21:56.690956 containerd[1442]: 2025-02-13 15:21:56.615 [INFO][4709] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.07b24dba0ee0aaa95470af4ff08db6d70796e7ac1b1083f503c3f982f5e5d1c1" host="localhost" Feb 13 15:21:56.690956 containerd[1442]: 2025-02-13 15:21:56.615 [INFO][4709] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 15:21:56.690956 containerd[1442]: 2025-02-13 15:21:56.615 [INFO][4709] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="07b24dba0ee0aaa95470af4ff08db6d70796e7ac1b1083f503c3f982f5e5d1c1" HandleID="k8s-pod-network.07b24dba0ee0aaa95470af4ff08db6d70796e7ac1b1083f503c3f982f5e5d1c1" Workload="localhost-k8s-calico--apiserver--564fc96ccb--dqvv5-eth0" Feb 13 15:21:56.691450 containerd[1442]: 2025-02-13 15:21:56.624 [INFO][4676] cni-plugin/k8s.go 386: Populated endpoint ContainerID="07b24dba0ee0aaa95470af4ff08db6d70796e7ac1b1083f503c3f982f5e5d1c1" Namespace="calico-apiserver" Pod="calico-apiserver-564fc96ccb-dqvv5" WorkloadEndpoint="localhost-k8s-calico--apiserver--564fc96ccb--dqvv5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--564fc96ccb--dqvv5-eth0", GenerateName:"calico-apiserver-564fc96ccb-", Namespace:"calico-apiserver", SelfLink:"", UID:"de1246bc-b473-496c-be26-bb64afe860ad", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 21, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"564fc96ccb", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-564fc96ccb-dqvv5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibe18846580d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:21:56.691450 containerd[1442]: 2025-02-13 15:21:56.624 [INFO][4676] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="07b24dba0ee0aaa95470af4ff08db6d70796e7ac1b1083f503c3f982f5e5d1c1" Namespace="calico-apiserver" Pod="calico-apiserver-564fc96ccb-dqvv5" WorkloadEndpoint="localhost-k8s-calico--apiserver--564fc96ccb--dqvv5-eth0" Feb 13 15:21:56.691450 containerd[1442]: 2025-02-13 15:21:56.624 [INFO][4676] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibe18846580d ContainerID="07b24dba0ee0aaa95470af4ff08db6d70796e7ac1b1083f503c3f982f5e5d1c1" Namespace="calico-apiserver" Pod="calico-apiserver-564fc96ccb-dqvv5" WorkloadEndpoint="localhost-k8s-calico--apiserver--564fc96ccb--dqvv5-eth0" Feb 13 15:21:56.691450 containerd[1442]: 2025-02-13 15:21:56.638 [INFO][4676] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="07b24dba0ee0aaa95470af4ff08db6d70796e7ac1b1083f503c3f982f5e5d1c1" Namespace="calico-apiserver" Pod="calico-apiserver-564fc96ccb-dqvv5" WorkloadEndpoint="localhost-k8s-calico--apiserver--564fc96ccb--dqvv5-eth0" Feb 13 15:21:56.691450 containerd[1442]: 2025-02-13 15:21:56.640 [INFO][4676] cni-plugin/k8s.go 414: Added Mac, interface name, and active 
container ID to endpoint ContainerID="07b24dba0ee0aaa95470af4ff08db6d70796e7ac1b1083f503c3f982f5e5d1c1" Namespace="calico-apiserver" Pod="calico-apiserver-564fc96ccb-dqvv5" WorkloadEndpoint="localhost-k8s-calico--apiserver--564fc96ccb--dqvv5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--564fc96ccb--dqvv5-eth0", GenerateName:"calico-apiserver-564fc96ccb-", Namespace:"calico-apiserver", SelfLink:"", UID:"de1246bc-b473-496c-be26-bb64afe860ad", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 21, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"564fc96ccb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"07b24dba0ee0aaa95470af4ff08db6d70796e7ac1b1083f503c3f982f5e5d1c1", Pod:"calico-apiserver-564fc96ccb-dqvv5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibe18846580d", MAC:"8e:59:c9:e0:b3:f6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:21:56.691450 containerd[1442]: 2025-02-13 15:21:56.664 [INFO][4676] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="07b24dba0ee0aaa95470af4ff08db6d70796e7ac1b1083f503c3f982f5e5d1c1" Namespace="calico-apiserver" Pod="calico-apiserver-564fc96ccb-dqvv5" WorkloadEndpoint="localhost-k8s-calico--apiserver--564fc96ccb--dqvv5-eth0" Feb 13 15:21:56.709245 containerd[1442]: time="2025-02-13T15:21:56.707166265Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:21:56.709245 containerd[1442]: time="2025-02-13T15:21:56.707314431Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:21:56.709245 containerd[1442]: time="2025-02-13T15:21:56.707726249Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:21:56.709245 containerd[1442]: time="2025-02-13T15:21:56.707828574Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:21:56.715286 systemd-resolved[1314]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:21:56.742981 containerd[1442]: time="2025-02-13T15:21:56.742877674Z" level=info msg="CreateContainer within sandbox \"4c5f1d60cd8d51b1f48381761277058ae0f51b776e0e6208747f9d0d93c7735a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"190e5a7c47905bcac5139faac02bae14e999be2b2738cf70601a6d2a169cece5\"" Feb 13 15:21:56.744789 containerd[1442]: time="2025-02-13T15:21:56.743598305Z" level=info msg="StartContainer for \"190e5a7c47905bcac5139faac02bae14e999be2b2738cf70601a6d2a169cece5\"" Feb 13 15:21:56.750722 sshd[4987]: Accepted publickey for core from 10.0.0.1 port 38344 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY Feb 13 15:21:56.752753 systemd[1]: Started cri-containerd-c58b971505a10b36598c63d5d8286d81d4e48bd42944efa91ddc3b029bc4742a.scope - 
libcontainer container c58b971505a10b36598c63d5d8286d81d4e48bd42944efa91ddc3b029bc4742a. Feb 13 15:21:56.755584 sshd-session[4987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:21:56.760852 containerd[1442]: time="2025-02-13T15:21:56.760048428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8vvjv,Uid:14b2995f-bdfb-4265-9dc0-06ae16e4bb6c,Namespace:calico-system,Attempt:5,} returns sandbox id \"af0074bc2d6e147684c54bf17e756ce2de30b50f02cad34c010800ba270f2644\"" Feb 13 15:21:56.772688 systemd-logind[1425]: New session 12 of user core. Feb 13 15:21:56.774409 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 15:21:56.786269 systemd[1]: Started cri-containerd-95c4500cba1198431997c04afbbc15140b076db1498bc3e627ca57b9a3ce4373.scope - libcontainer container 95c4500cba1198431997c04afbbc15140b076db1498bc3e627ca57b9a3ce4373. Feb 13 15:21:56.788418 containerd[1442]: time="2025-02-13T15:21:56.788210145Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:21:56.788594 containerd[1442]: time="2025-02-13T15:21:56.788380473Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:21:56.788594 containerd[1442]: time="2025-02-13T15:21:56.788400674Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:21:56.789216 containerd[1442]: time="2025-02-13T15:21:56.788946018Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:21:56.804321 systemd-resolved[1314]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:21:56.824752 systemd[1]: Started cri-containerd-190e5a7c47905bcac5139faac02bae14e999be2b2738cf70601a6d2a169cece5.scope - libcontainer container 190e5a7c47905bcac5139faac02bae14e999be2b2738cf70601a6d2a169cece5. Feb 13 15:21:56.847109 systemd-resolved[1314]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:21:56.854789 systemd[1]: Started cri-containerd-07b24dba0ee0aaa95470af4ff08db6d70796e7ac1b1083f503c3f982f5e5d1c1.scope - libcontainer container 07b24dba0ee0aaa95470af4ff08db6d70796e7ac1b1083f503c3f982f5e5d1c1. Feb 13 15:21:56.867311 kubelet[2621]: I0213 15:21:56.866909 2621 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 15:21:56.869457 kubelet[2621]: E0213 15:21:56.869324 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:21:56.878175 containerd[1442]: time="2025-02-13T15:21:56.877947648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qhltz,Uid:c285582e-9191-432e-99ea-cc7fe7db7fbb,Namespace:kube-system,Attempt:5,} returns sandbox id \"c58b971505a10b36598c63d5d8286d81d4e48bd42944efa91ddc3b029bc4742a\"" Feb 13 15:21:56.879492 kubelet[2621]: E0213 15:21:56.879443 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:21:56.890103 containerd[1442]: time="2025-02-13T15:21:56.889040055Z" level=info msg="CreateContainer within sandbox \"c58b971505a10b36598c63d5d8286d81d4e48bd42944efa91ddc3b029bc4742a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 
15:21:56.916398 containerd[1442]: time="2025-02-13T15:21:56.916342294Z" level=info msg="StartContainer for \"190e5a7c47905bcac5139faac02bae14e999be2b2738cf70601a6d2a169cece5\" returns successfully" Feb 13 15:21:56.928152 systemd-resolved[1314]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:21:56.945768 containerd[1442]: time="2025-02-13T15:21:56.945510656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-658db9fb4b-xcbwf,Uid:87c71f20-054e-44ba-a99e-c4fefdae6457,Namespace:calico-system,Attempt:5,} returns sandbox id \"95c4500cba1198431997c04afbbc15140b076db1498bc3e627ca57b9a3ce4373\"" Feb 13 15:21:56.952960 containerd[1442]: time="2025-02-13T15:21:56.951699008Z" level=info msg="CreateContainer within sandbox \"c58b971505a10b36598c63d5d8286d81d4e48bd42944efa91ddc3b029bc4742a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d7569221c9d4b9907d471c4a3c202cba3c57345f7c1dc6ac058e0ade10d49e5b\"" Feb 13 15:21:56.958424 containerd[1442]: time="2025-02-13T15:21:56.958395182Z" level=info msg="StartContainer for \"d7569221c9d4b9907d471c4a3c202cba3c57345f7c1dc6ac058e0ade10d49e5b\"" Feb 13 15:21:56.973044 containerd[1442]: time="2025-02-13T15:21:56.970676762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-564fc96ccb-dqvv5,Uid:de1246bc-b473-496c-be26-bb64afe860ad,Namespace:calico-apiserver,Attempt:5,} returns sandbox id \"07b24dba0ee0aaa95470af4ff08db6d70796e7ac1b1083f503c3f982f5e5d1c1\"" Feb 13 15:21:57.002420 systemd[1]: Started cri-containerd-d7569221c9d4b9907d471c4a3c202cba3c57345f7c1dc6ac058e0ade10d49e5b.scope - libcontainer container d7569221c9d4b9907d471c4a3c202cba3c57345f7c1dc6ac058e0ade10d49e5b. 
Feb 13 15:21:57.063606 sshd[5117]: Connection closed by 10.0.0.1 port 38344 Feb 13 15:21:57.064393 sshd-session[4987]: pam_unix(sshd:session): session closed for user core Feb 13 15:21:57.068400 containerd[1442]: time="2025-02-13T15:21:57.068359613Z" level=info msg="StartContainer for \"d7569221c9d4b9907d471c4a3c202cba3c57345f7c1dc6ac058e0ade10d49e5b\" returns successfully" Feb 13 15:21:57.069917 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 15:21:57.072996 systemd[1]: sshd@11-10.0.0.35:22-10.0.0.1:38344.service: Deactivated successfully. Feb 13 15:21:57.082816 systemd-logind[1425]: Session 12 logged out. Waiting for processes to exit. Feb 13 15:21:57.088640 systemd-logind[1425]: Removed session 12. Feb 13 15:21:57.147104 kernel: bpftool[5295]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 15:21:57.363751 systemd-networkd[1387]: vxlan.calico: Link UP Feb 13 15:21:57.363757 systemd-networkd[1387]: vxlan.calico: Gained carrier Feb 13 15:21:57.568202 systemd-networkd[1387]: calic02573eb110: Gained IPv6LL Feb 13 15:21:57.824231 systemd-networkd[1387]: calic9ab272115f: Gained IPv6LL Feb 13 15:21:57.881315 kubelet[2621]: E0213 15:21:57.881095 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:21:57.889174 systemd-networkd[1387]: cali9a7bb031216: Gained IPv6LL Feb 13 15:21:57.896829 kubelet[2621]: I0213 15:21:57.896693 2621 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-qhltz" podStartSLOduration=24.896663454 podStartE2EDuration="24.896663454s" podCreationTimestamp="2025-02-13 15:21:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:21:57.895471843 +0000 UTC m=+40.595913452" watchObservedRunningTime="2025-02-13 15:21:57.896663454 +0000 UTC 
m=+40.597105063" Feb 13 15:21:57.900736 kubelet[2621]: E0213 15:21:57.900185 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:21:57.918631 kubelet[2621]: I0213 15:21:57.917697 2621 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-2hpnn" podStartSLOduration=24.917659112 podStartE2EDuration="24.917659112s" podCreationTimestamp="2025-02-13 15:21:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:21:57.917337858 +0000 UTC m=+40.617779867" watchObservedRunningTime="2025-02-13 15:21:57.917659112 +0000 UTC m=+40.618100681" Feb 13 15:21:58.144584 systemd-networkd[1387]: cali5b0bc200a28: Gained IPv6LL Feb 13 15:21:58.390892 containerd[1442]: time="2025-02-13T15:21:58.390840005Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:21:58.391569 containerd[1442]: time="2025-02-13T15:21:58.391517433Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Feb 13 15:21:58.392123 containerd[1442]: time="2025-02-13T15:21:58.392095257Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:21:58.394877 containerd[1442]: time="2025-02-13T15:21:58.394538639Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:21:58.395304 containerd[1442]: time="2025-02-13T15:21:58.395272149Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 1.855512559s" Feb 13 15:21:58.395304 containerd[1442]: time="2025-02-13T15:21:58.395300831Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Feb 13 15:21:58.396549 containerd[1442]: time="2025-02-13T15:21:58.396516041Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 15:21:58.397811 containerd[1442]: time="2025-02-13T15:21:58.397784294Z" level=info msg="CreateContainer within sandbox \"dfbeac76c82119712525f408302707fe2ccd0f49476eebc04b2ad698626a738d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 15:21:58.400467 systemd-networkd[1387]: calib98f1d9ae3d: Gained IPv6LL Feb 13 15:21:58.408932 containerd[1442]: time="2025-02-13T15:21:58.408877315Z" level=info msg="CreateContainer within sandbox \"dfbeac76c82119712525f408302707fe2ccd0f49476eebc04b2ad698626a738d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e4e88c57db9ec1b7f929891dc750de61074bafa179d59fec4df3c331c99845d1\"" Feb 13 15:21:58.410181 containerd[1442]: time="2025-02-13T15:21:58.409414057Z" level=info msg="StartContainer for \"e4e88c57db9ec1b7f929891dc750de61074bafa179d59fec4df3c331c99845d1\"" Feb 13 15:21:58.470211 systemd[1]: Started cri-containerd-e4e88c57db9ec1b7f929891dc750de61074bafa179d59fec4df3c331c99845d1.scope - libcontainer container e4e88c57db9ec1b7f929891dc750de61074bafa179d59fec4df3c331c99845d1. 
Feb 13 15:21:58.501266 containerd[1442]: time="2025-02-13T15:21:58.501218795Z" level=info msg="StartContainer for \"e4e88c57db9ec1b7f929891dc750de61074bafa179d59fec4df3c331c99845d1\" returns successfully" Feb 13 15:21:58.592413 systemd-networkd[1387]: calibe18846580d: Gained IPv6LL Feb 13 15:21:58.905649 kubelet[2621]: E0213 15:21:58.905045 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:21:58.905649 kubelet[2621]: E0213 15:21:58.905112 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:21:59.296309 systemd-networkd[1387]: vxlan.calico: Gained IPv6LL Feb 13 15:21:59.689312 containerd[1442]: time="2025-02-13T15:21:59.689261949Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:21:59.690629 containerd[1442]: time="2025-02-13T15:21:59.690584083Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Feb 13 15:21:59.691471 containerd[1442]: time="2025-02-13T15:21:59.691431197Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:21:59.693621 containerd[1442]: time="2025-02-13T15:21:59.693594365Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:21:59.694650 containerd[1442]: time="2025-02-13T15:21:59.694313674Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id 
\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 1.297765871s" Feb 13 15:21:59.694650 containerd[1442]: time="2025-02-13T15:21:59.694339315Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Feb 13 15:21:59.696132 containerd[1442]: time="2025-02-13T15:21:59.695489001Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Feb 13 15:21:59.697629 containerd[1442]: time="2025-02-13T15:21:59.697565285Z" level=info msg="CreateContainer within sandbox \"af0074bc2d6e147684c54bf17e756ce2de30b50f02cad34c010800ba270f2644\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 15:21:59.723516 containerd[1442]: time="2025-02-13T15:21:59.723460613Z" level=info msg="CreateContainer within sandbox \"af0074bc2d6e147684c54bf17e756ce2de30b50f02cad34c010800ba270f2644\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"05da2a9442212de4618b9e50ba0a547b69ef8424c1c6a4e8f6a8c1d8a802518d\"" Feb 13 15:21:59.723969 containerd[1442]: time="2025-02-13T15:21:59.723926432Z" level=info msg="StartContainer for \"05da2a9442212de4618b9e50ba0a547b69ef8424c1c6a4e8f6a8c1d8a802518d\"" Feb 13 15:21:59.753211 systemd[1]: Started cri-containerd-05da2a9442212de4618b9e50ba0a547b69ef8424c1c6a4e8f6a8c1d8a802518d.scope - libcontainer container 05da2a9442212de4618b9e50ba0a547b69ef8424c1c6a4e8f6a8c1d8a802518d. 
Feb 13 15:21:59.780302 containerd[1442]: time="2025-02-13T15:21:59.780183948Z" level=info msg="StartContainer for \"05da2a9442212de4618b9e50ba0a547b69ef8424c1c6a4e8f6a8c1d8a802518d\" returns successfully"
Feb 13 15:21:59.909281 kubelet[2621]: I0213 15:21:59.909251 2621 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 13 15:21:59.909750 kubelet[2621]: E0213 15:21:59.909736 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:21:59.910541 kubelet[2621]: E0213 15:21:59.910523 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:22:00.908435 kubelet[2621]: I0213 15:22:00.908149 2621 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-564fc96ccb-fw9ln" podStartSLOduration=20.0482495 podStartE2EDuration="21.908129728s" podCreationTimestamp="2025-02-13 15:21:39 +0000 UTC" firstStartedPulling="2025-02-13 15:21:56.536491567 +0000 UTC m=+39.236933176" lastFinishedPulling="2025-02-13 15:21:58.396371795 +0000 UTC m=+41.096813404" observedRunningTime="2025-02-13 15:21:58.915400099 +0000 UTC m=+41.615841708" watchObservedRunningTime="2025-02-13 15:22:00.908129728 +0000 UTC m=+43.608571297"
Feb 13 15:22:01.577185 containerd[1442]: time="2025-02-13T15:22:01.576938341Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:22:01.577663 containerd[1442]: time="2025-02-13T15:22:01.577617207Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828"
Feb 13 15:22:01.578371 containerd[1442]: time="2025-02-13T15:22:01.578337115Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:22:01.580950 containerd[1442]: time="2025-02-13T15:22:01.580908773Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:22:01.581390 containerd[1442]: time="2025-02-13T15:22:01.581359071Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 1.885839628s"
Feb 13 15:22:01.581444 containerd[1442]: time="2025-02-13T15:22:01.581413633Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\""
Feb 13 15:22:01.583095 containerd[1442]: time="2025-02-13T15:22:01.582819407Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\""
Feb 13 15:22:01.594614 containerd[1442]: time="2025-02-13T15:22:01.594373890Z" level=info msg="CreateContainer within sandbox \"95c4500cba1198431997c04afbbc15140b076db1498bc3e627ca57b9a3ce4373\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Feb 13 15:22:01.607925 containerd[1442]: time="2025-02-13T15:22:01.607881087Z" level=info msg="CreateContainer within sandbox \"95c4500cba1198431997c04afbbc15140b076db1498bc3e627ca57b9a3ce4373\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"e0525a8da0ee53ebb23ac3c12d85b08e86e1dcaeb6524f1a62cf5b2a9d20c4e2\""
Feb 13 15:22:01.609762 containerd[1442]: time="2025-02-13T15:22:01.608736240Z" level=info msg="StartContainer for \"e0525a8da0ee53ebb23ac3c12d85b08e86e1dcaeb6524f1a62cf5b2a9d20c4e2\""
Feb 13 15:22:01.645244 systemd[1]: Started cri-containerd-e0525a8da0ee53ebb23ac3c12d85b08e86e1dcaeb6524f1a62cf5b2a9d20c4e2.scope - libcontainer container e0525a8da0ee53ebb23ac3c12d85b08e86e1dcaeb6524f1a62cf5b2a9d20c4e2.
Feb 13 15:22:01.682310 containerd[1442]: time="2025-02-13T15:22:01.682268899Z" level=info msg="StartContainer for \"e0525a8da0ee53ebb23ac3c12d85b08e86e1dcaeb6524f1a62cf5b2a9d20c4e2\" returns successfully"
Feb 13 15:22:01.906554 containerd[1442]: time="2025-02-13T15:22:01.906506054Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:22:01.906987 containerd[1442]: time="2025-02-13T15:22:01.906959431Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77"
Feb 13 15:22:01.909500 containerd[1442]: time="2025-02-13T15:22:01.909460847Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 326.599078ms"
Feb 13 15:22:01.909500 containerd[1442]: time="2025-02-13T15:22:01.909501849Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\""
Feb 13 15:22:01.910621 containerd[1442]: time="2025-02-13T15:22:01.910387763Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\""
Feb 13 15:22:01.914403 containerd[1442]: time="2025-02-13T15:22:01.914366355Z" level=info msg="CreateContainer within sandbox \"07b24dba0ee0aaa95470af4ff08db6d70796e7ac1b1083f503c3f982f5e5d1c1\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Feb 13 15:22:01.928586 containerd[1442]: time="2025-02-13T15:22:01.928429134Z" level=info msg="CreateContainer within sandbox \"07b24dba0ee0aaa95470af4ff08db6d70796e7ac1b1083f503c3f982f5e5d1c1\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f1b3c5984e65453bf7eaed1213839529f7ac9b8b869f16e0621f586d15798010\""
Feb 13 15:22:01.929195 containerd[1442]: time="2025-02-13T15:22:01.929158282Z" level=info msg="StartContainer for \"f1b3c5984e65453bf7eaed1213839529f7ac9b8b869f16e0621f586d15798010\""
Feb 13 15:22:01.961219 systemd[1]: Started cri-containerd-f1b3c5984e65453bf7eaed1213839529f7ac9b8b869f16e0621f586d15798010.scope - libcontainer container f1b3c5984e65453bf7eaed1213839529f7ac9b8b869f16e0621f586d15798010.
Feb 13 15:22:01.997682 containerd[1442]: time="2025-02-13T15:22:01.997635347Z" level=info msg="StartContainer for \"f1b3c5984e65453bf7eaed1213839529f7ac9b8b869f16e0621f586d15798010\" returns successfully"
Feb 13 15:22:02.077177 systemd[1]: Started sshd@12-10.0.0.35:22-10.0.0.1:38350.service - OpenSSH per-connection server daemon (10.0.0.1:38350).
Feb 13 15:22:02.147893 sshd[5557]: Accepted publickey for core from 10.0.0.1 port 38350 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY
Feb 13 15:22:02.150151 sshd-session[5557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:22:02.154487 systemd-logind[1425]: New session 13 of user core.
Feb 13 15:22:02.160192 systemd[1]: Started session-13.scope - Session 13 of User core.
Feb 13 15:22:02.437728 sshd[5560]: Connection closed by 10.0.0.1 port 38350
Feb 13 15:22:02.438879 sshd-session[5557]: pam_unix(sshd:session): session closed for user core
Feb 13 15:22:02.458619 systemd[1]: session-13.scope: Deactivated successfully.
Feb 13 15:22:02.460346 systemd[1]: sshd@12-10.0.0.35:22-10.0.0.1:38350.service: Deactivated successfully.
Feb 13 15:22:02.465095 systemd-logind[1425]: Session 13 logged out. Waiting for processes to exit.
Feb 13 15:22:02.473364 systemd[1]: Started sshd@13-10.0.0.35:22-10.0.0.1:38352.service - OpenSSH per-connection server daemon (10.0.0.1:38352).
Feb 13 15:22:02.474872 systemd-logind[1425]: Removed session 13.
Feb 13 15:22:02.514985 sshd[5572]: Accepted publickey for core from 10.0.0.1 port 38352 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY
Feb 13 15:22:02.515498 sshd-session[5572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:22:02.521102 systemd-logind[1425]: New session 14 of user core.
Feb 13 15:22:02.527185 systemd[1]: Started session-14.scope - Session 14 of User core.
Feb 13 15:22:02.794480 sshd[5574]: Connection closed by 10.0.0.1 port 38352
Feb 13 15:22:02.794949 sshd-session[5572]: pam_unix(sshd:session): session closed for user core
Feb 13 15:22:02.808085 systemd[1]: sshd@13-10.0.0.35:22-10.0.0.1:38352.service: Deactivated successfully.
Feb 13 15:22:02.810432 systemd[1]: session-14.scope: Deactivated successfully.
Feb 13 15:22:02.811968 systemd-logind[1425]: Session 14 logged out. Waiting for processes to exit.
Feb 13 15:22:02.825609 systemd[1]: Started sshd@14-10.0.0.35:22-10.0.0.1:57268.service - OpenSSH per-connection server daemon (10.0.0.1:57268).
Feb 13 15:22:02.828254 systemd-logind[1425]: Removed session 14.
Feb 13 15:22:02.871801 sshd[5592]: Accepted publickey for core from 10.0.0.1 port 57268 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY
Feb 13 15:22:02.873367 sshd-session[5592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:22:02.879821 systemd-logind[1425]: New session 15 of user core.
Feb 13 15:22:02.886217 systemd[1]: Started session-15.scope - Session 15 of User core.
Feb 13 15:22:02.957103 kubelet[2621]: I0213 15:22:02.954894 2621 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-658db9fb4b-xcbwf" podStartSLOduration=18.320405162 podStartE2EDuration="22.954875069s" podCreationTimestamp="2025-02-13 15:21:40 +0000 UTC" firstStartedPulling="2025-02-13 15:21:56.948202494 +0000 UTC m=+39.648644103" lastFinishedPulling="2025-02-13 15:22:01.582672401 +0000 UTC m=+44.283114010" observedRunningTime="2025-02-13 15:22:01.936337477 +0000 UTC m=+44.636779086" watchObservedRunningTime="2025-02-13 15:22:02.954875069 +0000 UTC m=+45.655316718"
Feb 13 15:22:02.958552 kubelet[2621]: I0213 15:22:02.958254 2621 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-564fc96ccb-dqvv5" podStartSLOduration=19.022181552 podStartE2EDuration="23.958239034s" podCreationTimestamp="2025-02-13 15:21:39 +0000 UTC" firstStartedPulling="2025-02-13 15:21:56.974173755 +0000 UTC m=+39.674615364" lastFinishedPulling="2025-02-13 15:22:01.910231237 +0000 UTC m=+44.610672846" observedRunningTime="2025-02-13 15:22:02.954864028 +0000 UTC m=+45.655305637" watchObservedRunningTime="2025-02-13 15:22:02.958239034 +0000 UTC m=+45.658680643"
Feb 13 15:22:03.033880 systemd[1]: run-containerd-runc-k8s.io-e0525a8da0ee53ebb23ac3c12d85b08e86e1dcaeb6524f1a62cf5b2a9d20c4e2-runc.F7T49G.mount: Deactivated successfully.
Feb 13 15:22:03.303340 containerd[1442]: time="2025-02-13T15:22:03.303286491Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:22:03.304335 containerd[1442]: time="2025-02-13T15:22:03.303841311Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368"
Feb 13 15:22:03.305426 containerd[1442]: time="2025-02-13T15:22:03.304970272Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:22:03.310218 containerd[1442]: time="2025-02-13T15:22:03.309422154Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:22:03.329684 containerd[1442]: time="2025-02-13T15:22:03.329638848Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.419213924s"
Feb 13 15:22:03.329990 containerd[1442]: time="2025-02-13T15:22:03.329880817Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\""
Feb 13 15:22:03.332152 containerd[1442]: time="2025-02-13T15:22:03.331981374Z" level=info msg="CreateContainer within sandbox \"af0074bc2d6e147684c54bf17e756ce2de30b50f02cad34c010800ba270f2644\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Feb 13 15:22:03.351430 containerd[1442]: time="2025-02-13T15:22:03.351390599Z" level=info msg="CreateContainer within sandbox \"af0074bc2d6e147684c54bf17e756ce2de30b50f02cad34c010800ba270f2644\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"bacabf642ae00c521b1defe453725dd56888e1a5fd9729a69f29d4bf3ec5fc1a\""
Feb 13 15:22:03.352076 containerd[1442]: time="2025-02-13T15:22:03.352033542Z" level=info msg="StartContainer for \"bacabf642ae00c521b1defe453725dd56888e1a5fd9729a69f29d4bf3ec5fc1a\""
Feb 13 15:22:03.389198 systemd[1]: Started cri-containerd-bacabf642ae00c521b1defe453725dd56888e1a5fd9729a69f29d4bf3ec5fc1a.scope - libcontainer container bacabf642ae00c521b1defe453725dd56888e1a5fd9729a69f29d4bf3ec5fc1a.
Feb 13 15:22:03.431376 containerd[1442]: time="2025-02-13T15:22:03.431331223Z" level=info msg="StartContainer for \"bacabf642ae00c521b1defe453725dd56888e1a5fd9729a69f29d4bf3ec5fc1a\" returns successfully"
Feb 13 15:22:03.955080 kubelet[2621]: I0213 15:22:03.955018 2621 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 13 15:22:03.967429 kubelet[2621]: I0213 15:22:03.965717 2621 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-8vvjv" podStartSLOduration=17.409413325 podStartE2EDuration="23.965701915s" podCreationTimestamp="2025-02-13 15:21:40 +0000 UTC" firstStartedPulling="2025-02-13 15:21:56.774235691 +0000 UTC m=+39.474677300" lastFinishedPulling="2025-02-13 15:22:03.330524281 +0000 UTC m=+46.030965890" observedRunningTime="2025-02-13 15:22:03.965351503 +0000 UTC m=+46.665793112" watchObservedRunningTime="2025-02-13 15:22:03.965701915 +0000 UTC m=+46.666143524"
Feb 13 15:22:04.463435 kubelet[2621]: I0213 15:22:04.463389 2621 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Feb 13 15:22:04.467999 kubelet[2621]: I0213 15:22:04.467962 2621 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Feb 13 15:22:04.501260 sshd[5594]: Connection closed by 10.0.0.1 port 57268
Feb 13 15:22:04.500158 sshd-session[5592]: pam_unix(sshd:session): session closed for user core
Feb 13 15:22:04.510734 systemd[1]: sshd@14-10.0.0.35:22-10.0.0.1:57268.service: Deactivated successfully.
Feb 13 15:22:04.515553 systemd[1]: session-15.scope: Deactivated successfully.
Feb 13 15:22:04.518144 systemd-logind[1425]: Session 15 logged out. Waiting for processes to exit.
Feb 13 15:22:04.528808 systemd[1]: Started sshd@15-10.0.0.35:22-10.0.0.1:57274.service - OpenSSH per-connection server daemon (10.0.0.1:57274).
Feb 13 15:22:04.531845 systemd-logind[1425]: Removed session 15.
Feb 13 15:22:04.570740 sshd[5678]: Accepted publickey for core from 10.0.0.1 port 57274 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY
Feb 13 15:22:04.572367 sshd-session[5678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:22:04.576772 systemd-logind[1425]: New session 16 of user core.
Feb 13 15:22:04.587205 systemd[1]: Started session-16.scope - Session 16 of User core.
Feb 13 15:22:04.867248 sshd[5681]: Connection closed by 10.0.0.1 port 57274
Feb 13 15:22:04.868531 sshd-session[5678]: pam_unix(sshd:session): session closed for user core
Feb 13 15:22:04.877470 systemd[1]: sshd@15-10.0.0.35:22-10.0.0.1:57274.service: Deactivated successfully.
Feb 13 15:22:04.879135 systemd[1]: session-16.scope: Deactivated successfully.
Feb 13 15:22:04.881619 systemd-logind[1425]: Session 16 logged out. Waiting for processes to exit.
Feb 13 15:22:04.887413 systemd[1]: Started sshd@16-10.0.0.35:22-10.0.0.1:57284.service - OpenSSH per-connection server daemon (10.0.0.1:57284).
Feb 13 15:22:04.888304 systemd-logind[1425]: Removed session 16.
Feb 13 15:22:04.927765 sshd[5692]: Accepted publickey for core from 10.0.0.1 port 57284 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY
Feb 13 15:22:04.929191 sshd-session[5692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:22:04.934528 systemd-logind[1425]: New session 17 of user core.
Feb 13 15:22:04.943224 systemd[1]: Started session-17.scope - Session 17 of User core.
Feb 13 15:22:05.080319 sshd[5694]: Connection closed by 10.0.0.1 port 57284
Feb 13 15:22:05.080679 sshd-session[5692]: pam_unix(sshd:session): session closed for user core
Feb 13 15:22:05.083825 systemd[1]: sshd@16-10.0.0.35:22-10.0.0.1:57284.service: Deactivated successfully.
Feb 13 15:22:05.087004 systemd[1]: session-17.scope: Deactivated successfully.
Feb 13 15:22:05.088204 systemd-logind[1425]: Session 17 logged out. Waiting for processes to exit.
Feb 13 15:22:05.089603 systemd-logind[1425]: Removed session 17.
Feb 13 15:22:10.094375 systemd[1]: Started sshd@17-10.0.0.35:22-10.0.0.1:57292.service - OpenSSH per-connection server daemon (10.0.0.1:57292).
Feb 13 15:22:10.144732 sshd[5721]: Accepted publickey for core from 10.0.0.1 port 57292 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY
Feb 13 15:22:10.146321 sshd-session[5721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:22:10.155335 systemd-logind[1425]: New session 18 of user core.
Feb 13 15:22:10.158628 systemd[1]: Started session-18.scope - Session 18 of User core.
Feb 13 15:22:10.305986 sshd[5723]: Connection closed by 10.0.0.1 port 57292
Feb 13 15:22:10.306617 sshd-session[5721]: pam_unix(sshd:session): session closed for user core
Feb 13 15:22:10.312068 systemd[1]: sshd@17-10.0.0.35:22-10.0.0.1:57292.service: Deactivated successfully.
Feb 13 15:22:10.317166 systemd-logind[1425]: Session 18 logged out. Waiting for processes to exit.
Feb 13 15:22:10.317730 systemd[1]: session-18.scope: Deactivated successfully.
Feb 13 15:22:10.323614 systemd-logind[1425]: Removed session 18.
Feb 13 15:22:10.417724 kubelet[2621]: I0213 15:22:10.417667 2621 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 13 15:22:10.418453 kubelet[2621]: E0213 15:22:10.418435 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:22:10.971569 kubelet[2621]: E0213 15:22:10.971527 2621 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:22:15.321894 systemd[1]: Started sshd@18-10.0.0.35:22-10.0.0.1:53298.service - OpenSSH per-connection server daemon (10.0.0.1:53298).
Feb 13 15:22:15.374812 sshd[5783]: Accepted publickey for core from 10.0.0.1 port 53298 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY
Feb 13 15:22:15.376419 sshd-session[5783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:22:15.381094 systemd-logind[1425]: New session 19 of user core.
Feb 13 15:22:15.387332 systemd[1]: Started session-19.scope - Session 19 of User core.
Feb 13 15:22:15.550771 sshd[5785]: Connection closed by 10.0.0.1 port 53298
Feb 13 15:22:15.551125 sshd-session[5783]: pam_unix(sshd:session): session closed for user core
Feb 13 15:22:15.554603 systemd[1]: sshd@18-10.0.0.35:22-10.0.0.1:53298.service: Deactivated successfully.
Feb 13 15:22:15.556486 systemd[1]: session-19.scope: Deactivated successfully.
Feb 13 15:22:15.557216 systemd-logind[1425]: Session 19 logged out. Waiting for processes to exit.
Feb 13 15:22:15.557918 systemd-logind[1425]: Removed session 19.
Feb 13 15:22:17.387119 containerd[1442]: time="2025-02-13T15:22:17.387078905Z" level=info msg="StopPodSandbox for \"892160d9b3fdcfef5dfbb8a15226ee03d30c00255d1c8fffacad07d070348325\""
Feb 13 15:22:17.387458 containerd[1442]: time="2025-02-13T15:22:17.387189308Z" level=info msg="TearDown network for sandbox \"892160d9b3fdcfef5dfbb8a15226ee03d30c00255d1c8fffacad07d070348325\" successfully"
Feb 13 15:22:17.387458 containerd[1442]: time="2025-02-13T15:22:17.387200788Z" level=info msg="StopPodSandbox for \"892160d9b3fdcfef5dfbb8a15226ee03d30c00255d1c8fffacad07d070348325\" returns successfully"
Feb 13 15:22:17.388804 containerd[1442]: time="2025-02-13T15:22:17.387671040Z" level=info msg="RemovePodSandbox for \"892160d9b3fdcfef5dfbb8a15226ee03d30c00255d1c8fffacad07d070348325\""
Feb 13 15:22:17.388804 containerd[1442]: time="2025-02-13T15:22:17.387703321Z" level=info msg="Forcibly stopping sandbox \"892160d9b3fdcfef5dfbb8a15226ee03d30c00255d1c8fffacad07d070348325\""
Feb 13 15:22:17.388804 containerd[1442]: time="2025-02-13T15:22:17.387766482Z" level=info msg="TearDown network for sandbox \"892160d9b3fdcfef5dfbb8a15226ee03d30c00255d1c8fffacad07d070348325\" successfully"
Feb 13 15:22:17.398883 containerd[1442]: time="2025-02-13T15:22:17.398837603Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"892160d9b3fdcfef5dfbb8a15226ee03d30c00255d1c8fffacad07d070348325\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:22:17.399047 containerd[1442]: time="2025-02-13T15:22:17.399011128Z" level=info msg="RemovePodSandbox \"892160d9b3fdcfef5dfbb8a15226ee03d30c00255d1c8fffacad07d070348325\" returns successfully"
Feb 13 15:22:17.399515 containerd[1442]: time="2025-02-13T15:22:17.399489340Z" level=info msg="StopPodSandbox for \"a3d2c5311b68c754ed4f52b532c1dbe545a75c82f3b62899ff9015ee5110f3ad\""
Feb 13 15:22:17.399597 containerd[1442]: time="2025-02-13T15:22:17.399581062Z" level=info msg="TearDown network for sandbox \"a3d2c5311b68c754ed4f52b532c1dbe545a75c82f3b62899ff9015ee5110f3ad\" successfully"
Feb 13 15:22:17.399597 containerd[1442]: time="2025-02-13T15:22:17.399593782Z" level=info msg="StopPodSandbox for \"a3d2c5311b68c754ed4f52b532c1dbe545a75c82f3b62899ff9015ee5110f3ad\" returns successfully"
Feb 13 15:22:17.401112 containerd[1442]: time="2025-02-13T15:22:17.399911030Z" level=info msg="RemovePodSandbox for \"a3d2c5311b68c754ed4f52b532c1dbe545a75c82f3b62899ff9015ee5110f3ad\""
Feb 13 15:22:17.401112 containerd[1442]: time="2025-02-13T15:22:17.399938551Z" level=info msg="Forcibly stopping sandbox \"a3d2c5311b68c754ed4f52b532c1dbe545a75c82f3b62899ff9015ee5110f3ad\""
Feb 13 15:22:17.401112 containerd[1442]: time="2025-02-13T15:22:17.400006113Z" level=info msg="TearDown network for sandbox \"a3d2c5311b68c754ed4f52b532c1dbe545a75c82f3b62899ff9015ee5110f3ad\" successfully"
Feb 13 15:22:17.402483 containerd[1442]: time="2025-02-13T15:22:17.402365413Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a3d2c5311b68c754ed4f52b532c1dbe545a75c82f3b62899ff9015ee5110f3ad\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:22:17.402483 containerd[1442]: time="2025-02-13T15:22:17.402417214Z" level=info msg="RemovePodSandbox \"a3d2c5311b68c754ed4f52b532c1dbe545a75c82f3b62899ff9015ee5110f3ad\" returns successfully"
Feb 13 15:22:17.402888 containerd[1442]: time="2025-02-13T15:22:17.402864545Z" level=info msg="StopPodSandbox for \"74cd138a61eecc2e6903a038baafdc496135da9e7c240b4fd5953c85a00d83ec\""
Feb 13 15:22:17.402978 containerd[1442]: time="2025-02-13T15:22:17.402962228Z" level=info msg="TearDown network for sandbox \"74cd138a61eecc2e6903a038baafdc496135da9e7c240b4fd5953c85a00d83ec\" successfully"
Feb 13 15:22:17.403013 containerd[1442]: time="2025-02-13T15:22:17.402977428Z" level=info msg="StopPodSandbox for \"74cd138a61eecc2e6903a038baafdc496135da9e7c240b4fd5953c85a00d83ec\" returns successfully"
Feb 13 15:22:17.403293 containerd[1442]: time="2025-02-13T15:22:17.403271116Z" level=info msg="RemovePodSandbox for \"74cd138a61eecc2e6903a038baafdc496135da9e7c240b4fd5953c85a00d83ec\""
Feb 13 15:22:17.403327 containerd[1442]: time="2025-02-13T15:22:17.403301156Z" level=info msg="Forcibly stopping sandbox \"74cd138a61eecc2e6903a038baafdc496135da9e7c240b4fd5953c85a00d83ec\""
Feb 13 15:22:17.403377 containerd[1442]: time="2025-02-13T15:22:17.403363278Z" level=info msg="TearDown network for sandbox \"74cd138a61eecc2e6903a038baafdc496135da9e7c240b4fd5953c85a00d83ec\" successfully"
Feb 13 15:22:17.405515 containerd[1442]: time="2025-02-13T15:22:17.405476052Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"74cd138a61eecc2e6903a038baafdc496135da9e7c240b4fd5953c85a00d83ec\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:22:17.405579 containerd[1442]: time="2025-02-13T15:22:17.405532933Z" level=info msg="RemovePodSandbox \"74cd138a61eecc2e6903a038baafdc496135da9e7c240b4fd5953c85a00d83ec\" returns successfully"
Feb 13 15:22:17.405937 containerd[1442]: time="2025-02-13T15:22:17.405901262Z" level=info msg="StopPodSandbox for \"d3f7525b6c6753b8ce43153e40018478056e9589dd30a82916f36bcfc6177342\""
Feb 13 15:22:17.406078 containerd[1442]: time="2025-02-13T15:22:17.406046626Z" level=info msg="TearDown network for sandbox \"d3f7525b6c6753b8ce43153e40018478056e9589dd30a82916f36bcfc6177342\" successfully"
Feb 13 15:22:17.406078 containerd[1442]: time="2025-02-13T15:22:17.406066987Z" level=info msg="StopPodSandbox for \"d3f7525b6c6753b8ce43153e40018478056e9589dd30a82916f36bcfc6177342\" returns successfully"
Feb 13 15:22:17.406385 containerd[1442]: time="2025-02-13T15:22:17.406345154Z" level=info msg="RemovePodSandbox for \"d3f7525b6c6753b8ce43153e40018478056e9589dd30a82916f36bcfc6177342\""
Feb 13 15:22:17.406385 containerd[1442]: time="2025-02-13T15:22:17.406373314Z" level=info msg="Forcibly stopping sandbox \"d3f7525b6c6753b8ce43153e40018478056e9589dd30a82916f36bcfc6177342\""
Feb 13 15:22:17.406464 containerd[1442]: time="2025-02-13T15:22:17.406447396Z" level=info msg="TearDown network for sandbox \"d3f7525b6c6753b8ce43153e40018478056e9589dd30a82916f36bcfc6177342\" successfully"
Feb 13 15:22:17.410475 containerd[1442]: time="2025-02-13T15:22:17.410434177Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d3f7525b6c6753b8ce43153e40018478056e9589dd30a82916f36bcfc6177342\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:22:17.410525 containerd[1442]: time="2025-02-13T15:22:17.410494579Z" level=info msg="RemovePodSandbox \"d3f7525b6c6753b8ce43153e40018478056e9589dd30a82916f36bcfc6177342\" returns successfully"
Feb 13 15:22:17.410941 containerd[1442]: time="2025-02-13T15:22:17.410855588Z" level=info msg="StopPodSandbox for \"47226915c1750f2dcd4281dff0c1822a4cd03cdd3e0248696294da5d0554aa87\""
Feb 13 15:22:17.410995 containerd[1442]: time="2025-02-13T15:22:17.410966551Z" level=info msg="TearDown network for sandbox \"47226915c1750f2dcd4281dff0c1822a4cd03cdd3e0248696294da5d0554aa87\" successfully"
Feb 13 15:22:17.410995 containerd[1442]: time="2025-02-13T15:22:17.410977951Z" level=info msg="StopPodSandbox for \"47226915c1750f2dcd4281dff0c1822a4cd03cdd3e0248696294da5d0554aa87\" returns successfully"
Feb 13 15:22:17.411346 containerd[1442]: time="2025-02-13T15:22:17.411322280Z" level=info msg="RemovePodSandbox for \"47226915c1750f2dcd4281dff0c1822a4cd03cdd3e0248696294da5d0554aa87\""
Feb 13 15:22:17.412567 containerd[1442]: time="2025-02-13T15:22:17.411435923Z" level=info msg="Forcibly stopping sandbox \"47226915c1750f2dcd4281dff0c1822a4cd03cdd3e0248696294da5d0554aa87\""
Feb 13 15:22:17.412567 containerd[1442]: time="2025-02-13T15:22:17.411508325Z" level=info msg="TearDown network for sandbox \"47226915c1750f2dcd4281dff0c1822a4cd03cdd3e0248696294da5d0554aa87\" successfully"
Feb 13 15:22:17.413879 containerd[1442]: time="2025-02-13T15:22:17.413843864Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"47226915c1750f2dcd4281dff0c1822a4cd03cdd3e0248696294da5d0554aa87\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:22:17.413999 containerd[1442]: time="2025-02-13T15:22:17.413981867Z" level=info msg="RemovePodSandbox \"47226915c1750f2dcd4281dff0c1822a4cd03cdd3e0248696294da5d0554aa87\" returns successfully"
Feb 13 15:22:17.414444 containerd[1442]: time="2025-02-13T15:22:17.414422399Z" level=info msg="StopPodSandbox for \"d96ae196292a605b5cd4d41b835a3232ce63bc7c919a65833fa2c6f5c8cba484\""
Feb 13 15:22:17.414519 containerd[1442]: time="2025-02-13T15:22:17.414505401Z" level=info msg="TearDown network for sandbox \"d96ae196292a605b5cd4d41b835a3232ce63bc7c919a65833fa2c6f5c8cba484\" successfully"
Feb 13 15:22:17.414553 containerd[1442]: time="2025-02-13T15:22:17.414518521Z" level=info msg="StopPodSandbox for \"d96ae196292a605b5cd4d41b835a3232ce63bc7c919a65833fa2c6f5c8cba484\" returns successfully"
Feb 13 15:22:17.415925 containerd[1442]: time="2025-02-13T15:22:17.414821369Z" level=info msg="RemovePodSandbox for \"d96ae196292a605b5cd4d41b835a3232ce63bc7c919a65833fa2c6f5c8cba484\""
Feb 13 15:22:17.415925 containerd[1442]: time="2025-02-13T15:22:17.414849129Z" level=info msg="Forcibly stopping sandbox \"d96ae196292a605b5cd4d41b835a3232ce63bc7c919a65833fa2c6f5c8cba484\""
Feb 13 15:22:17.415925 containerd[1442]: time="2025-02-13T15:22:17.414912371Z" level=info msg="TearDown network for sandbox \"d96ae196292a605b5cd4d41b835a3232ce63bc7c919a65833fa2c6f5c8cba484\" successfully"
Feb 13 15:22:17.420778 containerd[1442]: time="2025-02-13T15:22:17.420746799Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d96ae196292a605b5cd4d41b835a3232ce63bc7c919a65833fa2c6f5c8cba484\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:22:17.421356 containerd[1442]: time="2025-02-13T15:22:17.421330654Z" level=info msg="RemovePodSandbox \"d96ae196292a605b5cd4d41b835a3232ce63bc7c919a65833fa2c6f5c8cba484\" returns successfully"
Feb 13 15:22:17.422137 containerd[1442]: time="2025-02-13T15:22:17.421940069Z" level=info msg="StopPodSandbox for \"5fc5ff480b0ac02e88a3e59e79abeae223a5c6ef1c9e47fd64658d555c0f5c92\""
Feb 13 15:22:17.422210 containerd[1442]: time="2025-02-13T15:22:17.422189476Z" level=info msg="TearDown network for sandbox \"5fc5ff480b0ac02e88a3e59e79abeae223a5c6ef1c9e47fd64658d555c0f5c92\" successfully"
Feb 13 15:22:17.422210 containerd[1442]: time="2025-02-13T15:22:17.422206116Z" level=info msg="StopPodSandbox for \"5fc5ff480b0ac02e88a3e59e79abeae223a5c6ef1c9e47fd64658d555c0f5c92\" returns successfully"
Feb 13 15:22:17.422471 containerd[1442]: time="2025-02-13T15:22:17.422448522Z" level=info msg="RemovePodSandbox for \"5fc5ff480b0ac02e88a3e59e79abeae223a5c6ef1c9e47fd64658d555c0f5c92\""
Feb 13 15:22:17.422471 containerd[1442]: time="2025-02-13T15:22:17.422470643Z" level=info msg="Forcibly stopping sandbox \"5fc5ff480b0ac02e88a3e59e79abeae223a5c6ef1c9e47fd64658d555c0f5c92\""
Feb 13 15:22:17.422533 containerd[1442]: time="2025-02-13T15:22:17.422520844Z" level=info msg="TearDown network for sandbox \"5fc5ff480b0ac02e88a3e59e79abeae223a5c6ef1c9e47fd64658d555c0f5c92\" successfully"
Feb 13 15:22:17.433914 containerd[1442]: time="2025-02-13T15:22:17.425927090Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5fc5ff480b0ac02e88a3e59e79abeae223a5c6ef1c9e47fd64658d555c0f5c92\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:22:17.434042 containerd[1442]: time="2025-02-13T15:22:17.433946814Z" level=info msg="RemovePodSandbox \"5fc5ff480b0ac02e88a3e59e79abeae223a5c6ef1c9e47fd64658d555c0f5c92\" returns successfully"
Feb 13 15:22:17.434378 containerd[1442]: time="2025-02-13T15:22:17.434349264Z" level=info msg="StopPodSandbox for \"f2a4be37ee45e0a2e04d64278828b4f8471e3f621e93d94fac9f67321674f84b\""
Feb 13 15:22:17.434534 containerd[1442]: time="2025-02-13T15:22:17.434516668Z" level=info msg="TearDown network for sandbox \"f2a4be37ee45e0a2e04d64278828b4f8471e3f621e93d94fac9f67321674f84b\" successfully"
Feb 13 15:22:17.434568 containerd[1442]: time="2025-02-13T15:22:17.434533789Z" level=info msg="StopPodSandbox for \"f2a4be37ee45e0a2e04d64278828b4f8471e3f621e93d94fac9f67321674f84b\" returns successfully"
Feb 13 15:22:17.434942 containerd[1442]: time="2025-02-13T15:22:17.434922798Z" level=info msg="RemovePodSandbox for \"f2a4be37ee45e0a2e04d64278828b4f8471e3f621e93d94fac9f67321674f84b\""
Feb 13 15:22:17.434992 containerd[1442]: time="2025-02-13T15:22:17.434952719Z" level=info msg="Forcibly stopping sandbox \"f2a4be37ee45e0a2e04d64278828b4f8471e3f621e93d94fac9f67321674f84b\""
Feb 13 15:22:17.435037 containerd[1442]: time="2025-02-13T15:22:17.435014681Z" level=info msg="TearDown network for sandbox \"f2a4be37ee45e0a2e04d64278828b4f8471e3f621e93d94fac9f67321674f84b\" successfully"
Feb 13 15:22:17.437608 containerd[1442]: time="2025-02-13T15:22:17.437540945Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f2a4be37ee45e0a2e04d64278828b4f8471e3f621e93d94fac9f67321674f84b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:22:17.437608 containerd[1442]: time="2025-02-13T15:22:17.437600226Z" level=info msg="RemovePodSandbox \"f2a4be37ee45e0a2e04d64278828b4f8471e3f621e93d94fac9f67321674f84b\" returns successfully"
Feb 13 15:22:17.438033 containerd[1442]: time="2025-02-13T15:22:17.437991076Z" level=info msg="StopPodSandbox for \"8db7eb7e96e9c3fedc10436101d0f44d3179d411a6afb2c8436bc1e4908d2830\""
Feb 13 15:22:17.438210 containerd[1442]: time="2025-02-13T15:22:17.438087759Z" level=info msg="TearDown network for sandbox \"8db7eb7e96e9c3fedc10436101d0f44d3179d411a6afb2c8436bc1e4908d2830\" successfully"
Feb 13 15:22:17.438210 containerd[1442]: time="2025-02-13T15:22:17.438099759Z" level=info msg="StopPodSandbox for \"8db7eb7e96e9c3fedc10436101d0f44d3179d411a6afb2c8436bc1e4908d2830\" returns successfully"
Feb 13 15:22:17.438471 containerd[1442]: time="2025-02-13T15:22:17.438436368Z" level=info msg="RemovePodSandbox for \"8db7eb7e96e9c3fedc10436101d0f44d3179d411a6afb2c8436bc1e4908d2830\""
Feb 13 15:22:17.438471 containerd[1442]: time="2025-02-13T15:22:17.438463608Z" level=info msg="Forcibly stopping sandbox \"8db7eb7e96e9c3fedc10436101d0f44d3179d411a6afb2c8436bc1e4908d2830\""
Feb 13 15:22:17.438620 containerd[1442]: time="2025-02-13T15:22:17.438515290Z" level=info msg="TearDown network for sandbox \"8db7eb7e96e9c3fedc10436101d0f44d3179d411a6afb2c8436bc1e4908d2830\" successfully"
Feb 13 15:22:17.440653 containerd[1442]: time="2025-02-13T15:22:17.440620703Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8db7eb7e96e9c3fedc10436101d0f44d3179d411a6afb2c8436bc1e4908d2830\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:22:17.440731 containerd[1442]: time="2025-02-13T15:22:17.440675944Z" level=info msg="RemovePodSandbox \"8db7eb7e96e9c3fedc10436101d0f44d3179d411a6afb2c8436bc1e4908d2830\" returns successfully" Feb 13 15:22:17.441350 containerd[1442]: time="2025-02-13T15:22:17.441099475Z" level=info msg="StopPodSandbox for \"d04a99dec6ccf9c7326d2d50b6de346e356bee2d6457b2d5c9680fea9a5adb7c\"" Feb 13 15:22:17.441350 containerd[1442]: time="2025-02-13T15:22:17.441182477Z" level=info msg="TearDown network for sandbox \"d04a99dec6ccf9c7326d2d50b6de346e356bee2d6457b2d5c9680fea9a5adb7c\" successfully" Feb 13 15:22:17.441350 containerd[1442]: time="2025-02-13T15:22:17.441191637Z" level=info msg="StopPodSandbox for \"d04a99dec6ccf9c7326d2d50b6de346e356bee2d6457b2d5c9680fea9a5adb7c\" returns successfully" Feb 13 15:22:17.441616 containerd[1442]: time="2025-02-13T15:22:17.441575207Z" level=info msg="RemovePodSandbox for \"d04a99dec6ccf9c7326d2d50b6de346e356bee2d6457b2d5c9680fea9a5adb7c\"" Feb 13 15:22:17.441616 containerd[1442]: time="2025-02-13T15:22:17.441607288Z" level=info msg="Forcibly stopping sandbox \"d04a99dec6ccf9c7326d2d50b6de346e356bee2d6457b2d5c9680fea9a5adb7c\"" Feb 13 15:22:17.441701 containerd[1442]: time="2025-02-13T15:22:17.441671170Z" level=info msg="TearDown network for sandbox \"d04a99dec6ccf9c7326d2d50b6de346e356bee2d6457b2d5c9680fea9a5adb7c\" successfully" Feb 13 15:22:17.450888 containerd[1442]: time="2025-02-13T15:22:17.450843802Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d04a99dec6ccf9c7326d2d50b6de346e356bee2d6457b2d5c9680fea9a5adb7c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:22:17.450945 containerd[1442]: time="2025-02-13T15:22:17.450906284Z" level=info msg="RemovePodSandbox \"d04a99dec6ccf9c7326d2d50b6de346e356bee2d6457b2d5c9680fea9a5adb7c\" returns successfully" Feb 13 15:22:17.451754 containerd[1442]: time="2025-02-13T15:22:17.451717624Z" level=info msg="StopPodSandbox for \"c4baf65475d91bc2c15306315ac291509c501cd40c636754fa7bb756cbcea258\"" Feb 13 15:22:17.451836 containerd[1442]: time="2025-02-13T15:22:17.451812387Z" level=info msg="TearDown network for sandbox \"c4baf65475d91bc2c15306315ac291509c501cd40c636754fa7bb756cbcea258\" successfully" Feb 13 15:22:17.451836 containerd[1442]: time="2025-02-13T15:22:17.451826787Z" level=info msg="StopPodSandbox for \"c4baf65475d91bc2c15306315ac291509c501cd40c636754fa7bb756cbcea258\" returns successfully" Feb 13 15:22:17.452093 containerd[1442]: time="2025-02-13T15:22:17.452072993Z" level=info msg="RemovePodSandbox for \"c4baf65475d91bc2c15306315ac291509c501cd40c636754fa7bb756cbcea258\"" Feb 13 15:22:17.452145 containerd[1442]: time="2025-02-13T15:22:17.452119555Z" level=info msg="Forcibly stopping sandbox \"c4baf65475d91bc2c15306315ac291509c501cd40c636754fa7bb756cbcea258\"" Feb 13 15:22:17.452199 containerd[1442]: time="2025-02-13T15:22:17.452183636Z" level=info msg="TearDown network for sandbox \"c4baf65475d91bc2c15306315ac291509c501cd40c636754fa7bb756cbcea258\" successfully" Feb 13 15:22:17.454245 containerd[1442]: time="2025-02-13T15:22:17.454212448Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c4baf65475d91bc2c15306315ac291509c501cd40c636754fa7bb756cbcea258\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:22:17.454294 containerd[1442]: time="2025-02-13T15:22:17.454260009Z" level=info msg="RemovePodSandbox \"c4baf65475d91bc2c15306315ac291509c501cd40c636754fa7bb756cbcea258\" returns successfully" Feb 13 15:22:17.454747 containerd[1442]: time="2025-02-13T15:22:17.454572297Z" level=info msg="StopPodSandbox for \"222fe074a6a0fa30aabf41e153b4175978e6e304dfcf93e3a4c0de8e32655c0a\"" Feb 13 15:22:17.454747 containerd[1442]: time="2025-02-13T15:22:17.454693340Z" level=info msg="TearDown network for sandbox \"222fe074a6a0fa30aabf41e153b4175978e6e304dfcf93e3a4c0de8e32655c0a\" successfully" Feb 13 15:22:17.454747 containerd[1442]: time="2025-02-13T15:22:17.454702540Z" level=info msg="StopPodSandbox for \"222fe074a6a0fa30aabf41e153b4175978e6e304dfcf93e3a4c0de8e32655c0a\" returns successfully" Feb 13 15:22:17.455161 containerd[1442]: time="2025-02-13T15:22:17.455123791Z" level=info msg="RemovePodSandbox for \"222fe074a6a0fa30aabf41e153b4175978e6e304dfcf93e3a4c0de8e32655c0a\"" Feb 13 15:22:17.455161 containerd[1442]: time="2025-02-13T15:22:17.455148071Z" level=info msg="Forcibly stopping sandbox \"222fe074a6a0fa30aabf41e153b4175978e6e304dfcf93e3a4c0de8e32655c0a\"" Feb 13 15:22:17.455236 containerd[1442]: time="2025-02-13T15:22:17.455210993Z" level=info msg="TearDown network for sandbox \"222fe074a6a0fa30aabf41e153b4175978e6e304dfcf93e3a4c0de8e32655c0a\" successfully" Feb 13 15:22:17.457414 containerd[1442]: time="2025-02-13T15:22:17.457378328Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"222fe074a6a0fa30aabf41e153b4175978e6e304dfcf93e3a4c0de8e32655c0a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:22:17.457657 containerd[1442]: time="2025-02-13T15:22:17.457429649Z" level=info msg="RemovePodSandbox \"222fe074a6a0fa30aabf41e153b4175978e6e304dfcf93e3a4c0de8e32655c0a\" returns successfully" Feb 13 15:22:17.459280 containerd[1442]: time="2025-02-13T15:22:17.459228655Z" level=info msg="StopPodSandbox for \"2fb37880992bf25b4ffdc0417195ecffe00d1f91c138c1d050e3c108ce682cc8\"" Feb 13 15:22:17.459362 containerd[1442]: time="2025-02-13T15:22:17.459344658Z" level=info msg="TearDown network for sandbox \"2fb37880992bf25b4ffdc0417195ecffe00d1f91c138c1d050e3c108ce682cc8\" successfully" Feb 13 15:22:17.459393 containerd[1442]: time="2025-02-13T15:22:17.459360418Z" level=info msg="StopPodSandbox for \"2fb37880992bf25b4ffdc0417195ecffe00d1f91c138c1d050e3c108ce682cc8\" returns successfully" Feb 13 15:22:17.459716 containerd[1442]: time="2025-02-13T15:22:17.459645946Z" level=info msg="RemovePodSandbox for \"2fb37880992bf25b4ffdc0417195ecffe00d1f91c138c1d050e3c108ce682cc8\"" Feb 13 15:22:17.459716 containerd[1442]: time="2025-02-13T15:22:17.459674306Z" level=info msg="Forcibly stopping sandbox \"2fb37880992bf25b4ffdc0417195ecffe00d1f91c138c1d050e3c108ce682cc8\"" Feb 13 15:22:17.459791 containerd[1442]: time="2025-02-13T15:22:17.459736828Z" level=info msg="TearDown network for sandbox \"2fb37880992bf25b4ffdc0417195ecffe00d1f91c138c1d050e3c108ce682cc8\" successfully" Feb 13 15:22:17.462652 containerd[1442]: time="2025-02-13T15:22:17.462603181Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2fb37880992bf25b4ffdc0417195ecffe00d1f91c138c1d050e3c108ce682cc8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:22:17.462727 containerd[1442]: time="2025-02-13T15:22:17.462661262Z" level=info msg="RemovePodSandbox \"2fb37880992bf25b4ffdc0417195ecffe00d1f91c138c1d050e3c108ce682cc8\" returns successfully" Feb 13 15:22:17.463626 containerd[1442]: time="2025-02-13T15:22:17.463582605Z" level=info msg="StopPodSandbox for \"11929aadc81cdf1da7658e28ef4d74baafd42d9d50e9e28282ba41964a92a3a7\"" Feb 13 15:22:17.463701 containerd[1442]: time="2025-02-13T15:22:17.463671368Z" level=info msg="TearDown network for sandbox \"11929aadc81cdf1da7658e28ef4d74baafd42d9d50e9e28282ba41964a92a3a7\" successfully" Feb 13 15:22:17.463701 containerd[1442]: time="2025-02-13T15:22:17.463681608Z" level=info msg="StopPodSandbox for \"11929aadc81cdf1da7658e28ef4d74baafd42d9d50e9e28282ba41964a92a3a7\" returns successfully" Feb 13 15:22:17.464060 containerd[1442]: time="2025-02-13T15:22:17.463957735Z" level=info msg="RemovePodSandbox for \"11929aadc81cdf1da7658e28ef4d74baafd42d9d50e9e28282ba41964a92a3a7\"" Feb 13 15:22:17.464060 containerd[1442]: time="2025-02-13T15:22:17.463974935Z" level=info msg="Forcibly stopping sandbox \"11929aadc81cdf1da7658e28ef4d74baafd42d9d50e9e28282ba41964a92a3a7\"" Feb 13 15:22:17.464060 containerd[1442]: time="2025-02-13T15:22:17.464021337Z" level=info msg="TearDown network for sandbox \"11929aadc81cdf1da7658e28ef4d74baafd42d9d50e9e28282ba41964a92a3a7\" successfully" Feb 13 15:22:17.466734 containerd[1442]: time="2025-02-13T15:22:17.466695924Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"11929aadc81cdf1da7658e28ef4d74baafd42d9d50e9e28282ba41964a92a3a7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:22:17.466797 containerd[1442]: time="2025-02-13T15:22:17.466777006Z" level=info msg="RemovePodSandbox \"11929aadc81cdf1da7658e28ef4d74baafd42d9d50e9e28282ba41964a92a3a7\" returns successfully" Feb 13 15:22:17.467152 containerd[1442]: time="2025-02-13T15:22:17.467125695Z" level=info msg="StopPodSandbox for \"625368530c2bc9339bf5122e7b1e46a08e7aa9ae7da5caeccce496924a7d1331\"" Feb 13 15:22:17.467241 containerd[1442]: time="2025-02-13T15:22:17.467222178Z" level=info msg="TearDown network for sandbox \"625368530c2bc9339bf5122e7b1e46a08e7aa9ae7da5caeccce496924a7d1331\" successfully" Feb 13 15:22:17.467241 containerd[1442]: time="2025-02-13T15:22:17.467236898Z" level=info msg="StopPodSandbox for \"625368530c2bc9339bf5122e7b1e46a08e7aa9ae7da5caeccce496924a7d1331\" returns successfully" Feb 13 15:22:17.468063 containerd[1442]: time="2025-02-13T15:22:17.467536106Z" level=info msg="RemovePodSandbox for \"625368530c2bc9339bf5122e7b1e46a08e7aa9ae7da5caeccce496924a7d1331\"" Feb 13 15:22:17.468063 containerd[1442]: time="2025-02-13T15:22:17.467564986Z" level=info msg="Forcibly stopping sandbox \"625368530c2bc9339bf5122e7b1e46a08e7aa9ae7da5caeccce496924a7d1331\"" Feb 13 15:22:17.468063 containerd[1442]: time="2025-02-13T15:22:17.467627148Z" level=info msg="TearDown network for sandbox \"625368530c2bc9339bf5122e7b1e46a08e7aa9ae7da5caeccce496924a7d1331\" successfully" Feb 13 15:22:17.470805 containerd[1442]: time="2025-02-13T15:22:17.470760987Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"625368530c2bc9339bf5122e7b1e46a08e7aa9ae7da5caeccce496924a7d1331\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:22:17.470881 containerd[1442]: time="2025-02-13T15:22:17.470821509Z" level=info msg="RemovePodSandbox \"625368530c2bc9339bf5122e7b1e46a08e7aa9ae7da5caeccce496924a7d1331\" returns successfully" Feb 13 15:22:17.471155 containerd[1442]: time="2025-02-13T15:22:17.471132877Z" level=info msg="StopPodSandbox for \"f77271c961d7b4a7b309d3aeaa83ef8ef8c4ac2d6b4080280f4c6b5d24455417\"" Feb 13 15:22:17.471237 containerd[1442]: time="2025-02-13T15:22:17.471223599Z" level=info msg="TearDown network for sandbox \"f77271c961d7b4a7b309d3aeaa83ef8ef8c4ac2d6b4080280f4c6b5d24455417\" successfully" Feb 13 15:22:17.471267 containerd[1442]: time="2025-02-13T15:22:17.471237280Z" level=info msg="StopPodSandbox for \"f77271c961d7b4a7b309d3aeaa83ef8ef8c4ac2d6b4080280f4c6b5d24455417\" returns successfully" Feb 13 15:22:17.471612 containerd[1442]: time="2025-02-13T15:22:17.471589088Z" level=info msg="RemovePodSandbox for \"f77271c961d7b4a7b309d3aeaa83ef8ef8c4ac2d6b4080280f4c6b5d24455417\"" Feb 13 15:22:17.471648 containerd[1442]: time="2025-02-13T15:22:17.471615249Z" level=info msg="Forcibly stopping sandbox \"f77271c961d7b4a7b309d3aeaa83ef8ef8c4ac2d6b4080280f4c6b5d24455417\"" Feb 13 15:22:17.471689 containerd[1442]: time="2025-02-13T15:22:17.471676971Z" level=info msg="TearDown network for sandbox \"f77271c961d7b4a7b309d3aeaa83ef8ef8c4ac2d6b4080280f4c6b5d24455417\" successfully" Feb 13 15:22:17.474121 containerd[1442]: time="2025-02-13T15:22:17.474084632Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f77271c961d7b4a7b309d3aeaa83ef8ef8c4ac2d6b4080280f4c6b5d24455417\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:22:17.474179 containerd[1442]: time="2025-02-13T15:22:17.474137353Z" level=info msg="RemovePodSandbox \"f77271c961d7b4a7b309d3aeaa83ef8ef8c4ac2d6b4080280f4c6b5d24455417\" returns successfully" Feb 13 15:22:17.474497 containerd[1442]: time="2025-02-13T15:22:17.474464401Z" level=info msg="StopPodSandbox for \"ec5f837bdef6209b21d66e9a11f472ff6e4172b2c0d38f2552319bfe446e9725\"" Feb 13 15:22:17.474578 containerd[1442]: time="2025-02-13T15:22:17.474552884Z" level=info msg="TearDown network for sandbox \"ec5f837bdef6209b21d66e9a11f472ff6e4172b2c0d38f2552319bfe446e9725\" successfully" Feb 13 15:22:17.474578 containerd[1442]: time="2025-02-13T15:22:17.474567244Z" level=info msg="StopPodSandbox for \"ec5f837bdef6209b21d66e9a11f472ff6e4172b2c0d38f2552319bfe446e9725\" returns successfully" Feb 13 15:22:17.474867 containerd[1442]: time="2025-02-13T15:22:17.474835131Z" level=info msg="RemovePodSandbox for \"ec5f837bdef6209b21d66e9a11f472ff6e4172b2c0d38f2552319bfe446e9725\"" Feb 13 15:22:17.474901 containerd[1442]: time="2025-02-13T15:22:17.474866572Z" level=info msg="Forcibly stopping sandbox \"ec5f837bdef6209b21d66e9a11f472ff6e4172b2c0d38f2552319bfe446e9725\"" Feb 13 15:22:17.474947 containerd[1442]: time="2025-02-13T15:22:17.474933733Z" level=info msg="TearDown network for sandbox \"ec5f837bdef6209b21d66e9a11f472ff6e4172b2c0d38f2552319bfe446e9725\" successfully" Feb 13 15:22:17.477305 containerd[1442]: time="2025-02-13T15:22:17.477264392Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ec5f837bdef6209b21d66e9a11f472ff6e4172b2c0d38f2552319bfe446e9725\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:22:17.477334 containerd[1442]: time="2025-02-13T15:22:17.477320754Z" level=info msg="RemovePodSandbox \"ec5f837bdef6209b21d66e9a11f472ff6e4172b2c0d38f2552319bfe446e9725\" returns successfully" Feb 13 15:22:17.477639 containerd[1442]: time="2025-02-13T15:22:17.477605001Z" level=info msg="StopPodSandbox for \"5a990fb5682c636e25fb3a50d8fe0024788508fe708016321710033a0552c448\"" Feb 13 15:22:17.477718 containerd[1442]: time="2025-02-13T15:22:17.477696883Z" level=info msg="TearDown network for sandbox \"5a990fb5682c636e25fb3a50d8fe0024788508fe708016321710033a0552c448\" successfully" Feb 13 15:22:17.477718 containerd[1442]: time="2025-02-13T15:22:17.477711484Z" level=info msg="StopPodSandbox for \"5a990fb5682c636e25fb3a50d8fe0024788508fe708016321710033a0552c448\" returns successfully" Feb 13 15:22:17.478145 containerd[1442]: time="2025-02-13T15:22:17.478116374Z" level=info msg="RemovePodSandbox for \"5a990fb5682c636e25fb3a50d8fe0024788508fe708016321710033a0552c448\"" Feb 13 15:22:17.478176 containerd[1442]: time="2025-02-13T15:22:17.478144375Z" level=info msg="Forcibly stopping sandbox \"5a990fb5682c636e25fb3a50d8fe0024788508fe708016321710033a0552c448\"" Feb 13 15:22:17.478222 containerd[1442]: time="2025-02-13T15:22:17.478208896Z" level=info msg="TearDown network for sandbox \"5a990fb5682c636e25fb3a50d8fe0024788508fe708016321710033a0552c448\" successfully" Feb 13 15:22:17.481006 containerd[1442]: time="2025-02-13T15:22:17.480969046Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5a990fb5682c636e25fb3a50d8fe0024788508fe708016321710033a0552c448\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:22:17.481061 containerd[1442]: time="2025-02-13T15:22:17.481017408Z" level=info msg="RemovePodSandbox \"5a990fb5682c636e25fb3a50d8fe0024788508fe708016321710033a0552c448\" returns successfully" Feb 13 15:22:17.481370 containerd[1442]: time="2025-02-13T15:22:17.481340656Z" level=info msg="StopPodSandbox for \"c9da068fa3f27a54a0b2e673273efa47355288199379af100a264051fdd7317b\"" Feb 13 15:22:17.481449 containerd[1442]: time="2025-02-13T15:22:17.481428338Z" level=info msg="TearDown network for sandbox \"c9da068fa3f27a54a0b2e673273efa47355288199379af100a264051fdd7317b\" successfully" Feb 13 15:22:17.481449 containerd[1442]: time="2025-02-13T15:22:17.481442738Z" level=info msg="StopPodSandbox for \"c9da068fa3f27a54a0b2e673273efa47355288199379af100a264051fdd7317b\" returns successfully" Feb 13 15:22:17.481850 containerd[1442]: time="2025-02-13T15:22:17.481781907Z" level=info msg="RemovePodSandbox for \"c9da068fa3f27a54a0b2e673273efa47355288199379af100a264051fdd7317b\"" Feb 13 15:22:17.481850 containerd[1442]: time="2025-02-13T15:22:17.481822828Z" level=info msg="Forcibly stopping sandbox \"c9da068fa3f27a54a0b2e673273efa47355288199379af100a264051fdd7317b\"" Feb 13 15:22:17.481932 containerd[1442]: time="2025-02-13T15:22:17.481886430Z" level=info msg="TearDown network for sandbox \"c9da068fa3f27a54a0b2e673273efa47355288199379af100a264051fdd7317b\" successfully" Feb 13 15:22:17.484067 containerd[1442]: time="2025-02-13T15:22:17.484015964Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c9da068fa3f27a54a0b2e673273efa47355288199379af100a264051fdd7317b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:22:17.484154 containerd[1442]: time="2025-02-13T15:22:17.484082925Z" level=info msg="RemovePodSandbox \"c9da068fa3f27a54a0b2e673273efa47355288199379af100a264051fdd7317b\" returns successfully" Feb 13 15:22:17.484432 containerd[1442]: time="2025-02-13T15:22:17.484397973Z" level=info msg="StopPodSandbox for \"b2ce9b06d615cd0b7574b238db0c81b74ba7641fc02663b04423eecedbfe567e\"" Feb 13 15:22:17.484505 containerd[1442]: time="2025-02-13T15:22:17.484486296Z" level=info msg="TearDown network for sandbox \"b2ce9b06d615cd0b7574b238db0c81b74ba7641fc02663b04423eecedbfe567e\" successfully" Feb 13 15:22:17.484505 containerd[1442]: time="2025-02-13T15:22:17.484500616Z" level=info msg="StopPodSandbox for \"b2ce9b06d615cd0b7574b238db0c81b74ba7641fc02663b04423eecedbfe567e\" returns successfully" Feb 13 15:22:17.484809 containerd[1442]: time="2025-02-13T15:22:17.484770703Z" level=info msg="RemovePodSandbox for \"b2ce9b06d615cd0b7574b238db0c81b74ba7641fc02663b04423eecedbfe567e\"" Feb 13 15:22:17.484846 containerd[1442]: time="2025-02-13T15:22:17.484807864Z" level=info msg="Forcibly stopping sandbox \"b2ce9b06d615cd0b7574b238db0c81b74ba7641fc02663b04423eecedbfe567e\"" Feb 13 15:22:17.484887 containerd[1442]: time="2025-02-13T15:22:17.484871545Z" level=info msg="TearDown network for sandbox \"b2ce9b06d615cd0b7574b238db0c81b74ba7641fc02663b04423eecedbfe567e\" successfully" Feb 13 15:22:17.487570 containerd[1442]: time="2025-02-13T15:22:17.487527013Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b2ce9b06d615cd0b7574b238db0c81b74ba7641fc02663b04423eecedbfe567e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:22:17.487645 containerd[1442]: time="2025-02-13T15:22:17.487583694Z" level=info msg="RemovePodSandbox \"b2ce9b06d615cd0b7574b238db0c81b74ba7641fc02663b04423eecedbfe567e\" returns successfully" Feb 13 15:22:17.487915 containerd[1442]: time="2025-02-13T15:22:17.487883382Z" level=info msg="StopPodSandbox for \"b21e1a8f17e86fe71cf0ca5f7bf255d3e92b462acaa3367bcccf912c971932fc\"" Feb 13 15:22:17.487977 containerd[1442]: time="2025-02-13T15:22:17.487963584Z" level=info msg="TearDown network for sandbox \"b21e1a8f17e86fe71cf0ca5f7bf255d3e92b462acaa3367bcccf912c971932fc\" successfully" Feb 13 15:22:17.488001 containerd[1442]: time="2025-02-13T15:22:17.487977224Z" level=info msg="StopPodSandbox for \"b21e1a8f17e86fe71cf0ca5f7bf255d3e92b462acaa3367bcccf912c971932fc\" returns successfully" Feb 13 15:22:17.488243 containerd[1442]: time="2025-02-13T15:22:17.488211110Z" level=info msg="RemovePodSandbox for \"b21e1a8f17e86fe71cf0ca5f7bf255d3e92b462acaa3367bcccf912c971932fc\"" Feb 13 15:22:17.488243 containerd[1442]: time="2025-02-13T15:22:17.488240031Z" level=info msg="Forcibly stopping sandbox \"b21e1a8f17e86fe71cf0ca5f7bf255d3e92b462acaa3367bcccf912c971932fc\"" Feb 13 15:22:17.488316 containerd[1442]: time="2025-02-13T15:22:17.488302072Z" level=info msg="TearDown network for sandbox \"b21e1a8f17e86fe71cf0ca5f7bf255d3e92b462acaa3367bcccf912c971932fc\" successfully" Feb 13 15:22:17.490430 containerd[1442]: time="2025-02-13T15:22:17.490379125Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b21e1a8f17e86fe71cf0ca5f7bf255d3e92b462acaa3367bcccf912c971932fc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:22:17.490430 containerd[1442]: time="2025-02-13T15:22:17.490430606Z" level=info msg="RemovePodSandbox \"b21e1a8f17e86fe71cf0ca5f7bf255d3e92b462acaa3367bcccf912c971932fc\" returns successfully" Feb 13 15:22:17.490778 containerd[1442]: time="2025-02-13T15:22:17.490703253Z" level=info msg="StopPodSandbox for \"1290b96754b10ffaddc73a37c85cf3d37997bcbba54c2c7727a2d695f71242d3\"" Feb 13 15:22:17.490832 containerd[1442]: time="2025-02-13T15:22:17.490779735Z" level=info msg="TearDown network for sandbox \"1290b96754b10ffaddc73a37c85cf3d37997bcbba54c2c7727a2d695f71242d3\" successfully" Feb 13 15:22:17.490832 containerd[1442]: time="2025-02-13T15:22:17.490789655Z" level=info msg="StopPodSandbox for \"1290b96754b10ffaddc73a37c85cf3d37997bcbba54c2c7727a2d695f71242d3\" returns successfully" Feb 13 15:22:17.491524 containerd[1442]: time="2025-02-13T15:22:17.491066582Z" level=info msg="RemovePodSandbox for \"1290b96754b10ffaddc73a37c85cf3d37997bcbba54c2c7727a2d695f71242d3\"" Feb 13 15:22:17.491524 containerd[1442]: time="2025-02-13T15:22:17.491090343Z" level=info msg="Forcibly stopping sandbox \"1290b96754b10ffaddc73a37c85cf3d37997bcbba54c2c7727a2d695f71242d3\"" Feb 13 15:22:17.491524 containerd[1442]: time="2025-02-13T15:22:17.491148785Z" level=info msg="TearDown network for sandbox \"1290b96754b10ffaddc73a37c85cf3d37997bcbba54c2c7727a2d695f71242d3\" successfully" Feb 13 15:22:17.493958 containerd[1442]: time="2025-02-13T15:22:17.493484564Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1290b96754b10ffaddc73a37c85cf3d37997bcbba54c2c7727a2d695f71242d3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:22:17.493958 containerd[1442]: time="2025-02-13T15:22:17.493530965Z" level=info msg="RemovePodSandbox \"1290b96754b10ffaddc73a37c85cf3d37997bcbba54c2c7727a2d695f71242d3\" returns successfully" Feb 13 15:22:17.493958 containerd[1442]: time="2025-02-13T15:22:17.493847293Z" level=info msg="StopPodSandbox for \"76d4658126becba445517a90b6b43c46a1c22581049ca91a8e0b6be5d84a6a5b\"" Feb 13 15:22:17.494091 containerd[1442]: time="2025-02-13T15:22:17.493991817Z" level=info msg="TearDown network for sandbox \"76d4658126becba445517a90b6b43c46a1c22581049ca91a8e0b6be5d84a6a5b\" successfully" Feb 13 15:22:17.494091 containerd[1442]: time="2025-02-13T15:22:17.494004417Z" level=info msg="StopPodSandbox for \"76d4658126becba445517a90b6b43c46a1c22581049ca91a8e0b6be5d84a6a5b\" returns successfully" Feb 13 15:22:17.494823 containerd[1442]: time="2025-02-13T15:22:17.494267744Z" level=info msg="RemovePodSandbox for \"76d4658126becba445517a90b6b43c46a1c22581049ca91a8e0b6be5d84a6a5b\"" Feb 13 15:22:17.494823 containerd[1442]: time="2025-02-13T15:22:17.494302385Z" level=info msg="Forcibly stopping sandbox \"76d4658126becba445517a90b6b43c46a1c22581049ca91a8e0b6be5d84a6a5b\"" Feb 13 15:22:17.494823 containerd[1442]: time="2025-02-13T15:22:17.494365226Z" level=info msg="TearDown network for sandbox \"76d4658126becba445517a90b6b43c46a1c22581049ca91a8e0b6be5d84a6a5b\" successfully" Feb 13 15:22:17.496498 containerd[1442]: time="2025-02-13T15:22:17.496353917Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"76d4658126becba445517a90b6b43c46a1c22581049ca91a8e0b6be5d84a6a5b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:22:17.496498 containerd[1442]: time="2025-02-13T15:22:17.496401718Z" level=info msg="RemovePodSandbox \"76d4658126becba445517a90b6b43c46a1c22581049ca91a8e0b6be5d84a6a5b\" returns successfully" Feb 13 15:22:17.496699 containerd[1442]: time="2025-02-13T15:22:17.496663804Z" level=info msg="StopPodSandbox for \"1bc7ba7f91f5547b00e7049a50ddcf60d573692ca1d4d6b3c1d4222df799d5a0\"" Feb 13 15:22:17.496769 containerd[1442]: time="2025-02-13T15:22:17.496744726Z" level=info msg="TearDown network for sandbox \"1bc7ba7f91f5547b00e7049a50ddcf60d573692ca1d4d6b3c1d4222df799d5a0\" successfully" Feb 13 15:22:17.496769 containerd[1442]: time="2025-02-13T15:22:17.496759647Z" level=info msg="StopPodSandbox for \"1bc7ba7f91f5547b00e7049a50ddcf60d573692ca1d4d6b3c1d4222df799d5a0\" returns successfully" Feb 13 15:22:17.497420 containerd[1442]: time="2025-02-13T15:22:17.497084895Z" level=info msg="RemovePodSandbox for \"1bc7ba7f91f5547b00e7049a50ddcf60d573692ca1d4d6b3c1d4222df799d5a0\"" Feb 13 15:22:17.497420 containerd[1442]: time="2025-02-13T15:22:17.497109816Z" level=info msg="Forcibly stopping sandbox \"1bc7ba7f91f5547b00e7049a50ddcf60d573692ca1d4d6b3c1d4222df799d5a0\"" Feb 13 15:22:17.497420 containerd[1442]: time="2025-02-13T15:22:17.497210858Z" level=info msg="TearDown network for sandbox \"1bc7ba7f91f5547b00e7049a50ddcf60d573692ca1d4d6b3c1d4222df799d5a0\" successfully" Feb 13 15:22:17.499280 containerd[1442]: time="2025-02-13T15:22:17.499218509Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1bc7ba7f91f5547b00e7049a50ddcf60d573692ca1d4d6b3c1d4222df799d5a0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:22:17.499280 containerd[1442]: time="2025-02-13T15:22:17.499285271Z" level=info msg="RemovePodSandbox \"1bc7ba7f91f5547b00e7049a50ddcf60d573692ca1d4d6b3c1d4222df799d5a0\" returns successfully" Feb 13 15:22:17.499962 containerd[1442]: time="2025-02-13T15:22:17.499588119Z" level=info msg="StopPodSandbox for \"d39bbd1c579b80885befac14b0958dd16f728a35a72084ca6320a77ccb3b7a7a\"" Feb 13 15:22:17.499962 containerd[1442]: time="2025-02-13T15:22:17.499668121Z" level=info msg="TearDown network for sandbox \"d39bbd1c579b80885befac14b0958dd16f728a35a72084ca6320a77ccb3b7a7a\" successfully" Feb 13 15:22:17.499962 containerd[1442]: time="2025-02-13T15:22:17.499678401Z" level=info msg="StopPodSandbox for \"d39bbd1c579b80885befac14b0958dd16f728a35a72084ca6320a77ccb3b7a7a\" returns successfully" Feb 13 15:22:17.500102 containerd[1442]: time="2025-02-13T15:22:17.500050690Z" level=info msg="RemovePodSandbox for \"d39bbd1c579b80885befac14b0958dd16f728a35a72084ca6320a77ccb3b7a7a\"" Feb 13 15:22:17.500102 containerd[1442]: time="2025-02-13T15:22:17.500075291Z" level=info msg="Forcibly stopping sandbox \"d39bbd1c579b80885befac14b0958dd16f728a35a72084ca6320a77ccb3b7a7a\"" Feb 13 15:22:17.500143 containerd[1442]: time="2025-02-13T15:22:17.500130932Z" level=info msg="TearDown network for sandbox \"d39bbd1c579b80885befac14b0958dd16f728a35a72084ca6320a77ccb3b7a7a\" successfully" Feb 13 15:22:17.513793 containerd[1442]: time="2025-02-13T15:22:17.513747638Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d39bbd1c579b80885befac14b0958dd16f728a35a72084ca6320a77ccb3b7a7a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:22:17.513872 containerd[1442]: time="2025-02-13T15:22:17.513818440Z" level=info msg="RemovePodSandbox \"d39bbd1c579b80885befac14b0958dd16f728a35a72084ca6320a77ccb3b7a7a\" returns successfully" Feb 13 15:22:17.518224 containerd[1442]: time="2025-02-13T15:22:17.518174950Z" level=info msg="StopPodSandbox for \"74aa2dc63f90f0fe66425516e5fd4cd00e009a8fcb748a38adc86fdedb44b077\"" Feb 13 15:22:17.518296 containerd[1442]: time="2025-02-13T15:22:17.518278553Z" level=info msg="TearDown network for sandbox \"74aa2dc63f90f0fe66425516e5fd4cd00e009a8fcb748a38adc86fdedb44b077\" successfully" Feb 13 15:22:17.518339 containerd[1442]: time="2025-02-13T15:22:17.518291593Z" level=info msg="StopPodSandbox for \"74aa2dc63f90f0fe66425516e5fd4cd00e009a8fcb748a38adc86fdedb44b077\" returns successfully" Feb 13 15:22:17.518656 containerd[1442]: time="2025-02-13T15:22:17.518622481Z" level=info msg="RemovePodSandbox for \"74aa2dc63f90f0fe66425516e5fd4cd00e009a8fcb748a38adc86fdedb44b077\"" Feb 13 15:22:17.518656 containerd[1442]: time="2025-02-13T15:22:17.518651242Z" level=info msg="Forcibly stopping sandbox \"74aa2dc63f90f0fe66425516e5fd4cd00e009a8fcb748a38adc86fdedb44b077\"" Feb 13 15:22:17.518751 containerd[1442]: time="2025-02-13T15:22:17.518716004Z" level=info msg="TearDown network for sandbox \"74aa2dc63f90f0fe66425516e5fd4cd00e009a8fcb748a38adc86fdedb44b077\" successfully" Feb 13 15:22:17.521045 containerd[1442]: time="2025-02-13T15:22:17.521010382Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"74aa2dc63f90f0fe66425516e5fd4cd00e009a8fcb748a38adc86fdedb44b077\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:22:17.521097 containerd[1442]: time="2025-02-13T15:22:17.521064463Z" level=info msg="RemovePodSandbox \"74aa2dc63f90f0fe66425516e5fd4cd00e009a8fcb748a38adc86fdedb44b077\" returns successfully" Feb 13 15:22:17.521350 containerd[1442]: time="2025-02-13T15:22:17.521309790Z" level=info msg="StopPodSandbox for \"068df4c4fbceb251808403da7d7f70417a6f0acfb26bbe7e26797b860d1eeaef\"" Feb 13 15:22:17.521445 containerd[1442]: time="2025-02-13T15:22:17.521419752Z" level=info msg="TearDown network for sandbox \"068df4c4fbceb251808403da7d7f70417a6f0acfb26bbe7e26797b860d1eeaef\" successfully" Feb 13 15:22:17.521445 containerd[1442]: time="2025-02-13T15:22:17.521435953Z" level=info msg="StopPodSandbox for \"068df4c4fbceb251808403da7d7f70417a6f0acfb26bbe7e26797b860d1eeaef\" returns successfully" Feb 13 15:22:17.522050 containerd[1442]: time="2025-02-13T15:22:17.521707440Z" level=info msg="RemovePodSandbox for \"068df4c4fbceb251808403da7d7f70417a6f0acfb26bbe7e26797b860d1eeaef\"" Feb 13 15:22:17.522050 containerd[1442]: time="2025-02-13T15:22:17.521739360Z" level=info msg="Forcibly stopping sandbox \"068df4c4fbceb251808403da7d7f70417a6f0acfb26bbe7e26797b860d1eeaef\"" Feb 13 15:22:17.522050 containerd[1442]: time="2025-02-13T15:22:17.521797562Z" level=info msg="TearDown network for sandbox \"068df4c4fbceb251808403da7d7f70417a6f0acfb26bbe7e26797b860d1eeaef\" successfully" Feb 13 15:22:17.523922 containerd[1442]: time="2025-02-13T15:22:17.523888895Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"068df4c4fbceb251808403da7d7f70417a6f0acfb26bbe7e26797b860d1eeaef\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:22:17.523959 containerd[1442]: time="2025-02-13T15:22:17.523935696Z" level=info msg="RemovePodSandbox \"068df4c4fbceb251808403da7d7f70417a6f0acfb26bbe7e26797b860d1eeaef\" returns successfully" Feb 13 15:22:17.524395 containerd[1442]: time="2025-02-13T15:22:17.524228464Z" level=info msg="StopPodSandbox for \"f0b420624c0ef4de8293125d4aa3764bbbc9ad891299e4836ec706ef73b01bc2\"" Feb 13 15:22:17.524487 containerd[1442]: time="2025-02-13T15:22:17.524471430Z" level=info msg="TearDown network for sandbox \"f0b420624c0ef4de8293125d4aa3764bbbc9ad891299e4836ec706ef73b01bc2\" successfully" Feb 13 15:22:17.524514 containerd[1442]: time="2025-02-13T15:22:17.524487150Z" level=info msg="StopPodSandbox for \"f0b420624c0ef4de8293125d4aa3764bbbc9ad891299e4836ec706ef73b01bc2\" returns successfully" Feb 13 15:22:17.525690 containerd[1442]: time="2025-02-13T15:22:17.525656060Z" level=info msg="RemovePodSandbox for \"f0b420624c0ef4de8293125d4aa3764bbbc9ad891299e4836ec706ef73b01bc2\"" Feb 13 15:22:17.525723 containerd[1442]: time="2025-02-13T15:22:17.525690221Z" level=info msg="Forcibly stopping sandbox \"f0b420624c0ef4de8293125d4aa3764bbbc9ad891299e4836ec706ef73b01bc2\"" Feb 13 15:22:17.525778 containerd[1442]: time="2025-02-13T15:22:17.525753182Z" level=info msg="TearDown network for sandbox \"f0b420624c0ef4de8293125d4aa3764bbbc9ad891299e4836ec706ef73b01bc2\" successfully" Feb 13 15:22:17.528416 containerd[1442]: time="2025-02-13T15:22:17.528345448Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f0b420624c0ef4de8293125d4aa3764bbbc9ad891299e4836ec706ef73b01bc2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:22:17.528464 containerd[1442]: time="2025-02-13T15:22:17.528430650Z" level=info msg="RemovePodSandbox \"f0b420624c0ef4de8293125d4aa3764bbbc9ad891299e4836ec706ef73b01bc2\" returns successfully"
Feb 13 15:22:17.528795 containerd[1442]: time="2025-02-13T15:22:17.528769939Z" level=info msg="StopPodSandbox for \"0e00bbd7bcb14cbd456352c063bed067e90a3795d0de450c8bec1e4ae2edb640\""
Feb 13 15:22:17.528879 containerd[1442]: time="2025-02-13T15:22:17.528863541Z" level=info msg="TearDown network for sandbox \"0e00bbd7bcb14cbd456352c063bed067e90a3795d0de450c8bec1e4ae2edb640\" successfully"
Feb 13 15:22:17.528921 containerd[1442]: time="2025-02-13T15:22:17.528878501Z" level=info msg="StopPodSandbox for \"0e00bbd7bcb14cbd456352c063bed067e90a3795d0de450c8bec1e4ae2edb640\" returns successfully"
Feb 13 15:22:17.529695 containerd[1442]: time="2025-02-13T15:22:17.529673402Z" level=info msg="RemovePodSandbox for \"0e00bbd7bcb14cbd456352c063bed067e90a3795d0de450c8bec1e4ae2edb640\""
Feb 13 15:22:17.529736 containerd[1442]: time="2025-02-13T15:22:17.529699322Z" level=info msg="Forcibly stopping sandbox \"0e00bbd7bcb14cbd456352c063bed067e90a3795d0de450c8bec1e4ae2edb640\""
Feb 13 15:22:17.529773 containerd[1442]: time="2025-02-13T15:22:17.529759244Z" level=info msg="TearDown network for sandbox \"0e00bbd7bcb14cbd456352c063bed067e90a3795d0de450c8bec1e4ae2edb640\" successfully"
Feb 13 15:22:17.531881 containerd[1442]: time="2025-02-13T15:22:17.531839417Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0e00bbd7bcb14cbd456352c063bed067e90a3795d0de450c8bec1e4ae2edb640\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:22:17.531944 containerd[1442]: time="2025-02-13T15:22:17.531896458Z" level=info msg="RemovePodSandbox \"0e00bbd7bcb14cbd456352c063bed067e90a3795d0de450c8bec1e4ae2edb640\" returns successfully"
Feb 13 15:22:17.532227 containerd[1442]: time="2025-02-13T15:22:17.532204666Z" level=info msg="StopPodSandbox for \"3f9619cd8018d29949f5b8a88dcba1dde36a35e932ad7eb9d8c70983a1ee1a35\""
Feb 13 15:22:17.532302 containerd[1442]: time="2025-02-13T15:22:17.532284948Z" level=info msg="TearDown network for sandbox \"3f9619cd8018d29949f5b8a88dcba1dde36a35e932ad7eb9d8c70983a1ee1a35\" successfully"
Feb 13 15:22:17.532336 containerd[1442]: time="2025-02-13T15:22:17.532299868Z" level=info msg="StopPodSandbox for \"3f9619cd8018d29949f5b8a88dcba1dde36a35e932ad7eb9d8c70983a1ee1a35\" returns successfully"
Feb 13 15:22:17.532575 containerd[1442]: time="2025-02-13T15:22:17.532553995Z" level=info msg="RemovePodSandbox for \"3f9619cd8018d29949f5b8a88dcba1dde36a35e932ad7eb9d8c70983a1ee1a35\""
Feb 13 15:22:17.532658 containerd[1442]: time="2025-02-13T15:22:17.532582995Z" level=info msg="Forcibly stopping sandbox \"3f9619cd8018d29949f5b8a88dcba1dde36a35e932ad7eb9d8c70983a1ee1a35\""
Feb 13 15:22:17.532658 containerd[1442]: time="2025-02-13T15:22:17.532651517Z" level=info msg="TearDown network for sandbox \"3f9619cd8018d29949f5b8a88dcba1dde36a35e932ad7eb9d8c70983a1ee1a35\" successfully"
Feb 13 15:22:17.534703 containerd[1442]: time="2025-02-13T15:22:17.534677289Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3f9619cd8018d29949f5b8a88dcba1dde36a35e932ad7eb9d8c70983a1ee1a35\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:22:17.534751 containerd[1442]: time="2025-02-13T15:22:17.534724210Z" level=info msg="RemovePodSandbox \"3f9619cd8018d29949f5b8a88dcba1dde36a35e932ad7eb9d8c70983a1ee1a35\" returns successfully"
Feb 13 15:22:20.566218 systemd[1]: Started sshd@19-10.0.0.35:22-10.0.0.1:53314.service - OpenSSH per-connection server daemon (10.0.0.1:53314).
Feb 13 15:22:20.617465 sshd[5834]: Accepted publickey for core from 10.0.0.1 port 53314 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY
Feb 13 15:22:20.618666 sshd-session[5834]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:22:20.622390 systemd-logind[1425]: New session 20 of user core.
Feb 13 15:22:20.632164 systemd[1]: Started session-20.scope - Session 20 of User core.
Feb 13 15:22:20.773423 sshd[5849]: Connection closed by 10.0.0.1 port 53314
Feb 13 15:22:20.774152 sshd-session[5834]: pam_unix(sshd:session): session closed for user core
Feb 13 15:22:20.777504 systemd[1]: sshd@19-10.0.0.35:22-10.0.0.1:53314.service: Deactivated successfully.
Feb 13 15:22:20.780402 systemd[1]: session-20.scope: Deactivated successfully.
Feb 13 15:22:20.781046 systemd-logind[1425]: Session 20 logged out. Waiting for processes to exit.
Feb 13 15:22:20.781807 systemd-logind[1425]: Removed session 20.