Feb 13 15:26:56.904770 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 13 15:26:56.904793 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Thu Feb 13 13:57:00 -00 2025
Feb 13 15:26:56.904811 kernel: KASLR enabled
Feb 13 15:26:56.904817 kernel: efi: EFI v2.7 by EDK II
Feb 13 15:26:56.904822 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbbf018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40d98
Feb 13 15:26:56.904828 kernel: random: crng init done
Feb 13 15:26:56.904835 kernel: secureboot: Secure boot disabled
Feb 13 15:26:56.904841 kernel: ACPI: Early table checksum verification disabled
Feb 13 15:26:56.904847 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Feb 13 15:26:56.904854 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Feb 13 15:26:56.904861 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:26:56.904867 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:26:56.904873 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:26:56.904879 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:26:56.904887 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:26:56.904894 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:26:56.904901 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:26:56.904907 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:26:56.904914 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:26:56.904920 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Feb 13 15:26:56.904926 kernel: NUMA: Failed to initialise from firmware
Feb 13 15:26:56.904933 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 15:26:56.904939 kernel: NUMA: NODE_DATA [mem 0xdc95a800-0xdc95ffff]
Feb 13 15:26:56.904945 kernel: Zone ranges:
Feb 13 15:26:56.904951 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 15:26:56.904959 kernel: DMA32 empty
Feb 13 15:26:56.904965 kernel: Normal empty
Feb 13 15:26:56.904971 kernel: Movable zone start for each node
Feb 13 15:26:56.904977 kernel: Early memory node ranges
Feb 13 15:26:56.904983 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Feb 13 15:26:56.904990 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Feb 13 15:26:56.904996 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Feb 13 15:26:56.905002 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Feb 13 15:26:56.905008 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Feb 13 15:26:56.905014 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Feb 13 15:26:56.905020 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Feb 13 15:26:56.905027 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 15:26:56.905034 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Feb 13 15:26:56.905040 kernel: psci: probing for conduit method from ACPI.
Feb 13 15:26:56.905047 kernel: psci: PSCIv1.1 detected in firmware.
Feb 13 15:26:56.905056 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 15:26:56.905062 kernel: psci: Trusted OS migration not required
Feb 13 15:26:56.905069 kernel: psci: SMC Calling Convention v1.1
Feb 13 15:26:56.905077 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Feb 13 15:26:56.905084 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 15:26:56.905090 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 15:26:56.905097 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Feb 13 15:26:56.905104 kernel: Detected PIPT I-cache on CPU0
Feb 13 15:26:56.905111 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 15:26:56.905118 kernel: CPU features: detected: Hardware dirty bit management
Feb 13 15:26:56.905124 kernel: CPU features: detected: Spectre-v4
Feb 13 15:26:56.905131 kernel: CPU features: detected: Spectre-BHB
Feb 13 15:26:56.905138 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 13 15:26:56.905146 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 13 15:26:56.905152 kernel: CPU features: detected: ARM erratum 1418040
Feb 13 15:26:56.905159 kernel: CPU features: detected: SSBS not fully self-synchronizing
Feb 13 15:26:56.905166 kernel: alternatives: applying boot alternatives
Feb 13 15:26:56.905174 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=07e9b8867aadd0b2e77ba5338d18cdd10706c658e0d745a78e129bcae9a0e4c6
Feb 13 15:26:56.905181 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 15:26:56.905188 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 15:26:56.905195 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 15:26:56.905201 kernel: Fallback order for Node 0: 0
Feb 13 15:26:56.905208 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Feb 13 15:26:56.905215 kernel: Policy zone: DMA
Feb 13 15:26:56.905223 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 15:26:56.905230 kernel: software IO TLB: area num 4.
Feb 13 15:26:56.905237 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Feb 13 15:26:56.905244 kernel: Memory: 2386332K/2572288K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39680K init, 897K bss, 185956K reserved, 0K cma-reserved)
Feb 13 15:26:56.905251 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 13 15:26:56.905258 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 15:26:56.905266 kernel: rcu: RCU event tracing is enabled.
Feb 13 15:26:56.905272 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 13 15:26:56.905279 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 15:26:56.905286 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 15:26:56.905292 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 15:26:56.905299 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 13 15:26:56.905307 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 15:26:56.905313 kernel: GICv3: 256 SPIs implemented
Feb 13 15:26:56.905320 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 15:26:56.905327 kernel: Root IRQ handler: gic_handle_irq
Feb 13 15:26:56.905334 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Feb 13 15:26:56.905340 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Feb 13 15:26:56.905346 kernel: ITS [mem 0x08080000-0x0809ffff]
Feb 13 15:26:56.905353 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 15:26:56.905360 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 15:26:56.905367 kernel: GICv3: using LPI property table @0x00000000400f0000
Feb 13 15:26:56.905374 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Feb 13 15:26:56.905382 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 15:26:56.905389 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 15:26:56.905396 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 13 15:26:56.905403 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 13 15:26:56.905409 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 13 15:26:56.905416 kernel: arm-pv: using stolen time PV
Feb 13 15:26:56.905423 kernel: Console: colour dummy device 80x25
Feb 13 15:26:56.905430 kernel: ACPI: Core revision 20230628
Feb 13 15:26:56.905437 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 13 15:26:56.905444 kernel: pid_max: default: 32768 minimum: 301
Feb 13 15:26:56.905452 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 15:26:56.905459 kernel: landlock: Up and running.
Feb 13 15:26:56.905465 kernel: SELinux: Initializing.
Feb 13 15:26:56.905472 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:26:56.905479 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:26:56.905486 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 15:26:56.905493 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 15:26:56.905500 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 15:26:56.905508 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 15:26:56.905515 kernel: Platform MSI: ITS@0x8080000 domain created
Feb 13 15:26:56.905523 kernel: PCI/MSI: ITS@0x8080000 domain created
Feb 13 15:26:56.905530 kernel: Remapping and enabling EFI services.
Feb 13 15:26:56.905537 kernel: smp: Bringing up secondary CPUs ...
Feb 13 15:26:56.905544 kernel: Detected PIPT I-cache on CPU1
Feb 13 15:26:56.905551 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Feb 13 15:26:56.905558 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Feb 13 15:26:56.905565 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 15:26:56.905572 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 13 15:26:56.905579 kernel: Detected PIPT I-cache on CPU2
Feb 13 15:26:56.905587 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Feb 13 15:26:56.905594 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Feb 13 15:26:56.905606 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 15:26:56.905635 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Feb 13 15:26:56.905643 kernel: Detected PIPT I-cache on CPU3
Feb 13 15:26:56.905651 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Feb 13 15:26:56.905658 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Feb 13 15:26:56.905665 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 15:26:56.905672 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Feb 13 15:26:56.905681 kernel: smp: Brought up 1 node, 4 CPUs
Feb 13 15:26:56.905688 kernel: SMP: Total of 4 processors activated.
Feb 13 15:26:56.905695 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 15:26:56.905703 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 13 15:26:56.905710 kernel: CPU features: detected: Common not Private translations
Feb 13 15:26:56.905717 kernel: CPU features: detected: CRC32 instructions
Feb 13 15:26:56.905724 kernel: CPU features: detected: Enhanced Virtualization Traps
Feb 13 15:26:56.905731 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 13 15:26:56.905740 kernel: CPU features: detected: LSE atomic instructions
Feb 13 15:26:56.905747 kernel: CPU features: detected: Privileged Access Never
Feb 13 15:26:56.905754 kernel: CPU features: detected: RAS Extension Support
Feb 13 15:26:56.905762 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Feb 13 15:26:56.905769 kernel: CPU: All CPU(s) started at EL1
Feb 13 15:26:56.905776 kernel: alternatives: applying system-wide alternatives
Feb 13 15:26:56.905783 kernel: devtmpfs: initialized
Feb 13 15:26:56.905791 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 15:26:56.905803 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 13 15:26:56.905813 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 15:26:56.905820 kernel: SMBIOS 3.0.0 present.
Feb 13 15:26:56.905827 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Feb 13 15:26:56.905835 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 15:26:56.905843 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 15:26:56.905851 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 15:26:56.905859 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 15:26:56.905866 kernel: audit: initializing netlink subsys (disabled)
Feb 13 15:26:56.905873 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
Feb 13 15:26:56.905882 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 15:26:56.905890 kernel: cpuidle: using governor menu
Feb 13 15:26:56.905897 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 15:26:56.905904 kernel: ASID allocator initialised with 32768 entries
Feb 13 15:26:56.905912 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 15:26:56.905919 kernel: Serial: AMBA PL011 UART driver
Feb 13 15:26:56.905926 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Feb 13 15:26:56.905933 kernel: Modules: 0 pages in range for non-PLT usage
Feb 13 15:26:56.905940 kernel: Modules: 508960 pages in range for PLT usage
Feb 13 15:26:56.905949 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 15:26:56.905956 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 15:26:56.905963 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 15:26:56.905971 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 15:26:56.905978 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 15:26:56.905985 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 15:26:56.905992 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 15:26:56.905999 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 15:26:56.906006 kernel: ACPI: Added _OSI(Module Device)
Feb 13 15:26:56.906015 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 15:26:56.906023 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 15:26:56.906030 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 15:26:56.906037 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 15:26:56.906045 kernel: ACPI: Interpreter enabled
Feb 13 15:26:56.906052 kernel: ACPI: Using GIC for interrupt routing
Feb 13 15:26:56.906059 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 15:26:56.906078 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Feb 13 15:26:56.906085 kernel: printk: console [ttyAMA0] enabled
Feb 13 15:26:56.906093 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 15:26:56.906238 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 15:26:56.906311 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 15:26:56.906375 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 15:26:56.906438 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Feb 13 15:26:56.906498 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Feb 13 15:26:56.906508 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Feb 13 15:26:56.906518 kernel: PCI host bridge to bus 0000:00
Feb 13 15:26:56.906586 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Feb 13 15:26:56.906687 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 13 15:26:56.906747 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Feb 13 15:26:56.906809 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 15:26:56.906897 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Feb 13 15:26:56.906972 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 15:26:56.907042 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Feb 13 15:26:56.907111 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Feb 13 15:26:56.907174 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 15:26:56.907239 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 15:26:56.907302 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Feb 13 15:26:56.907364 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Feb 13 15:26:56.907419 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Feb 13 15:26:56.907477 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 13 15:26:56.907534 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Feb 13 15:26:56.907543 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 15:26:56.907551 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 15:26:56.907559 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 15:26:56.907566 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 15:26:56.907573 kernel: iommu: Default domain type: Translated
Feb 13 15:26:56.907581 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 15:26:56.907590 kernel: efivars: Registered efivars operations
Feb 13 15:26:56.907597 kernel: vgaarb: loaded
Feb 13 15:26:56.907605 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 15:26:56.907612 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 15:26:56.907630 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 15:26:56.907667 kernel: pnp: PnP ACPI init
Feb 13 15:26:56.907760 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Feb 13 15:26:56.907772 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 15:26:56.907782 kernel: NET: Registered PF_INET protocol family
Feb 13 15:26:56.907790 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 15:26:56.907805 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 15:26:56.907815 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 15:26:56.907822 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 15:26:56.907829 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 15:26:56.907837 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 15:26:56.907845 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:26:56.907852 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:26:56.907862 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 15:26:56.907869 kernel: PCI: CLS 0 bytes, default 64
Feb 13 15:26:56.907877 kernel: kvm [1]: HYP mode not available
Feb 13 15:26:56.907884 kernel: Initialise system trusted keyrings
Feb 13 15:26:56.907892 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 15:26:56.907899 kernel: Key type asymmetric registered
Feb 13 15:26:56.907906 kernel: Asymmetric key parser 'x509' registered
Feb 13 15:26:56.907914 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 15:26:56.907921 kernel: io scheduler mq-deadline registered
Feb 13 15:26:56.907930 kernel: io scheduler kyber registered
Feb 13 15:26:56.907937 kernel: io scheduler bfq registered
Feb 13 15:26:56.907944 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 13 15:26:56.907952 kernel: ACPI: button: Power Button [PWRB]
Feb 13 15:26:56.907960 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 13 15:26:56.908039 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Feb 13 15:26:56.908051 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 15:26:56.908059 kernel: thunder_xcv, ver 1.0
Feb 13 15:26:56.908069 kernel: thunder_bgx, ver 1.0
Feb 13 15:26:56.908079 kernel: nicpf, ver 1.0
Feb 13 15:26:56.908086 kernel: nicvf, ver 1.0
Feb 13 15:26:56.908171 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 15:26:56.908249 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T15:26:56 UTC (1739460416)
Feb 13 15:26:56.908259 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 15:26:56.908266 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Feb 13 15:26:56.908274 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 15:26:56.908281 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 15:26:56.908291 kernel: NET: Registered PF_INET6 protocol family
Feb 13 15:26:56.908298 kernel: Segment Routing with IPv6
Feb 13 15:26:56.908305 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 15:26:56.908312 kernel: NET: Registered PF_PACKET protocol family
Feb 13 15:26:56.908320 kernel: Key type dns_resolver registered
Feb 13 15:26:56.908327 kernel: registered taskstats version 1
Feb 13 15:26:56.908335 kernel: Loading compiled-in X.509 certificates
Feb 13 15:26:56.908342 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 4531cdb19689f90a81e7969ac7d8e25a95254f51'
Feb 13 15:26:56.908349 kernel: Key type .fscrypt registered
Feb 13 15:26:56.908356 kernel: Key type fscrypt-provisioning registered
Feb 13 15:26:56.908365 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 15:26:56.908372 kernel: ima: Allocated hash algorithm: sha1
Feb 13 15:26:56.908379 kernel: ima: No architecture policies found
Feb 13 15:26:56.908387 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 15:26:56.908394 kernel: clk: Disabling unused clocks
Feb 13 15:26:56.908401 kernel: Freeing unused kernel memory: 39680K
Feb 13 15:26:56.908408 kernel: Run /init as init process
Feb 13 15:26:56.908415 kernel: with arguments:
Feb 13 15:26:56.908424 kernel: /init
Feb 13 15:26:56.908431 kernel: with environment:
Feb 13 15:26:56.908438 kernel: HOME=/
Feb 13 15:26:56.908445 kernel: TERM=linux
Feb 13 15:26:56.908452 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 15:26:56.908461 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 15:26:56.908470 systemd[1]: Detected virtualization kvm.
Feb 13 15:26:56.908478 systemd[1]: Detected architecture arm64.
Feb 13 15:26:56.908487 systemd[1]: Running in initrd.
Feb 13 15:26:56.908494 systemd[1]: No hostname configured, using default hostname.
Feb 13 15:26:56.908502 systemd[1]: Hostname set to .
Feb 13 15:26:56.908510 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 15:26:56.908518 systemd[1]: Queued start job for default target initrd.target.
Feb 13 15:26:56.908525 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:26:56.908533 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:26:56.908541 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 15:26:56.908551 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:26:56.908559 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 15:26:56.908567 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 15:26:56.908577 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 15:26:56.908585 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 15:26:56.908593 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:26:56.908601 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:26:56.908610 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:26:56.908632 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:26:56.908656 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:26:56.908664 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:26:56.908672 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:26:56.908680 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:26:56.908688 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 15:26:56.908696 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 15:26:56.908706 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:26:56.908714 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:26:56.908722 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:26:56.908730 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:26:56.908738 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 15:26:56.908746 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:26:56.908754 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 15:26:56.908762 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 15:26:56.908770 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:26:56.908780 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:26:56.908787 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:26:56.908795 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 15:26:56.908810 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:26:56.908818 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 15:26:56.908827 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 15:26:56.908837 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:26:56.908845 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:26:56.908853 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:26:56.908881 systemd-journald[239]: Collecting audit messages is disabled.
Feb 13 15:26:56.908903 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:26:56.908911 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 15:26:56.908923 systemd-journald[239]: Journal started
Feb 13 15:26:56.908942 systemd-journald[239]: Runtime Journal (/run/log/journal/ea5201c5faa84b7eb94ed4625629fa70) is 5.9M, max 47.3M, 41.4M free.
Feb 13 15:26:56.891879 systemd-modules-load[240]: Inserted module 'overlay'
Feb 13 15:26:56.910877 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:26:56.914342 systemd-modules-load[240]: Inserted module 'br_netfilter'
Feb 13 15:26:56.915125 kernel: Bridge firewalling registered
Feb 13 15:26:56.919191 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:26:56.920243 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:26:56.921313 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:26:56.925025 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:26:56.925995 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:26:56.928806 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 15:26:56.929671 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:26:56.938206 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:26:56.941132 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:26:56.945296 dracut-cmdline[274]: dracut-dracut-053
Feb 13 15:26:56.948091 dracut-cmdline[274]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=07e9b8867aadd0b2e77ba5338d18cdd10706c658e0d745a78e129bcae9a0e4c6
Feb 13 15:26:56.973828 systemd-resolved[283]: Positive Trust Anchors:
Feb 13 15:26:56.973909 systemd-resolved[283]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:26:56.973941 systemd-resolved[283]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:26:56.979095 systemd-resolved[283]: Defaulting to hostname 'linux'.
Feb 13 15:26:56.980338 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:26:56.981703 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:26:57.020654 kernel: SCSI subsystem initialized
Feb 13 15:26:57.025636 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 15:26:57.033649 kernel: iscsi: registered transport (tcp)
Feb 13 15:26:57.046633 kernel: iscsi: registered transport (qla4xxx)
Feb 13 15:26:57.046660 kernel: QLogic iSCSI HBA Driver
Feb 13 15:26:57.096396 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:26:57.107812 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 15:26:57.131234 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 15:26:57.131293 kernel: device-mapper: uevent: version 1.0.3
Feb 13 15:26:57.131317 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 15:26:57.180654 kernel: raid6: neonx8 gen() 15361 MB/s
Feb 13 15:26:57.197642 kernel: raid6: neonx4 gen() 15316 MB/s
Feb 13 15:26:57.214641 kernel: raid6: neonx2 gen() 12566 MB/s
Feb 13 15:26:57.231632 kernel: raid6: neonx1 gen() 10141 MB/s
Feb 13 15:26:57.248633 kernel: raid6: int64x8 gen() 6805 MB/s
Feb 13 15:26:57.265631 kernel: raid6: int64x4 gen() 7234 MB/s
Feb 13 15:26:57.282647 kernel: raid6: int64x2 gen() 5922 MB/s
Feb 13 15:26:57.299637 kernel: raid6: int64x1 gen() 4977 MB/s
Feb 13 15:26:57.299675 kernel: raid6: using algorithm neonx8 gen() 15361 MB/s
Feb 13 15:26:57.316656 kernel: raid6: .... xor() 11592 MB/s, rmw enabled
Feb 13 15:26:57.316688 kernel: raid6: using neon recovery algorithm
Feb 13 15:26:57.321713 kernel: xor: measuring software checksum speed
Feb 13 15:26:57.321740 kernel: 8regs : 19797 MB/sec
Feb 13 15:26:57.322926 kernel: 32regs : 19168 MB/sec
Feb 13 15:26:57.322941 kernel: arm64_neon : 19057 MB/sec
Feb 13 15:26:57.322960 kernel: xor: using function: 8regs (19797 MB/sec)
Feb 13 15:26:57.374648 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 15:26:57.385887 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:26:57.396896 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:26:57.409205 systemd-udevd[463]: Using default interface naming scheme 'v255'.
Feb 13 15:26:57.412425 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:26:57.415396 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 15:26:57.430192 dracut-pre-trigger[471]: rd.md=0: removing MD RAID activation
Feb 13 15:26:57.460683 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:26:57.469818 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:26:57.513765 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:26:57.524448 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 15:26:57.536268 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:26:57.538074 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:26:57.541701 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:26:57.543325 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:26:57.552924 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 15:26:57.559653 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Feb 13 15:26:57.571913 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 13 15:26:57.572020 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 15:26:57.572039 kernel: GPT:9289727 != 19775487
Feb 13 15:26:57.572049 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 15:26:57.572059 kernel: GPT:9289727 != 19775487
Feb 13 15:26:57.572069 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 15:26:57.572079 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 15:26:57.566209 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:26:57.571061 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:26:57.571169 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:26:57.572963 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:26:57.573770 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:26:57.573948 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:26:57.575769 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:26:57.586939 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:26:57.593971 kernel: BTRFS: device fsid 27ad543d-6fdb-4ace-b8f1-8f50b124bd06 devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (519)
Feb 13 15:26:57.596515 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (517)
Feb 13 15:26:57.598831 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Feb 13 15:26:57.600072 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:26:57.605768 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Feb 13 15:26:57.615328 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Feb 13 15:26:57.616339 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Feb 13 15:26:57.621605 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 15:26:57.632820 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 15:26:57.635316 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:26:57.641212 disk-uuid[552]: Primary Header is updated.
Feb 13 15:26:57.641212 disk-uuid[552]: Secondary Entries is updated.
Feb 13 15:26:57.641212 disk-uuid[552]: Secondary Header is updated.
Feb 13 15:26:57.643999 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 15:26:57.660855 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:26:58.659647 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 15:26:58.659902 disk-uuid[554]: The operation has completed successfully.
Feb 13 15:26:58.692982 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 15:26:58.693082 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 15:26:58.714820 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 15:26:58.717844 sh[574]: Success
Feb 13 15:26:58.733647 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 15:26:58.785110 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 15:26:58.786779 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 15:26:58.787581 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 15:26:58.805000 kernel: BTRFS info (device dm-0): first mount of filesystem 27ad543d-6fdb-4ace-b8f1-8f50b124bd06
Feb 13 15:26:58.805047 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:26:58.805058 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 15:26:58.805862 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 15:26:58.806941 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 15:26:58.811053 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 15:26:58.812414 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 15:26:58.831845 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 15:26:58.833275 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 15:26:58.850114 kernel: BTRFS info (device vda6): first mount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643
Feb 13 15:26:58.850175 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:26:58.850187 kernel: BTRFS info (device vda6): using free space tree
Feb 13 15:26:58.857673 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 15:26:58.866807 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 15:26:58.868628 kernel: BTRFS info (device vda6): last unmount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643
Feb 13 15:26:58.878218 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 15:26:58.883829 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 15:26:58.946901 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:26:58.959892 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:26:58.983097 systemd-networkd[757]: lo: Link UP
Feb 13 15:26:58.983109 systemd-networkd[757]: lo: Gained carrier
Feb 13 15:26:58.983902 systemd-networkd[757]: Enumeration completed
Feb 13 15:26:58.984448 systemd-networkd[757]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:26:58.984451 systemd-networkd[757]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:26:58.986688 systemd-networkd[757]: eth0: Link UP
Feb 13 15:26:58.986691 systemd-networkd[757]: eth0: Gained carrier
Feb 13 15:26:58.986700 systemd-networkd[757]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:26:58.987100 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 15:26:58.988067 systemd[1]: Reached target network.target - Network.
Feb 13 15:26:58.999137 ignition[681]: Ignition 2.20.0
Feb 13 15:26:58.999149 ignition[681]: Stage: fetch-offline
Feb 13 15:26:58.999184 ignition[681]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:26:58.999192 ignition[681]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:26:58.999341 ignition[681]: parsed url from cmdline: ""
Feb 13 15:26:58.999344 ignition[681]: no config URL provided
Feb 13 15:26:58.999349 ignition[681]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 15:26:58.999356 ignition[681]: no config at "/usr/lib/ignition/user.ign"
Feb 13 15:26:58.999384 ignition[681]: op(1): [started] loading QEMU firmware config module
Feb 13 15:26:59.004708 systemd-networkd[757]: eth0: DHCPv4 address 10.0.0.91/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 15:26:58.999390 ignition[681]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 13 15:26:59.008159 ignition[681]: op(1): [finished] loading QEMU firmware config module
Feb 13 15:26:59.048248 ignition[681]: parsing config with SHA512: 170018e67c25e13173b4cfd0643d329411a508efc1c8e667a24c30f7f8ef6c99352dc2e09cf886a999ac224e93876bd5deae16c8d860838f128202b02e59399f
Feb 13 15:26:59.054416 unknown[681]: fetched base config from "system"
Feb 13 15:26:59.054433 unknown[681]: fetched user config from "qemu"
Feb 13 15:26:59.055033 ignition[681]: fetch-offline: fetch-offline passed
Feb 13 15:26:59.055148 ignition[681]: Ignition finished successfully
Feb 13 15:26:59.056798 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:26:59.058101 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 13 15:26:59.070862 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 15:26:59.082103 ignition[769]: Ignition 2.20.0
Feb 13 15:26:59.082121 ignition[769]: Stage: kargs
Feb 13 15:26:59.082286 ignition[769]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:26:59.082296 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:26:59.083212 ignition[769]: kargs: kargs passed
Feb 13 15:26:59.085511 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 15:26:59.083259 ignition[769]: Ignition finished successfully
Feb 13 15:26:59.087917 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 15:26:59.103246 ignition[777]: Ignition 2.20.0
Feb 13 15:26:59.103257 ignition[777]: Stage: disks
Feb 13 15:26:59.103429 ignition[777]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:26:59.103438 ignition[777]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:26:59.105826 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 15:26:59.104323 ignition[777]: disks: disks passed
Feb 13 15:26:59.106858 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 15:26:59.104371 ignition[777]: Ignition finished successfully
Feb 13 15:26:59.108095 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 15:26:59.109396 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:26:59.110916 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:26:59.112189 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:26:59.120841 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 15:26:59.134589 systemd-fsck[789]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 15:26:59.138497 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 15:26:59.154803 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 15:26:59.204634 kernel: EXT4-fs (vda9): mounted filesystem b8d8a7c2-9667-48db-9266-035fd118dfdf r/w with ordered data mode. Quota mode: none.
Feb 13 15:26:59.205153 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 15:26:59.206329 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:26:59.218725 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:26:59.221091 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 15:26:59.222096 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 15:26:59.222140 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 15:26:59.222163 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:26:59.227957 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 15:26:59.229824 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 15:26:59.234385 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (797)
Feb 13 15:26:59.234424 kernel: BTRFS info (device vda6): first mount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643
Feb 13 15:26:59.234435 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:26:59.234445 kernel: BTRFS info (device vda6): using free space tree
Feb 13 15:26:59.239643 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 15:26:59.240751 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:26:59.292464 initrd-setup-root[822]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 15:26:59.295846 initrd-setup-root[829]: cut: /sysroot/etc/group: No such file or directory
Feb 13 15:26:59.300156 initrd-setup-root[836]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 15:26:59.303475 initrd-setup-root[843]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 15:26:59.404564 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 15:26:59.420805 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 15:26:59.422309 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 15:26:59.427643 kernel: BTRFS info (device vda6): last unmount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643
Feb 13 15:26:59.451946 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 15:26:59.462355 ignition[911]: INFO : Ignition 2.20.0
Feb 13 15:26:59.462355 ignition[911]: INFO : Stage: mount
Feb 13 15:26:59.463678 ignition[911]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:26:59.463678 ignition[911]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:26:59.463678 ignition[911]: INFO : mount: mount passed
Feb 13 15:26:59.463678 ignition[911]: INFO : Ignition finished successfully
Feb 13 15:26:59.465058 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 15:26:59.475762 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 15:26:59.803877 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 15:26:59.822214 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:26:59.834639 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (925)
Feb 13 15:26:59.835641 kernel: BTRFS info (device vda6): first mount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643
Feb 13 15:26:59.835656 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:26:59.836839 kernel: BTRFS info (device vda6): using free space tree
Feb 13 15:26:59.839646 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 15:26:59.840348 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:26:59.860935 ignition[942]: INFO : Ignition 2.20.0
Feb 13 15:26:59.860935 ignition[942]: INFO : Stage: files
Feb 13 15:26:59.862224 ignition[942]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:26:59.862224 ignition[942]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:26:59.862224 ignition[942]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 15:26:59.864830 ignition[942]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 15:26:59.864830 ignition[942]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 15:26:59.872196 ignition[942]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 15:26:59.873417 ignition[942]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 15:26:59.873417 ignition[942]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 15:26:59.872725 unknown[942]: wrote ssh authorized keys file for user: core
Feb 13 15:26:59.876870 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 15:26:59.876870 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 13 15:26:59.934527 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 15:27:00.105106 systemd-networkd[757]: eth0: Gained IPv6LL
Feb 13 15:27:00.463906 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 15:27:00.463906 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 15:27:00.467162 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 15:27:00.467162 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 15:27:00.467162 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 15:27:00.467162 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 15:27:00.467162 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 15:27:00.467162 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:27:00.467162 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:27:00.467162 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:27:00.467162 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:27:00.467162 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 15:27:00.467162 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 15:27:00.467162 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 15:27:00.467162 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Feb 13 15:27:00.810843 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Feb 13 15:27:01.028409 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 15:27:01.028409 ignition[942]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Feb 13 15:27:01.031686 ignition[942]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 15:27:01.031686 ignition[942]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 15:27:01.031686 ignition[942]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Feb 13 15:27:01.031686 ignition[942]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Feb 13 15:27:01.031686 ignition[942]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 15:27:01.031686 ignition[942]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 15:27:01.031686 ignition[942]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Feb 13 15:27:01.031686 ignition[942]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Feb 13 15:27:01.075170 ignition[942]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 15:27:01.081061 ignition[942]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 15:27:01.083161 ignition[942]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 13 15:27:01.083161 ignition[942]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 15:27:01.083161 ignition[942]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 15:27:01.083161 ignition[942]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:27:01.083161 ignition[942]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:27:01.083161 ignition[942]: INFO : files: files passed
Feb 13 15:27:01.083161 ignition[942]: INFO : Ignition finished successfully
Feb 13 15:27:01.084127 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 15:27:01.097943 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 15:27:01.100160 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 15:27:01.103173 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 15:27:01.103263 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 15:27:01.109232 initrd-setup-root-after-ignition[970]: grep: /sysroot/oem/oem-release: No such file or directory
Feb 13 15:27:01.111574 initrd-setup-root-after-ignition[972]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:27:01.111574 initrd-setup-root-after-ignition[972]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:27:01.114686 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:27:01.114178 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:27:01.115751 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 15:27:01.125838 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 15:27:01.146837 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 15:27:01.146945 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 15:27:01.148567 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 15:27:01.149998 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 15:27:01.151466 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 15:27:01.152295 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 15:27:01.168150 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:27:01.170460 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 15:27:01.184989 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:27:01.186187 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:27:01.187709 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 15:27:01.189224 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 15:27:01.189412 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:27:01.191238 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 15:27:01.192729 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 15:27:01.194007 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 15:27:01.195322 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:27:01.196739 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 15:27:01.198225 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 15:27:01.199573 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:27:01.201186 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 15:27:01.202576 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 15:27:01.203980 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 15:27:01.205097 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 15:27:01.205226 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:27:01.207103 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:27:01.208484 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:27:01.210018 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 15:27:01.210181 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:27:01.211491 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 15:27:01.211602 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:27:01.213594 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 15:27:01.213716 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:27:01.215180 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 15:27:01.216384 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 15:27:01.219705 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:27:01.221554 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 15:27:01.222307 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 15:27:01.224222 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 15:27:01.224313 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:27:01.225648 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 15:27:01.225730 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:27:01.227104 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 15:27:01.227214 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:27:01.228887 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 15:27:01.228977 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 15:27:01.241852 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 15:27:01.242534 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 15:27:01.242696 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:27:01.245038 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 15:27:01.246199 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 15:27:01.246309 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:27:01.247592 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 15:27:01.247696 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:27:01.252638 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 15:27:01.253298 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 15:27:01.255170 ignition[996]: INFO : Ignition 2.20.0
Feb 13 15:27:01.255170 ignition[996]: INFO : Stage: umount
Feb 13 15:27:01.255170 ignition[996]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:27:01.255170 ignition[996]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:27:01.255170 ignition[996]: INFO : umount: umount passed
Feb 13 15:27:01.255170 ignition[996]: INFO : Ignition finished successfully
Feb 13 15:27:01.255899 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 15:27:01.255991 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 15:27:01.257871 systemd[1]: Stopped target network.target - Network.
Feb 13 15:27:01.259805 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 15:27:01.259941 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 15:27:01.261269 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 15:27:01.261405 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 15:27:01.262830 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 15:27:01.262874 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 15:27:01.264538 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 15:27:01.264583 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 15:27:01.266422 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 15:27:01.268108 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 15:27:01.270915 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 15:27:01.277675 systemd-networkd[757]: eth0: DHCPv6 lease lost
Feb 13 15:27:01.280366 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 15:27:01.280484 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 15:27:01.282098 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 15:27:01.282126 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:27:01.294740 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 15:27:01.295408 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 15:27:01.295468 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:27:01.297054 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:27:01.299598 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 15:27:01.299709 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 15:27:01.303904 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 15:27:01.303957 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:27:01.305808 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 15:27:01.305854 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:27:01.307408 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 15:27:01.307559 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:27:01.310219 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 15:27:01.311704 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 15:27:01.313766 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 15:27:01.314023 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:27:01.317206 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 15:27:01.317256 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 15:27:01.319196 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 15:27:01.319230 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:27:01.320657 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 15:27:01.320705 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 15:27:01.322842 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 15:27:01.322884 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 15:27:01.324836 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 15:27:01.324874 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:27:01.336820 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 15:27:01.337812 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 15:27:01.337879 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:27:01.339845 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:27:01.339893 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:27:01.341834 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 15:27:01.341963 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 15:27:01.344190 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 15:27:01.344288 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 15:27:01.346403 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 15:27:01.346504 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. 
Feb 13 15:27:01.348043 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 15:27:01.349930 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 15:27:01.359901 systemd[1]: Switching root. Feb 13 15:27:01.387572 systemd-journald[239]: Journal stopped Feb 13 15:27:02.205847 systemd-journald[239]: Received SIGTERM from PID 1 (systemd). Feb 13 15:27:02.205957 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 15:27:02.205972 kernel: SELinux: policy capability open_perms=1 Feb 13 15:27:02.205984 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 15:27:02.205994 kernel: SELinux: policy capability always_check_network=0 Feb 13 15:27:02.206031 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 15:27:02.206049 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 15:27:02.206059 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 15:27:02.206068 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 15:27:02.206078 kernel: audit: type=1403 audit(1739460421.530:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 15:27:02.206089 systemd[1]: Successfully loaded SELinux policy in 34.643ms. Feb 13 15:27:02.206107 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.275ms. Feb 13 15:27:02.206120 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 15:27:02.206131 systemd[1]: Detected virtualization kvm. Feb 13 15:27:02.206142 systemd[1]: Detected architecture arm64. Feb 13 15:27:02.206154 systemd[1]: Detected first boot. Feb 13 15:27:02.206165 systemd[1]: Initializing machine ID from VM UUID. 
Feb 13 15:27:02.206176 zram_generator::config[1040]: No configuration found. Feb 13 15:27:02.206188 systemd[1]: Populated /etc with preset unit settings. Feb 13 15:27:02.206201 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 15:27:02.206212 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 15:27:02.206224 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 15:27:02.206236 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 15:27:02.206249 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 15:27:02.206259 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 15:27:02.206270 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 15:27:02.206281 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 15:27:02.206291 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 15:27:02.206303 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 15:27:02.206313 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 15:27:02.206324 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:27:02.206335 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:27:02.206347 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 15:27:02.206358 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 15:27:02.206368 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Feb 13 15:27:02.206379 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 15:27:02.206390 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Feb 13 15:27:02.206401 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:27:02.206456 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 15:27:02.206470 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 15:27:02.206486 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 15:27:02.206498 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 15:27:02.206509 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:27:02.206519 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 15:27:02.206530 systemd[1]: Reached target slices.target - Slice Units. Feb 13 15:27:02.206540 systemd[1]: Reached target swap.target - Swaps. Feb 13 15:27:02.206550 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 15:27:02.206561 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 15:27:02.206573 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:27:02.206584 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 15:27:02.206594 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:27:02.206605 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 15:27:02.206853 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 15:27:02.206878 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 15:27:02.206889 systemd[1]: Mounting media.mount - External Media Directory... 
Feb 13 15:27:02.206900 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 15:27:02.206910 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 15:27:02.206926 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 15:27:02.206943 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 15:27:02.206962 systemd[1]: Reached target machines.target - Containers. Feb 13 15:27:02.206973 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 15:27:02.206984 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:27:02.206994 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 15:27:02.207016 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 15:27:02.207028 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:27:02.207043 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 15:27:02.207056 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:27:02.207066 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 15:27:02.207076 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:27:02.207088 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 15:27:02.207098 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 15:27:02.207109 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 15:27:02.207119 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. 
Feb 13 15:27:02.207130 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 15:27:02.207141 kernel: fuse: init (API version 7.39) Feb 13 15:27:02.207152 kernel: loop: module loaded Feb 13 15:27:02.207161 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 15:27:02.207173 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 15:27:02.207184 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 15:27:02.207194 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 15:27:02.207204 kernel: ACPI: bus type drm_connector registered Feb 13 15:27:02.207213 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 15:27:02.207224 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 15:27:02.207238 systemd[1]: Stopped verity-setup.service. Feb 13 15:27:02.207248 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 15:27:02.207259 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 15:27:02.207270 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 15:27:02.207280 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 15:27:02.207293 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 15:27:02.207303 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 15:27:02.207313 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:27:02.207352 systemd-journald[1104]: Collecting audit messages is disabled. Feb 13 15:27:02.207374 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 15:27:02.207385 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. 
Feb 13 15:27:02.207396 systemd-journald[1104]: Journal started Feb 13 15:27:02.207419 systemd-journald[1104]: Runtime Journal (/run/log/journal/ea5201c5faa84b7eb94ed4625629fa70) is 5.9M, max 47.3M, 41.4M free. Feb 13 15:27:01.979137 systemd[1]: Queued start job for default target multi-user.target. Feb 13 15:27:02.000810 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Feb 13 15:27:02.001193 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 15:27:02.210667 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 15:27:02.211334 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:27:02.211484 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:27:02.213825 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 15:27:02.213965 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:27:02.216696 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 15:27:02.217847 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:27:02.217983 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:27:02.219199 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 15:27:02.219333 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 15:27:02.220524 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:27:02.220702 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:27:02.222183 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 15:27:02.223491 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 15:27:02.225872 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
Feb 13 15:27:02.238374 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 15:27:02.243742 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 15:27:02.248790 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 15:27:02.250774 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 15:27:02.250809 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 15:27:02.252739 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 15:27:02.254786 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 15:27:02.256688 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 15:27:02.257611 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:27:02.259210 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 15:27:02.262063 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 15:27:02.263372 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:27:02.266822 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 15:27:02.267919 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:27:02.270923 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:27:02.273960 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
Feb 13 15:27:02.276935 systemd-journald[1104]: Time spent on flushing to /var/log/journal/ea5201c5faa84b7eb94ed4625629fa70 is 17.112ms for 854 entries. Feb 13 15:27:02.276935 systemd-journald[1104]: System Journal (/var/log/journal/ea5201c5faa84b7eb94ed4625629fa70) is 8.0M, max 195.6M, 187.6M free. Feb 13 15:27:02.301108 systemd-journald[1104]: Received client request to flush runtime journal. Feb 13 15:27:02.301157 kernel: loop0: detected capacity change from 0 to 116808 Feb 13 15:27:02.281234 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 15:27:02.285338 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:27:02.286719 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 15:27:02.287802 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 15:27:02.290655 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 15:27:02.292126 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 15:27:02.298607 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 15:27:02.309642 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 15:27:02.316874 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 15:27:02.321861 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 15:27:02.324723 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 15:27:02.326323 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:27:02.338815 kernel: loop1: detected capacity change from 0 to 194096 Feb 13 15:27:02.339400 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
Feb 13 15:27:02.340162 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 15:27:02.348046 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 15:27:02.357879 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 15:27:02.361539 udevadm[1164]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 13 15:27:02.381053 kernel: loop2: detected capacity change from 0 to 113536 Feb 13 15:27:02.385861 systemd-tmpfiles[1171]: ACLs are not supported, ignoring. Feb 13 15:27:02.385879 systemd-tmpfiles[1171]: ACLs are not supported, ignoring. Feb 13 15:27:02.390236 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:27:02.432646 kernel: loop3: detected capacity change from 0 to 116808 Feb 13 15:27:02.436660 kernel: loop4: detected capacity change from 0 to 194096 Feb 13 15:27:02.441883 kernel: loop5: detected capacity change from 0 to 113536 Feb 13 15:27:02.444820 (sd-merge)[1176]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Feb 13 15:27:02.445202 (sd-merge)[1176]: Merged extensions into '/usr'. Feb 13 15:27:02.448970 systemd[1]: Reloading requested from client PID 1151 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 15:27:02.448990 systemd[1]: Reloading... Feb 13 15:27:02.515661 zram_generator::config[1209]: No configuration found. Feb 13 15:27:02.547258 ldconfig[1146]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 15:27:02.606946 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:27:02.643033 systemd[1]: Reloading finished in 193 ms. 
Feb 13 15:27:02.682111 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 15:27:02.684918 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 15:27:02.698923 systemd[1]: Starting ensure-sysext.service... Feb 13 15:27:02.701145 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 15:27:02.717688 systemd[1]: Reloading requested from client PID 1239 ('systemctl') (unit ensure-sysext.service)... Feb 13 15:27:02.717704 systemd[1]: Reloading... Feb 13 15:27:02.724447 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 15:27:02.724725 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 15:27:02.725352 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 15:27:02.725563 systemd-tmpfiles[1241]: ACLs are not supported, ignoring. Feb 13 15:27:02.725611 systemd-tmpfiles[1241]: ACLs are not supported, ignoring. Feb 13 15:27:02.733878 systemd-tmpfiles[1241]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 15:27:02.733891 systemd-tmpfiles[1241]: Skipping /boot Feb 13 15:27:02.741138 systemd-tmpfiles[1241]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 15:27:02.741155 systemd-tmpfiles[1241]: Skipping /boot Feb 13 15:27:02.773870 zram_generator::config[1268]: No configuration found. Feb 13 15:27:02.858685 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:27:02.894123 systemd[1]: Reloading finished in 176 ms. Feb 13 15:27:02.912027 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
Feb 13 15:27:02.925093 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:27:02.934523 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:27:02.937107 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 15:27:02.939373 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 15:27:02.943111 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 15:27:02.948007 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:27:02.955666 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 15:27:02.960728 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 15:27:02.964917 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:27:02.971938 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:27:02.973895 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:27:02.977935 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:27:02.978980 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:27:02.981350 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 15:27:02.983358 systemd-udevd[1309]: Using default interface naming scheme 'v255'. Feb 13 15:27:02.984062 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 15:27:02.987575 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:27:02.987757 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Feb 13 15:27:02.993926 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:27:03.001089 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:27:03.002261 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:27:03.002982 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:27:03.003153 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:27:03.010643 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 15:27:03.012308 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 15:27:03.015385 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 15:27:03.017116 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:27:03.017238 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:27:03.019446 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:27:03.019577 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:27:03.021694 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:27:03.032512 systemd[1]: Finished ensure-sysext.service. Feb 13 15:27:03.037269 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:27:03.045199 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 15:27:03.047808 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:27:03.048863 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Feb 13 15:27:03.053982 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 15:27:03.056860 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:27:03.063526 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 15:27:03.064974 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 15:27:03.065213 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 15:27:03.068415 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Feb 13 15:27:03.074976 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 15:27:03.075119 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:27:03.093722 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:27:03.093926 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:27:03.098043 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:27:03.107493 augenrules[1380]: No rules Feb 13 15:27:03.110214 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:27:03.110434 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:27:03.121654 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1352) Feb 13 15:27:03.152141 systemd-networkd[1363]: lo: Link UP Feb 13 15:27:03.152149 systemd-networkd[1363]: lo: Gained carrier Feb 13 15:27:03.156175 systemd-networkd[1363]: Enumeration completed Feb 13 15:27:03.156301 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Feb 13 15:27:03.156877 systemd-networkd[1363]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:27:03.156881 systemd-networkd[1363]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:27:03.160998 systemd-networkd[1363]: eth0: Link UP Feb 13 15:27:03.161009 systemd-networkd[1363]: eth0: Gained carrier Feb 13 15:27:03.161022 systemd-networkd[1363]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:27:03.162022 systemd-resolved[1307]: Positive Trust Anchors: Feb 13 15:27:03.162116 systemd-resolved[1307]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 15:27:03.162147 systemd-resolved[1307]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 15:27:03.170991 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 15:27:03.171164 systemd-resolved[1307]: Defaulting to hostname 'linux'. Feb 13 15:27:03.172586 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 15:27:03.173906 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 15:27:03.177643 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 15:27:03.178692 systemd[1]: Reached target network.target - Network. 
Feb 13 15:27:03.178864 systemd-networkd[1363]: eth0: DHCPv4 address 10.0.0.91/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 15:27:03.179439 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:27:03.179989 systemd-timesyncd[1364]: Network configuration changed, trying to establish connection. Feb 13 15:27:03.180559 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 15:27:03.181261 systemd-timesyncd[1364]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 13 15:27:03.181305 systemd-timesyncd[1364]: Initial clock synchronization to Thu 2025-02-13 15:27:03.379969 UTC. Feb 13 15:27:03.182842 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 15:27:03.206142 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 15:27:03.219949 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:27:03.232144 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 15:27:03.244846 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 15:27:03.259794 lvm[1399]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:27:03.265373 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:27:03.295239 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 15:27:03.296668 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:27:03.297484 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 15:27:03.299794 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Feb 13 15:27:03.300774 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 15:27:03.301843 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 15:27:03.302725 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 15:27:03.303663 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 15:27:03.304655 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 15:27:03.304695 systemd[1]: Reached target paths.target - Path Units. Feb 13 15:27:03.305345 systemd[1]: Reached target timers.target - Timer Units. Feb 13 15:27:03.306890 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 15:27:03.309286 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 15:27:03.319808 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 15:27:03.321926 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 15:27:03.323311 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 15:27:03.324330 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 15:27:03.325144 systemd[1]: Reached target basic.target - Basic System. Feb 13 15:27:03.325896 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:27:03.325929 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:27:03.327014 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 15:27:03.328869 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 15:27:03.331829 lvm[1406]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Feb 13 15:27:03.331752 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 15:27:03.333809 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 15:27:03.340692 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 15:27:03.344741 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 15:27:03.347070 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 15:27:03.350538 jq[1409]: false Feb 13 15:27:03.354293 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 15:27:03.357003 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 15:27:03.359263 extend-filesystems[1410]: Found loop3 Feb 13 15:27:03.360075 extend-filesystems[1410]: Found loop4 Feb 13 15:27:03.360075 extend-filesystems[1410]: Found loop5 Feb 13 15:27:03.360075 extend-filesystems[1410]: Found vda Feb 13 15:27:03.360075 extend-filesystems[1410]: Found vda1 Feb 13 15:27:03.360075 extend-filesystems[1410]: Found vda2 Feb 13 15:27:03.360075 extend-filesystems[1410]: Found vda3 Feb 13 15:27:03.360075 extend-filesystems[1410]: Found usr Feb 13 15:27:03.360075 extend-filesystems[1410]: Found vda4 Feb 13 15:27:03.360075 extend-filesystems[1410]: Found vda6 Feb 13 15:27:03.360075 extend-filesystems[1410]: Found vda7 Feb 13 15:27:03.360075 extend-filesystems[1410]: Found vda9 Feb 13 15:27:03.360075 extend-filesystems[1410]: Checking size of /dev/vda9 Feb 13 15:27:03.364944 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 15:27:03.369034 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Feb 13 15:27:03.369574 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 15:27:03.370386 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 15:27:03.372883 extend-filesystems[1410]: Resized partition /dev/vda9 Feb 13 15:27:03.374158 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 15:27:03.377176 extend-filesystems[1429]: resize2fs 1.47.1 (20-May-2024) Feb 13 15:27:03.377679 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 15:27:03.381908 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 13 15:27:03.385255 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 15:27:03.388088 dbus-daemon[1408]: [system] SELinux support is enabled Feb 13 15:27:03.385546 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 15:27:03.386035 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 15:27:03.386334 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 15:27:03.389044 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 15:27:03.393502 jq[1430]: true Feb 13 15:27:03.398492 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 15:27:03.398766 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Feb 13 15:27:03.404241 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1349) Feb 13 15:27:03.404312 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 13 15:27:03.416850 jq[1435]: true Feb 13 15:27:03.429450 extend-filesystems[1429]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 15:27:03.429450 extend-filesystems[1429]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 15:27:03.429450 extend-filesystems[1429]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 13 15:27:03.429290 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 15:27:03.437444 extend-filesystems[1410]: Resized filesystem in /dev/vda9 Feb 13 15:27:03.432489 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 15:27:03.445845 update_engine[1426]: I20250213 15:27:03.443588 1426 main.cc:92] Flatcar Update Engine starting Feb 13 15:27:03.455046 tar[1433]: linux-arm64/helm Feb 13 15:27:03.450750 systemd[1]: Started update-engine.service - Update Engine. Feb 13 15:27:03.456733 update_engine[1426]: I20250213 15:27:03.448823 1426 update_check_scheduler.cc:74] Next update check in 7m54s Feb 13 15:27:03.451772 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 15:27:03.451797 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 15:27:03.451986 (ntainerd)[1457]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 15:27:03.452890 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Feb 13 15:27:03.452909 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 15:27:03.455285 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 15:27:03.490188 bash[1462]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:27:03.496052 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 15:27:03.501934 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Feb 13 15:27:03.502258 systemd-logind[1422]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 15:27:03.503005 systemd-logind[1422]: New seat seat0. Feb 13 15:27:03.504252 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 15:27:03.561292 locksmithd[1461]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 15:27:03.686326 containerd[1457]: time="2025-02-13T15:27:03.686192800Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 15:27:03.710762 containerd[1457]: time="2025-02-13T15:27:03.710708480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:27:03.714293 containerd[1457]: time="2025-02-13T15:27:03.714080320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:27:03.714293 containerd[1457]: time="2025-02-13T15:27:03.714120440Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 15:27:03.714293 containerd[1457]: time="2025-02-13T15:27:03.714136680Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Feb 13 15:27:03.714863 containerd[1457]: time="2025-02-13T15:27:03.714565960Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 15:27:03.714863 containerd[1457]: time="2025-02-13T15:27:03.714597000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 15:27:03.714863 containerd[1457]: time="2025-02-13T15:27:03.714744120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:27:03.714863 containerd[1457]: time="2025-02-13T15:27:03.714771840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:27:03.715196 containerd[1457]: time="2025-02-13T15:27:03.715173000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:27:03.715320 containerd[1457]: time="2025-02-13T15:27:03.715303640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 15:27:03.715595 containerd[1457]: time="2025-02-13T15:27:03.715365000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:27:03.715595 containerd[1457]: time="2025-02-13T15:27:03.715380120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 15:27:03.716380 containerd[1457]: time="2025-02-13T15:27:03.715797880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Feb 13 15:27:03.716380 containerd[1457]: time="2025-02-13T15:27:03.716032160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:27:03.716380 containerd[1457]: time="2025-02-13T15:27:03.716138000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:27:03.716380 containerd[1457]: time="2025-02-13T15:27:03.716151360Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 15:27:03.716380 containerd[1457]: time="2025-02-13T15:27:03.716269720Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 15:27:03.716380 containerd[1457]: time="2025-02-13T15:27:03.716310400Z" level=info msg="metadata content store policy set" policy=shared Feb 13 15:27:03.720462 containerd[1457]: time="2025-02-13T15:27:03.720434040Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 15:27:03.720587 containerd[1457]: time="2025-02-13T15:27:03.720571480Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 15:27:03.720707 containerd[1457]: time="2025-02-13T15:27:03.720693280Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 15:27:03.720780 containerd[1457]: time="2025-02-13T15:27:03.720764800Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 15:27:03.720839 containerd[1457]: time="2025-02-13T15:27:03.720825920Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Feb 13 15:27:03.721023 containerd[1457]: time="2025-02-13T15:27:03.721003040Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 15:27:03.721362 containerd[1457]: time="2025-02-13T15:27:03.721343480Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 15:27:03.721549 containerd[1457]: time="2025-02-13T15:27:03.721529040Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 15:27:03.721653 containerd[1457]: time="2025-02-13T15:27:03.721611760Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 15:27:03.721716 containerd[1457]: time="2025-02-13T15:27:03.721702200Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 15:27:03.721830 containerd[1457]: time="2025-02-13T15:27:03.721779920Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 15:27:03.721886 containerd[1457]: time="2025-02-13T15:27:03.721874280Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 15:27:03.721936 containerd[1457]: time="2025-02-13T15:27:03.721925600Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 15:27:03.721987 containerd[1457]: time="2025-02-13T15:27:03.721976120Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 15:27:03.722058 containerd[1457]: time="2025-02-13T15:27:03.722044040Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Feb 13 15:27:03.722108 containerd[1457]: time="2025-02-13T15:27:03.722097360Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 15:27:03.722158 containerd[1457]: time="2025-02-13T15:27:03.722146960Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 15:27:03.722205 containerd[1457]: time="2025-02-13T15:27:03.722194760Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 15:27:03.722265 containerd[1457]: time="2025-02-13T15:27:03.722252840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 15:27:03.722332 containerd[1457]: time="2025-02-13T15:27:03.722320760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 15:27:03.722395 containerd[1457]: time="2025-02-13T15:27:03.722382120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 15:27:03.722447 containerd[1457]: time="2025-02-13T15:27:03.722435720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 15:27:03.723603 containerd[1457]: time="2025-02-13T15:27:03.722485160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 15:27:03.723603 containerd[1457]: time="2025-02-13T15:27:03.722503720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 15:27:03.723603 containerd[1457]: time="2025-02-13T15:27:03.722517320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 15:27:03.723603 containerd[1457]: time="2025-02-13T15:27:03.722530880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Feb 13 15:27:03.723603 containerd[1457]: time="2025-02-13T15:27:03.722544400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 15:27:03.723603 containerd[1457]: time="2025-02-13T15:27:03.722560120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 15:27:03.723603 containerd[1457]: time="2025-02-13T15:27:03.722572720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 15:27:03.723603 containerd[1457]: time="2025-02-13T15:27:03.722585640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 15:27:03.723603 containerd[1457]: time="2025-02-13T15:27:03.722604000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 15:27:03.723603 containerd[1457]: time="2025-02-13T15:27:03.722641840Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 15:27:03.723603 containerd[1457]: time="2025-02-13T15:27:03.722667200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 15:27:03.723603 containerd[1457]: time="2025-02-13T15:27:03.722699240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 15:27:03.723603 containerd[1457]: time="2025-02-13T15:27:03.722711240Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 15:27:03.723939 containerd[1457]: time="2025-02-13T15:27:03.723920600Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 15:27:03.724083 containerd[1457]: time="2025-02-13T15:27:03.724062720Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 15:27:03.724140 containerd[1457]: time="2025-02-13T15:27:03.724126600Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 15:27:03.724193 containerd[1457]: time="2025-02-13T15:27:03.724179640Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 15:27:03.724236 containerd[1457]: time="2025-02-13T15:27:03.724225360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 15:27:03.724302 containerd[1457]: time="2025-02-13T15:27:03.724289840Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 15:27:03.724379 containerd[1457]: time="2025-02-13T15:27:03.724366120Z" level=info msg="NRI interface is disabled by configuration." Feb 13 15:27:03.724430 containerd[1457]: time="2025-02-13T15:27:03.724418200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 15:27:03.724861 sshd_keygen[1431]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 15:27:03.725436 containerd[1457]: time="2025-02-13T15:27:03.725373920Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 
DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 15:27:03.725596 containerd[1457]: time="2025-02-13T15:27:03.725579560Z" level=info msg="Connect containerd service" Feb 13 15:27:03.725706 containerd[1457]: time="2025-02-13T15:27:03.725690120Z" level=info msg="using legacy CRI server" Feb 13 15:27:03.725790 containerd[1457]: time="2025-02-13T15:27:03.725773560Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 15:27:03.726352 containerd[1457]: time="2025-02-13T15:27:03.726333920Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 15:27:03.727337 containerd[1457]: time="2025-02-13T15:27:03.727308400Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:27:03.728220 containerd[1457]: time="2025-02-13T15:27:03.728061680Z" level=info msg="Start subscribing containerd event" Feb 13 15:27:03.728220 containerd[1457]: time="2025-02-13T15:27:03.728135240Z" level=info msg="Start recovering state" Feb 13 15:27:03.728220 containerd[1457]: time="2025-02-13T15:27:03.728213800Z" level=info msg="Start 
event monitor" Feb 13 15:27:03.728295 containerd[1457]: time="2025-02-13T15:27:03.728227240Z" level=info msg="Start snapshots syncer" Feb 13 15:27:03.728295 containerd[1457]: time="2025-02-13T15:27:03.728238160Z" level=info msg="Start cni network conf syncer for default" Feb 13 15:27:03.728295 containerd[1457]: time="2025-02-13T15:27:03.728245880Z" level=info msg="Start streaming server" Feb 13 15:27:03.729704 containerd[1457]: time="2025-02-13T15:27:03.729678200Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 15:27:03.729833 containerd[1457]: time="2025-02-13T15:27:03.729816440Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 15:27:03.732092 containerd[1457]: time="2025-02-13T15:27:03.730782600Z" level=info msg="containerd successfully booted in 0.049322s" Feb 13 15:27:03.730904 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 15:27:03.750768 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 15:27:03.759889 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 15:27:03.767570 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 15:27:03.768013 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 15:27:03.776911 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 15:27:03.783989 tar[1433]: linux-arm64/LICENSE Feb 13 15:27:03.784084 tar[1433]: linux-arm64/README.md Feb 13 15:27:03.786016 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 15:27:03.790571 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 15:27:03.792702 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Feb 13 15:27:03.793767 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 15:27:03.795293 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Feb 13 15:27:04.329191 systemd-networkd[1363]: eth0: Gained IPv6LL Feb 13 15:27:04.332106 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 15:27:04.333492 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 15:27:04.347887 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 15:27:04.350027 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:27:04.351786 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 15:27:04.365254 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 15:27:04.365491 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 15:27:04.367107 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 15:27:04.370050 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 15:27:04.829415 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:27:04.830746 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 15:27:04.831993 systemd[1]: Startup finished in 564ms (kernel) + 4.823s (initrd) + 3.337s (userspace) = 8.726s. 
Feb 13 15:27:04.834097 (kubelet)[1520]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:27:05.326122 kubelet[1520]: E0213 15:27:05.326024 1520 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:27:05.328892 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:27:05.329036 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:27:09.308609 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 15:27:09.309792 systemd[1]: Started sshd@0-10.0.0.91:22-10.0.0.1:33688.service - OpenSSH per-connection server daemon (10.0.0.1:33688). Feb 13 15:27:09.368327 sshd[1534]: Accepted publickey for core from 10.0.0.1 port 33688 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:27:09.370573 sshd-session[1534]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:27:09.377937 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 15:27:09.387913 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 15:27:09.391157 systemd-logind[1422]: New session 1 of user core. Feb 13 15:27:09.398233 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 15:27:09.400603 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 15:27:09.407838 (systemd)[1538]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 15:27:09.486294 systemd[1538]: Queued start job for default target default.target. 
Feb 13 15:27:09.492575 systemd[1538]: Created slice app.slice - User Application Slice. Feb 13 15:27:09.492622 systemd[1538]: Reached target paths.target - Paths. Feb 13 15:27:09.492668 systemd[1538]: Reached target timers.target - Timers. Feb 13 15:27:09.493962 systemd[1538]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 15:27:09.503916 systemd[1538]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 15:27:09.503985 systemd[1538]: Reached target sockets.target - Sockets. Feb 13 15:27:09.503997 systemd[1538]: Reached target basic.target - Basic System. Feb 13 15:27:09.504033 systemd[1538]: Reached target default.target - Main User Target. Feb 13 15:27:09.504060 systemd[1538]: Startup finished in 90ms. Feb 13 15:27:09.504274 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 15:27:09.505557 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 15:27:09.572661 systemd[1]: Started sshd@1-10.0.0.91:22-10.0.0.1:33700.service - OpenSSH per-connection server daemon (10.0.0.1:33700). Feb 13 15:27:09.620093 sshd[1549]: Accepted publickey for core from 10.0.0.1 port 33700 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:27:09.621623 sshd-session[1549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:27:09.625873 systemd-logind[1422]: New session 2 of user core. Feb 13 15:27:09.634825 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 15:27:09.688343 sshd[1551]: Connection closed by 10.0.0.1 port 33700 Feb 13 15:27:09.688830 sshd-session[1549]: pam_unix(sshd:session): session closed for user core Feb 13 15:27:09.699064 systemd[1]: sshd@1-10.0.0.91:22-10.0.0.1:33700.service: Deactivated successfully. Feb 13 15:27:09.701335 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 15:27:09.702800 systemd-logind[1422]: Session 2 logged out. Waiting for processes to exit. 
Feb 13 15:27:09.712011 systemd[1]: Started sshd@2-10.0.0.91:22-10.0.0.1:33714.service - OpenSSH per-connection server daemon (10.0.0.1:33714). Feb 13 15:27:09.713089 systemd-logind[1422]: Removed session 2. Feb 13 15:27:09.755515 sshd[1556]: Accepted publickey for core from 10.0.0.1 port 33714 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:27:09.756903 sshd-session[1556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:27:09.761699 systemd-logind[1422]: New session 3 of user core. Feb 13 15:27:09.767831 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 15:27:09.816486 sshd[1558]: Connection closed by 10.0.0.1 port 33714 Feb 13 15:27:09.817102 sshd-session[1556]: pam_unix(sshd:session): session closed for user core Feb 13 15:27:09.832113 systemd[1]: sshd@2-10.0.0.91:22-10.0.0.1:33714.service: Deactivated successfully. Feb 13 15:27:09.834238 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 15:27:09.836701 systemd-logind[1422]: Session 3 logged out. Waiting for processes to exit. Feb 13 15:27:09.844928 systemd[1]: Started sshd@3-10.0.0.91:22-10.0.0.1:33720.service - OpenSSH per-connection server daemon (10.0.0.1:33720). Feb 13 15:27:09.846086 systemd-logind[1422]: Removed session 3. Feb 13 15:27:09.884028 sshd[1563]: Accepted publickey for core from 10.0.0.1 port 33720 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:27:09.885325 sshd-session[1563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:27:09.888859 systemd-logind[1422]: New session 4 of user core. Feb 13 15:27:09.902815 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 15:27:09.955241 sshd[1565]: Connection closed by 10.0.0.1 port 33720 Feb 13 15:27:09.955127 sshd-session[1563]: pam_unix(sshd:session): session closed for user core Feb 13 15:27:09.970027 systemd[1]: sshd@3-10.0.0.91:22-10.0.0.1:33720.service: Deactivated successfully. 
Feb 13 15:27:09.971454 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 15:27:09.972671 systemd-logind[1422]: Session 4 logged out. Waiting for processes to exit. Feb 13 15:27:09.973783 systemd[1]: Started sshd@4-10.0.0.91:22-10.0.0.1:33730.service - OpenSSH per-connection server daemon (10.0.0.1:33730). Feb 13 15:27:09.974652 systemd-logind[1422]: Removed session 4. Feb 13 15:27:10.016059 sshd[1570]: Accepted publickey for core from 10.0.0.1 port 33730 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:27:10.017367 sshd-session[1570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:27:10.021592 systemd-logind[1422]: New session 5 of user core. Feb 13 15:27:10.042834 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 15:27:10.117882 sudo[1573]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 15:27:10.118155 sudo[1573]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:27:10.135844 sudo[1573]: pam_unix(sudo:session): session closed for user root Feb 13 15:27:10.137732 sshd[1572]: Connection closed by 10.0.0.1 port 33730 Feb 13 15:27:10.138483 sshd-session[1570]: pam_unix(sshd:session): session closed for user core Feb 13 15:27:10.150446 systemd[1]: sshd@4-10.0.0.91:22-10.0.0.1:33730.service: Deactivated successfully. Feb 13 15:27:10.152245 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 15:27:10.153616 systemd-logind[1422]: Session 5 logged out. Waiting for processes to exit. Feb 13 15:27:10.155829 systemd[1]: Started sshd@5-10.0.0.91:22-10.0.0.1:33740.service - OpenSSH per-connection server daemon (10.0.0.1:33740). Feb 13 15:27:10.156730 systemd-logind[1422]: Removed session 5. 
Feb 13 15:27:10.198340 sshd[1578]: Accepted publickey for core from 10.0.0.1 port 33740 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8
Feb 13 15:27:10.199683 sshd-session[1578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:27:10.203651 systemd-logind[1422]: New session 6 of user core.
Feb 13 15:27:10.210776 systemd[1]: Started session-6.scope - Session 6 of User core.
Feb 13 15:27:10.261825 sudo[1582]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Feb 13 15:27:10.262122 sudo[1582]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 15:27:10.265232 sudo[1582]: pam_unix(sudo:session): session closed for user root
Feb 13 15:27:10.269803 sudo[1581]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Feb 13 15:27:10.270071 sudo[1581]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 15:27:10.289132 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 15:27:10.315130 augenrules[1604]: No rules
Feb 13 15:27:10.316313 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 15:27:10.316550 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 15:27:10.317708 sudo[1581]: pam_unix(sudo:session): session closed for user root
Feb 13 15:27:10.319548 sshd[1580]: Connection closed by 10.0.0.1 port 33740
Feb 13 15:27:10.319473 sshd-session[1578]: pam_unix(sshd:session): session closed for user core
Feb 13 15:27:10.330128 systemd[1]: sshd@5-10.0.0.91:22-10.0.0.1:33740.service: Deactivated successfully.
Feb 13 15:27:10.331506 systemd[1]: session-6.scope: Deactivated successfully.
Feb 13 15:27:10.333837 systemd-logind[1422]: Session 6 logged out. Waiting for processes to exit.
Feb 13 15:27:10.334926 systemd[1]: Started sshd@6-10.0.0.91:22-10.0.0.1:33752.service - OpenSSH per-connection server daemon (10.0.0.1:33752).
Feb 13 15:27:10.335712 systemd-logind[1422]: Removed session 6.
Feb 13 15:27:10.380394 sshd[1612]: Accepted publickey for core from 10.0.0.1 port 33752 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8
Feb 13 15:27:10.382030 sshd-session[1612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:27:10.386130 systemd-logind[1422]: New session 7 of user core.
Feb 13 15:27:10.393798 systemd[1]: Started session-7.scope - Session 7 of User core.
Feb 13 15:27:10.444269 sudo[1615]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 13 15:27:10.444889 sudo[1615]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 15:27:10.760909 systemd[1]: Starting docker.service - Docker Application Container Engine...
Feb 13 15:27:10.761086 (dockerd)[1635]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Feb 13 15:27:11.025586 dockerd[1635]: time="2025-02-13T15:27:11.025450907Z" level=info msg="Starting up"
Feb 13 15:27:11.170346 dockerd[1635]: time="2025-02-13T15:27:11.170294429Z" level=info msg="Loading containers: start."
Feb 13 15:27:11.314660 kernel: Initializing XFRM netlink socket
Feb 13 15:27:11.385823 systemd-networkd[1363]: docker0: Link UP
Feb 13 15:27:11.423024 dockerd[1635]: time="2025-02-13T15:27:11.422982890Z" level=info msg="Loading containers: done."
Feb 13 15:27:11.443061 dockerd[1635]: time="2025-02-13T15:27:11.442669617Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Feb 13 15:27:11.443061 dockerd[1635]: time="2025-02-13T15:27:11.442774389Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
Feb 13 15:27:11.443061 dockerd[1635]: time="2025-02-13T15:27:11.442879000Z" level=info msg="Daemon has completed initialization"
Feb 13 15:27:11.471241 dockerd[1635]: time="2025-02-13T15:27:11.471103876Z" level=info msg="API listen on /run/docker.sock"
Feb 13 15:27:11.471525 systemd[1]: Started docker.service - Docker Application Container Engine.
Feb 13 15:27:12.111364 containerd[1457]: time="2025-02-13T15:27:12.111309728Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\""
Feb 13 15:27:12.682283 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2320622692.mount: Deactivated successfully.
Feb 13 15:27:13.673236 containerd[1457]: time="2025-02-13T15:27:13.673166093Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:27:13.674011 containerd[1457]: time="2025-02-13T15:27:13.673957844Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.10: active requests=0, bytes read=29865209"
Feb 13 15:27:13.674562 containerd[1457]: time="2025-02-13T15:27:13.674520255Z" level=info msg="ImageCreate event name:\"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:27:13.677629 containerd[1457]: time="2025-02-13T15:27:13.677587601Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:27:13.678732 containerd[1457]: time="2025-02-13T15:27:13.678696020Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.10\" with image id \"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\", size \"29862007\" in 1.567336158s"
Feb 13 15:27:13.678762 containerd[1457]: time="2025-02-13T15:27:13.678735554Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\" returns image reference \"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\""
Feb 13 15:27:13.697108 containerd[1457]: time="2025-02-13T15:27:13.697070638Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\""
Feb 13 15:27:15.051694 containerd[1457]: time="2025-02-13T15:27:15.051632061Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:27:15.054694 containerd[1457]: time="2025-02-13T15:27:15.054643629Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.10: active requests=0, bytes read=26898596"
Feb 13 15:27:15.055728 containerd[1457]: time="2025-02-13T15:27:15.055691791Z" level=info msg="ImageCreate event name:\"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:27:15.058476 containerd[1457]: time="2025-02-13T15:27:15.058416527Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:27:15.059655 containerd[1457]: time="2025-02-13T15:27:15.059632805Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.10\" with image id \"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\", size \"28302323\" in 1.362516215s"
Feb 13 15:27:15.059819 containerd[1457]: time="2025-02-13T15:27:15.059722998Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\" returns image reference \"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\""
Feb 13 15:27:15.079450 containerd[1457]: time="2025-02-13T15:27:15.079412500Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\""
Feb 13 15:27:15.579393 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Feb 13 15:27:15.589813 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:27:15.689982 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:27:15.693597 (kubelet)[1921]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 15:27:15.736986 kubelet[1921]: E0213 15:27:15.736930 1921 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 15:27:15.740442 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 15:27:15.740628 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 15:27:16.073782 containerd[1457]: time="2025-02-13T15:27:16.073484940Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:27:16.074305 containerd[1457]: time="2025-02-13T15:27:16.074209871Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.10: active requests=0, bytes read=16164936"
Feb 13 15:27:16.075276 containerd[1457]: time="2025-02-13T15:27:16.075216162Z" level=info msg="ImageCreate event name:\"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:27:16.078329 containerd[1457]: time="2025-02-13T15:27:16.078273226Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:27:16.079374 containerd[1457]: time="2025-02-13T15:27:16.079339416Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.10\" with image id \"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\", size \"17568681\" in 999.888664ms"
Feb 13 15:27:16.079431 containerd[1457]: time="2025-02-13T15:27:16.079372742Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\" returns image reference \"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\""
Feb 13 15:27:16.098269 containerd[1457]: time="2025-02-13T15:27:16.098218595Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\""
Feb 13 15:27:17.137912 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount232884329.mount: Deactivated successfully.
Feb 13 15:27:17.325119 containerd[1457]: time="2025-02-13T15:27:17.325046806Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:27:17.326825 containerd[1457]: time="2025-02-13T15:27:17.326774580Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=25663372"
Feb 13 15:27:17.327823 containerd[1457]: time="2025-02-13T15:27:17.327789083Z" level=info msg="ImageCreate event name:\"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:27:17.329828 containerd[1457]: time="2025-02-13T15:27:17.329791936Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:27:17.330518 containerd[1457]: time="2025-02-13T15:27:17.330490422Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"25662389\" in 1.232237862s"
Feb 13 15:27:17.330547 containerd[1457]: time="2025-02-13T15:27:17.330518464Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\""
Feb 13 15:27:17.353220 containerd[1457]: time="2025-02-13T15:27:17.353182163Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Feb 13 15:27:17.969657 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2818862923.mount: Deactivated successfully.
Feb 13 15:27:18.711245 containerd[1457]: time="2025-02-13T15:27:18.711166280Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:27:18.711675 containerd[1457]: time="2025-02-13T15:27:18.711583231Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383"
Feb 13 15:27:18.712636 containerd[1457]: time="2025-02-13T15:27:18.712590275Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:27:18.717890 containerd[1457]: time="2025-02-13T15:27:18.717818594Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:27:18.719443 containerd[1457]: time="2025-02-13T15:27:18.719394730Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.366172799s"
Feb 13 15:27:18.719482 containerd[1457]: time="2025-02-13T15:27:18.719442914Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Feb 13 15:27:18.739525 containerd[1457]: time="2025-02-13T15:27:18.739478154Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Feb 13 15:27:19.180446 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount932816973.mount: Deactivated successfully.
Feb 13 15:27:19.187920 containerd[1457]: time="2025-02-13T15:27:19.187864264Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:27:19.189427 containerd[1457]: time="2025-02-13T15:27:19.189322577Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823"
Feb 13 15:27:19.190292 containerd[1457]: time="2025-02-13T15:27:19.190253167Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:27:19.192380 containerd[1457]: time="2025-02-13T15:27:19.192318950Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:27:19.193124 containerd[1457]: time="2025-02-13T15:27:19.193040602Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 453.512421ms"
Feb 13 15:27:19.193124 containerd[1457]: time="2025-02-13T15:27:19.193072468Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Feb 13 15:27:19.212929 containerd[1457]: time="2025-02-13T15:27:19.212889170Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Feb 13 15:27:20.043820 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2365574676.mount: Deactivated successfully.
Feb 13 15:27:21.729360 containerd[1457]: time="2025-02-13T15:27:21.729295389Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:27:21.729954 containerd[1457]: time="2025-02-13T15:27:21.729786447Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474"
Feb 13 15:27:21.731436 containerd[1457]: time="2025-02-13T15:27:21.731389311Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:27:21.734764 containerd[1457]: time="2025-02-13T15:27:21.734722646Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:27:21.736713 containerd[1457]: time="2025-02-13T15:27:21.736665982Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 2.52373528s"
Feb 13 15:27:21.736764 containerd[1457]: time="2025-02-13T15:27:21.736712742Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\""
Feb 13 15:27:25.990902 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Feb 13 15:27:26.001850 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:27:26.107885 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:27:26.112842 (kubelet)[2141]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 15:27:26.191487 kubelet[2141]: E0213 15:27:26.191430 2141 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 15:27:26.194159 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 15:27:26.194319 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 15:27:26.860992 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:27:26.872956 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:27:26.893172 systemd[1]: Reloading requested from client PID 2158 ('systemctl') (unit session-7.scope)...
Feb 13 15:27:26.893191 systemd[1]: Reloading...
Feb 13 15:27:26.968648 zram_generator::config[2198]: No configuration found.
Feb 13 15:27:27.204149 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:27:27.259833 systemd[1]: Reloading finished in 366 ms.
Feb 13 15:27:27.310832 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:27:27.314970 systemd[1]: kubelet.service: Deactivated successfully.
Feb 13 15:27:27.315212 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:27:27.323919 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:27:27.438991 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:27:27.443279 (kubelet)[2244]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 15:27:27.485901 kubelet[2244]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:27:27.485901 kubelet[2244]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 13 15:27:27.485901 kubelet[2244]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:27:27.486961 kubelet[2244]: I0213 15:27:27.486880 2244 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 15:27:27.944722 kubelet[2244]: I0213 15:27:27.943890 2244 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Feb 13 15:27:27.944722 kubelet[2244]: I0213 15:27:27.943924 2244 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 15:27:27.944722 kubelet[2244]: I0213 15:27:27.944137 2244 server.go:927] "Client rotation is on, will bootstrap in background"
Feb 13 15:27:28.000253 kubelet[2244]: E0213 15:27:28.000193 2244 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.91:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.91:6443: connect: connection refused
Feb 13 15:27:28.000382 kubelet[2244]: I0213 15:27:28.000346 2244 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 15:27:28.016118 kubelet[2244]: I0213 15:27:28.016064 2244 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 13 15:27:28.017398 kubelet[2244]: I0213 15:27:28.017338 2244 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 15:27:28.017627 kubelet[2244]: I0213 15:27:28.017398 2244 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Feb 13 15:27:28.017733 kubelet[2244]: I0213 15:27:28.017689 2244 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 15:27:28.017733 kubelet[2244]: I0213 15:27:28.017699 2244 container_manager_linux.go:301] "Creating device plugin manager"
Feb 13 15:27:28.018001 kubelet[2244]: I0213 15:27:28.017974 2244 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:27:28.019870 kubelet[2244]: W0213 15:27:28.019812 2244 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.91:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused
Feb 13 15:27:28.020023 kubelet[2244]: E0213 15:27:28.019999 2244 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.91:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused
Feb 13 15:27:28.020530 kubelet[2244]: I0213 15:27:28.020501 2244 kubelet.go:400] "Attempting to sync node with API server"
Feb 13 15:27:28.020580 kubelet[2244]: I0213 15:27:28.020537 2244 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 15:27:28.020670 kubelet[2244]: I0213 15:27:28.020655 2244 kubelet.go:312] "Adding apiserver pod source"
Feb 13 15:27:28.020713 kubelet[2244]: I0213 15:27:28.020674 2244 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 15:27:28.021326 kubelet[2244]: W0213 15:27:28.021221 2244 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.91:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused
Feb 13 15:27:28.021326 kubelet[2244]: E0213 15:27:28.021281 2244 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.91:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused
Feb 13 15:27:28.025782 kubelet[2244]: I0213 15:27:28.025689 2244 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Feb 13 15:27:28.026191 kubelet[2244]: I0213 15:27:28.026163 2244 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 13 15:27:28.026308 kubelet[2244]: W0213 15:27:28.026296 2244 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 13 15:27:28.027170 kubelet[2244]: I0213 15:27:28.027150 2244 server.go:1264] "Started kubelet"
Feb 13 15:27:28.027814 kubelet[2244]: I0213 15:27:28.027453 2244 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 15:27:28.028778 kubelet[2244]: I0213 15:27:28.028568 2244 server.go:455] "Adding debug handlers to kubelet server"
Feb 13 15:27:28.029472 kubelet[2244]: I0213 15:27:28.029418 2244 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 13 15:27:28.029804 kubelet[2244]: I0213 15:27:28.029782 2244 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 13 15:27:28.032306 kubelet[2244]: I0213 15:27:28.031079 2244 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 13 15:27:28.032306 kubelet[2244]: E0213 15:27:28.031238 2244 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.91:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.91:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823ce0fb97fad72 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 15:27:28.027127154 +0000 UTC m=+0.579568488,LastTimestamp:2025-02-13 15:27:28.027127154 +0000 UTC m=+0.579568488,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Feb 13 15:27:28.032671 kubelet[2244]: E0213 15:27:28.032631 2244 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 13 15:27:28.032780 kubelet[2244]: I0213 15:27:28.032757 2244 volume_manager.go:291] "Starting Kubelet Volume Manager"
Feb 13 15:27:28.032906 kubelet[2244]: I0213 15:27:28.032885 2244 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Feb 13 15:27:28.034769 kubelet[2244]: I0213 15:27:28.034121 2244 reconciler.go:26] "Reconciler: start to sync state"
Feb 13 15:27:28.034769 kubelet[2244]: W0213 15:27:28.034596 2244 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.91:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused
Feb 13 15:27:28.034769 kubelet[2244]: E0213 15:27:28.034680 2244 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.91:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused
Feb 13 15:27:28.034769 kubelet[2244]: E0213 15:27:28.034723 2244 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.91:6443: connect: connection refused" interval="200ms"
Feb 13 15:27:28.035680 kubelet[2244]: I0213 15:27:28.035641 2244 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 13 15:27:28.038373 kubelet[2244]: E0213 15:27:28.038324 2244 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 13 15:27:28.038485 kubelet[2244]: I0213 15:27:28.038438 2244 factory.go:221] Registration of the containerd container factory successfully
Feb 13 15:27:28.038485 kubelet[2244]: I0213 15:27:28.038460 2244 factory.go:221] Registration of the systemd container factory successfully
Feb 13 15:27:28.050212 kubelet[2244]: I0213 15:27:28.050156 2244 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 13 15:27:28.051545 kubelet[2244]: I0213 15:27:28.051486 2244 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 13 15:27:28.051545 kubelet[2244]: I0213 15:27:28.051543 2244 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 13 15:27:28.051702 kubelet[2244]: I0213 15:27:28.051564 2244 kubelet.go:2337] "Starting kubelet main sync loop"
Feb 13 15:27:28.051702 kubelet[2244]: E0213 15:27:28.051689 2244 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 13 15:27:28.052473 kubelet[2244]: I0213 15:27:28.052278 2244 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 13 15:27:28.052473 kubelet[2244]: I0213 15:27:28.052297 2244 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 13 15:27:28.052473 kubelet[2244]: I0213 15:27:28.052316 2244 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:27:28.052473 kubelet[2244]: W0213 15:27:28.052286 2244 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.91:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused
Feb 13 15:27:28.052473 kubelet[2244]: E0213 15:27:28.052352 2244 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.91:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused
Feb 13 15:27:28.126533 kubelet[2244]: I0213 15:27:28.126482 2244 policy_none.go:49] "None policy: Start"
Feb 13 15:27:28.127572 kubelet[2244]: I0213 15:27:28.127547 2244 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 13 15:27:28.127672 kubelet[2244]: I0213 15:27:28.127583 2244 state_mem.go:35] "Initializing new in-memory state store"
Feb 13 15:27:28.134468 kubelet[2244]: I0213 15:27:28.134436 2244 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Feb 13 15:27:28.135463 kubelet[2244]: E0213 15:27:28.135432 2244 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.91:6443/api/v1/nodes\": dial tcp 10.0.0.91:6443: connect: connection refused" node="localhost"
Feb 13 15:27:28.136495 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Feb 13 15:27:28.152142 kubelet[2244]: E0213 15:27:28.152098 2244 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Feb 13 15:27:28.157886 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Feb 13 15:27:28.161318 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Feb 13 15:27:28.172918 kubelet[2244]: I0213 15:27:28.172720 2244 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:27:28.173606 kubelet[2244]: I0213 15:27:28.172994 2244 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 15:27:28.173606 kubelet[2244]: I0213 15:27:28.173108 2244 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:27:28.174818 kubelet[2244]: E0213 15:27:28.174790 2244 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 13 15:27:28.236446 kubelet[2244]: E0213 15:27:28.236311 2244 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.91:6443: connect: connection refused" interval="400ms" Feb 13 15:27:28.337071 kubelet[2244]: I0213 15:27:28.337035 2244 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:27:28.337536 kubelet[2244]: E0213 15:27:28.337505 2244 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.91:6443/api/v1/nodes\": dial tcp 10.0.0.91:6443: connect: connection refused" node="localhost" Feb 13 15:27:28.352705 kubelet[2244]: I0213 15:27:28.352651 2244 topology_manager.go:215] "Topology Admit Handler" podUID="03a675c73794c37026dc97b22a0ce52d" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 13 15:27:28.354041 kubelet[2244]: I0213 15:27:28.353941 2244 topology_manager.go:215] "Topology Admit Handler" podUID="dd3721fb1a67092819e35b40473f4063" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 13 15:27:28.354900 kubelet[2244]: I0213 15:27:28.354873 2244 topology_manager.go:215] "Topology Admit Handler" podUID="8d610d6c43052dbc8df47eb68906a982" 
podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 13 15:27:28.360529 systemd[1]: Created slice kubepods-burstable-pod03a675c73794c37026dc97b22a0ce52d.slice - libcontainer container kubepods-burstable-pod03a675c73794c37026dc97b22a0ce52d.slice. Feb 13 15:27:28.384001 systemd[1]: Created slice kubepods-burstable-poddd3721fb1a67092819e35b40473f4063.slice - libcontainer container kubepods-burstable-poddd3721fb1a67092819e35b40473f4063.slice. Feb 13 15:27:28.397746 systemd[1]: Created slice kubepods-burstable-pod8d610d6c43052dbc8df47eb68906a982.slice - libcontainer container kubepods-burstable-pod8d610d6c43052dbc8df47eb68906a982.slice. Feb 13 15:27:28.436726 kubelet[2244]: I0213 15:27:28.436681 2244 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/03a675c73794c37026dc97b22a0ce52d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"03a675c73794c37026dc97b22a0ce52d\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:27:28.436726 kubelet[2244]: I0213 15:27:28.436721 2244 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:27:28.436886 kubelet[2244]: I0213 15:27:28.436744 2244 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:27:28.436886 kubelet[2244]: I0213 15:27:28.436773 2244 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:27:28.436886 kubelet[2244]: I0213 15:27:28.436812 2244 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d610d6c43052dbc8df47eb68906a982-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8d610d6c43052dbc8df47eb68906a982\") " pod="kube-system/kube-scheduler-localhost" Feb 13 15:27:28.436886 kubelet[2244]: I0213 15:27:28.436874 2244 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/03a675c73794c37026dc97b22a0ce52d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"03a675c73794c37026dc97b22a0ce52d\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:27:28.436976 kubelet[2244]: I0213 15:27:28.436915 2244 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/03a675c73794c37026dc97b22a0ce52d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"03a675c73794c37026dc97b22a0ce52d\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:27:28.436976 kubelet[2244]: I0213 15:27:28.436936 2244 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:27:28.436976 kubelet[2244]: I0213 15:27:28.436959 2244 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:27:28.637223 kubelet[2244]: E0213 15:27:28.637098 2244 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.91:6443: connect: connection refused" interval="800ms" Feb 13 15:27:28.682640 kubelet[2244]: E0213 15:27:28.682596 2244 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:28.683378 containerd[1457]: time="2025-02-13T15:27:28.683330156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:03a675c73794c37026dc97b22a0ce52d,Namespace:kube-system,Attempt:0,}" Feb 13 15:27:28.695974 kubelet[2244]: E0213 15:27:28.695926 2244 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:28.696451 containerd[1457]: time="2025-02-13T15:27:28.696406765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dd3721fb1a67092819e35b40473f4063,Namespace:kube-system,Attempt:0,}" Feb 13 15:27:28.699719 kubelet[2244]: E0213 15:27:28.699610 2244 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:28.700077 containerd[1457]: time="2025-02-13T15:27:28.700029973Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8d610d6c43052dbc8df47eb68906a982,Namespace:kube-system,Attempt:0,}" Feb 13 15:27:28.739467 kubelet[2244]: I0213 15:27:28.739430 2244 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:27:28.739985 kubelet[2244]: E0213 15:27:28.739957 2244 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.91:6443/api/v1/nodes\": dial tcp 10.0.0.91:6443: connect: connection refused" node="localhost" Feb 13 15:27:28.978740 kubelet[2244]: W0213 15:27:28.978524 2244 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.91:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused Feb 13 15:27:28.978740 kubelet[2244]: E0213 15:27:28.978644 2244 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.91:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused Feb 13 15:27:29.138092 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1598007934.mount: Deactivated successfully. 
Feb 13 15:27:29.143596 containerd[1457]: time="2025-02-13T15:27:29.143484962Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:27:29.145862 containerd[1457]: time="2025-02-13T15:27:29.145780225Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Feb 13 15:27:29.146804 containerd[1457]: time="2025-02-13T15:27:29.146770177Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:27:29.148220 containerd[1457]: time="2025-02-13T15:27:29.148147110Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:27:29.148928 containerd[1457]: time="2025-02-13T15:27:29.148875873Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:27:29.149739 containerd[1457]: time="2025-02-13T15:27:29.149674376Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:27:29.150431 containerd[1457]: time="2025-02-13T15:27:29.150383441Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:27:29.151519 containerd[1457]: time="2025-02-13T15:27:29.151472241Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:27:29.154854 
containerd[1457]: time="2025-02-13T15:27:29.154802376Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 471.390379ms" Feb 13 15:27:29.156207 containerd[1457]: time="2025-02-13T15:27:29.156160132Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 459.664439ms" Feb 13 15:27:29.158563 containerd[1457]: time="2025-02-13T15:27:29.158503437Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 458.402073ms" Feb 13 15:27:29.332376 containerd[1457]: time="2025-02-13T15:27:29.330525200Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:27:29.332376 containerd[1457]: time="2025-02-13T15:27:29.330605470Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:27:29.332376 containerd[1457]: time="2025-02-13T15:27:29.330660359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:27:29.332376 containerd[1457]: time="2025-02-13T15:27:29.330907977Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:27:29.335311 containerd[1457]: time="2025-02-13T15:27:29.335187509Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:27:29.335311 containerd[1457]: time="2025-02-13T15:27:29.335244519Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:27:29.335311 containerd[1457]: time="2025-02-13T15:27:29.335257410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:27:29.335574 containerd[1457]: time="2025-02-13T15:27:29.335506550Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:27:29.335574 containerd[1457]: time="2025-02-13T15:27:29.335555393Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:27:29.335648 containerd[1457]: time="2025-02-13T15:27:29.335570806Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:27:29.335696 containerd[1457]: time="2025-02-13T15:27:29.335666210Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:27:29.335882 containerd[1457]: time="2025-02-13T15:27:29.335790400Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:27:29.353886 systemd[1]: Started cri-containerd-350a6abb5a7d7006b1f9d3933af737b0a4534cbf8128edf4db58144b66da31e9.scope - libcontainer container 350a6abb5a7d7006b1f9d3933af737b0a4534cbf8128edf4db58144b66da31e9. 
Feb 13 15:27:29.357933 systemd[1]: Started cri-containerd-08d18d4a6ce96507e24c1f61e01e4356457d63eed8a3e056ed0d98a56f20b7e4.scope - libcontainer container 08d18d4a6ce96507e24c1f61e01e4356457d63eed8a3e056ed0d98a56f20b7e4. Feb 13 15:27:29.359447 systemd[1]: Started cri-containerd-624a3991fb6b8908182461bd719dcd65df00d0c68b0508d10ad2f6347371320e.scope - libcontainer container 624a3991fb6b8908182461bd719dcd65df00d0c68b0508d10ad2f6347371320e. Feb 13 15:27:29.391679 containerd[1457]: time="2025-02-13T15:27:29.391627129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8d610d6c43052dbc8df47eb68906a982,Namespace:kube-system,Attempt:0,} returns sandbox id \"350a6abb5a7d7006b1f9d3933af737b0a4534cbf8128edf4db58144b66da31e9\"" Feb 13 15:27:29.393135 kubelet[2244]: E0213 15:27:29.393101 2244 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:29.396404 containerd[1457]: time="2025-02-13T15:27:29.396364824Z" level=info msg="CreateContainer within sandbox \"350a6abb5a7d7006b1f9d3933af737b0a4534cbf8128edf4db58144b66da31e9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 15:27:29.399534 containerd[1457]: time="2025-02-13T15:27:29.398312620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:03a675c73794c37026dc97b22a0ce52d,Namespace:kube-system,Attempt:0,} returns sandbox id \"08d18d4a6ce96507e24c1f61e01e4356457d63eed8a3e056ed0d98a56f20b7e4\"" Feb 13 15:27:29.400275 containerd[1457]: time="2025-02-13T15:27:29.400170418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dd3721fb1a67092819e35b40473f4063,Namespace:kube-system,Attempt:0,} returns sandbox id \"624a3991fb6b8908182461bd719dcd65df00d0c68b0508d10ad2f6347371320e\"" Feb 13 15:27:29.400502 kubelet[2244]: E0213 15:27:29.400475 2244 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:29.400894 kubelet[2244]: E0213 15:27:29.400864 2244 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:29.403157 containerd[1457]: time="2025-02-13T15:27:29.403116054Z" level=info msg="CreateContainer within sandbox \"624a3991fb6b8908182461bd719dcd65df00d0c68b0508d10ad2f6347371320e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 15:27:29.405509 containerd[1457]: time="2025-02-13T15:27:29.403410153Z" level=info msg="CreateContainer within sandbox \"08d18d4a6ce96507e24c1f61e01e4356457d63eed8a3e056ed0d98a56f20b7e4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 15:27:29.408072 kubelet[2244]: W0213 15:27:29.408009 2244 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.91:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused Feb 13 15:27:29.408166 kubelet[2244]: E0213 15:27:29.408090 2244 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.91:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused Feb 13 15:27:29.424682 kubelet[2244]: W0213 15:27:29.424595 2244 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.91:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused Feb 13 15:27:29.424682 kubelet[2244]: E0213 15:27:29.424690 2244 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list 
*v1.CSIDriver: Get "https://10.0.0.91:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused Feb 13 15:27:29.429434 containerd[1457]: time="2025-02-13T15:27:29.429381521Z" level=info msg="CreateContainer within sandbox \"350a6abb5a7d7006b1f9d3933af737b0a4534cbf8128edf4db58144b66da31e9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7fa8d67d4039ce7ac3dea8c89efd38ac88a801d301452f404e490a891260af54\"" Feb 13 15:27:29.430196 containerd[1457]: time="2025-02-13T15:27:29.430165332Z" level=info msg="StartContainer for \"7fa8d67d4039ce7ac3dea8c89efd38ac88a801d301452f404e490a891260af54\"" Feb 13 15:27:29.433936 containerd[1457]: time="2025-02-13T15:27:29.433887693Z" level=info msg="CreateContainer within sandbox \"624a3991fb6b8908182461bd719dcd65df00d0c68b0508d10ad2f6347371320e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9f85b2959fc46651a2b2eb45b3b74a47a8b53eb1ea0d7f4c2888cc56cfd389a2\"" Feb 13 15:27:29.434434 containerd[1457]: time="2025-02-13T15:27:29.434400104Z" level=info msg="StartContainer for \"9f85b2959fc46651a2b2eb45b3b74a47a8b53eb1ea0d7f4c2888cc56cfd389a2\"" Feb 13 15:27:29.437912 kubelet[2244]: E0213 15:27:29.437868 2244 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.91:6443: connect: connection refused" interval="1.6s" Feb 13 15:27:29.438342 containerd[1457]: time="2025-02-13T15:27:29.438307948Z" level=info msg="CreateContainer within sandbox \"08d18d4a6ce96507e24c1f61e01e4356457d63eed8a3e056ed0d98a56f20b7e4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"fb7ac423b79581d06129c38d117d793406edffbbef95d0073fe7b5023ce51b39\"" Feb 13 15:27:29.439813 containerd[1457]: time="2025-02-13T15:27:29.439770437Z" level=info msg="StartContainer for 
\"fb7ac423b79581d06129c38d117d793406edffbbef95d0073fe7b5023ce51b39\"" Feb 13 15:27:29.463851 systemd[1]: Started cri-containerd-7fa8d67d4039ce7ac3dea8c89efd38ac88a801d301452f404e490a891260af54.scope - libcontainer container 7fa8d67d4039ce7ac3dea8c89efd38ac88a801d301452f404e490a891260af54. Feb 13 15:27:29.467591 systemd[1]: Started cri-containerd-9f85b2959fc46651a2b2eb45b3b74a47a8b53eb1ea0d7f4c2888cc56cfd389a2.scope - libcontainer container 9f85b2959fc46651a2b2eb45b3b74a47a8b53eb1ea0d7f4c2888cc56cfd389a2. Feb 13 15:27:29.468505 systemd[1]: Started cri-containerd-fb7ac423b79581d06129c38d117d793406edffbbef95d0073fe7b5023ce51b39.scope - libcontainer container fb7ac423b79581d06129c38d117d793406edffbbef95d0073fe7b5023ce51b39. Feb 13 15:27:29.546101 kubelet[2244]: I0213 15:27:29.541493 2244 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:27:29.546101 kubelet[2244]: E0213 15:27:29.541849 2244 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.91:6443/api/v1/nodes\": dial tcp 10.0.0.91:6443: connect: connection refused" node="localhost" Feb 13 15:27:29.554546 containerd[1457]: time="2025-02-13T15:27:29.554482853Z" level=info msg="StartContainer for \"fb7ac423b79581d06129c38d117d793406edffbbef95d0073fe7b5023ce51b39\" returns successfully" Feb 13 15:27:29.554707 containerd[1457]: time="2025-02-13T15:27:29.554506874Z" level=info msg="StartContainer for \"7fa8d67d4039ce7ac3dea8c89efd38ac88a801d301452f404e490a891260af54\" returns successfully" Feb 13 15:27:29.555339 containerd[1457]: time="2025-02-13T15:27:29.554516122Z" level=info msg="StartContainer for \"9f85b2959fc46651a2b2eb45b3b74a47a8b53eb1ea0d7f4c2888cc56cfd389a2\" returns successfully" Feb 13 15:27:29.625527 kubelet[2244]: W0213 15:27:29.625376 2244 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.91:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial 
tcp 10.0.0.91:6443: connect: connection refused Feb 13 15:27:29.625527 kubelet[2244]: E0213 15:27:29.625449 2244 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.91:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused Feb 13 15:27:30.059647 kubelet[2244]: E0213 15:27:30.059379 2244 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:30.063906 kubelet[2244]: E0213 15:27:30.062956 2244 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:30.066175 kubelet[2244]: E0213 15:27:30.066080 2244 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:31.021772 kubelet[2244]: I0213 15:27:31.021727 2244 apiserver.go:52] "Watching apiserver" Feb 13 15:27:31.033588 kubelet[2244]: I0213 15:27:31.033525 2244 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 15:27:31.042313 kubelet[2244]: E0213 15:27:31.042224 2244 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Feb 13 15:27:31.067169 kubelet[2244]: E0213 15:27:31.067143 2244 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:31.144451 kubelet[2244]: I0213 15:27:31.143468 2244 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:27:31.150583 kubelet[2244]: I0213 15:27:31.150413 
2244 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Feb 13 15:27:32.453112 kubelet[2244]: E0213 15:27:32.453048 2244 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:32.991022 systemd[1]: Reloading requested from client PID 2527 ('systemctl') (unit session-7.scope)... Feb 13 15:27:32.991040 systemd[1]: Reloading... Feb 13 15:27:33.058679 zram_generator::config[2569]: No configuration found. Feb 13 15:27:33.069951 kubelet[2244]: E0213 15:27:33.069915 2244 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:33.138534 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:27:33.199860 systemd[1]: Reloading finished in 208 ms. Feb 13 15:27:33.227910 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:27:33.244729 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 15:27:33.244935 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:27:33.253945 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:27:33.344970 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:27:33.349460 (kubelet)[2608]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:27:33.385753 kubelet[2608]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 15:27:33.385753 kubelet[2608]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:27:33.385753 kubelet[2608]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:27:33.386857 kubelet[2608]: I0213 15:27:33.385809 2608 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:27:33.389947 kubelet[2608]: I0213 15:27:33.389923 2608 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 15:27:33.389947 kubelet[2608]: I0213 15:27:33.389946 2608 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:27:33.390115 kubelet[2608]: I0213 15:27:33.390101 2608 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 15:27:33.391583 kubelet[2608]: I0213 15:27:33.391336 2608 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 15:27:33.392599 kubelet[2608]: I0213 15:27:33.392517 2608 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:27:33.399525 kubelet[2608]: I0213 15:27:33.399479 2608 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 15:27:33.399700 kubelet[2608]: I0213 15:27:33.399676 2608 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:27:33.399856 kubelet[2608]: I0213 15:27:33.399699 2608 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 15:27:33.399929 kubelet[2608]: I0213 15:27:33.399862 2608 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 
15:27:33.399929 kubelet[2608]: I0213 15:27:33.399871 2608 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 15:27:33.399929 kubelet[2608]: I0213 15:27:33.399902 2608 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:27:33.400000 kubelet[2608]: I0213 15:27:33.399992 2608 kubelet.go:400] "Attempting to sync node with API server" Feb 13 15:27:33.400070 kubelet[2608]: I0213 15:27:33.400005 2608 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:27:33.400070 kubelet[2608]: I0213 15:27:33.400032 2608 kubelet.go:312] "Adding apiserver pod source" Feb 13 15:27:33.400070 kubelet[2608]: I0213 15:27:33.400047 2608 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:27:33.400992 kubelet[2608]: I0213 15:27:33.400783 2608 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:27:33.401148 kubelet[2608]: I0213 15:27:33.401130 2608 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:27:33.402220 kubelet[2608]: I0213 15:27:33.402196 2608 server.go:1264] "Started kubelet" Feb 13 15:27:33.404623 kubelet[2608]: I0213 15:27:33.402337 2608 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:27:33.407267 kubelet[2608]: I0213 15:27:33.407248 2608 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:27:33.407389 kubelet[2608]: I0213 15:27:33.405531 2608 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:27:33.407933 kubelet[2608]: I0213 15:27:33.407914 2608 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 15:27:33.408324 kubelet[2608]: I0213 15:27:33.407988 2608 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:27:33.409291 kubelet[2608]: E0213 15:27:33.408829 2608 
kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 15:27:33.414475 kubelet[2608]: I0213 15:27:33.414447 2608 server.go:455] "Adding debug handlers to kubelet server" Feb 13 15:27:33.415672 kubelet[2608]: I0213 15:27:33.415280 2608 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 15:27:33.415672 kubelet[2608]: I0213 15:27:33.415408 2608 reconciler.go:26] "Reconciler: start to sync state" Feb 13 15:27:33.420432 kubelet[2608]: E0213 15:27:33.420409 2608 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:27:33.421419 kubelet[2608]: I0213 15:27:33.421232 2608 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:27:33.423561 kubelet[2608]: I0213 15:27:33.423427 2608 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 15:27:33.423561 kubelet[2608]: I0213 15:27:33.423467 2608 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:27:33.423561 kubelet[2608]: I0213 15:27:33.423486 2608 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 15:27:33.424131 kubelet[2608]: E0213 15:27:33.423526 2608 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:27:33.427415 kubelet[2608]: I0213 15:27:33.427366 2608 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:27:33.427415 kubelet[2608]: I0213 15:27:33.427397 2608 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:27:33.427746 kubelet[2608]: I0213 15:27:33.427520 2608 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or 
directory Feb 13 15:27:33.458908 kubelet[2608]: I0213 15:27:33.458880 2608 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:27:33.458908 kubelet[2608]: I0213 15:27:33.458900 2608 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:27:33.459060 kubelet[2608]: I0213 15:27:33.458923 2608 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:27:33.459098 kubelet[2608]: I0213 15:27:33.459077 2608 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 15:27:33.459126 kubelet[2608]: I0213 15:27:33.459093 2608 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 15:27:33.459126 kubelet[2608]: I0213 15:27:33.459113 2608 policy_none.go:49] "None policy: Start" Feb 13 15:27:33.459767 kubelet[2608]: I0213 15:27:33.459749 2608 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:27:33.459831 kubelet[2608]: I0213 15:27:33.459779 2608 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:27:33.459938 kubelet[2608]: I0213 15:27:33.459923 2608 state_mem.go:75] "Updated machine memory state" Feb 13 15:27:33.463555 kubelet[2608]: I0213 15:27:33.463525 2608 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:27:33.463908 kubelet[2608]: I0213 15:27:33.463731 2608 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 15:27:33.463908 kubelet[2608]: I0213 15:27:33.463838 2608 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:27:33.513735 kubelet[2608]: I0213 15:27:33.512996 2608 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 15:27:33.519189 kubelet[2608]: I0213 15:27:33.519121 2608 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Feb 13 15:27:33.519311 kubelet[2608]: I0213 15:27:33.519209 2608 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Feb 13 
15:27:33.524462 kubelet[2608]: I0213 15:27:33.524428 2608 topology_manager.go:215] "Topology Admit Handler" podUID="03a675c73794c37026dc97b22a0ce52d" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 13 15:27:33.524635 kubelet[2608]: I0213 15:27:33.524558 2608 topology_manager.go:215] "Topology Admit Handler" podUID="dd3721fb1a67092819e35b40473f4063" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 13 15:27:33.524635 kubelet[2608]: I0213 15:27:33.524597 2608 topology_manager.go:215] "Topology Admit Handler" podUID="8d610d6c43052dbc8df47eb68906a982" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 13 15:27:33.532428 kubelet[2608]: E0213 15:27:33.531396 2608 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 13 15:27:33.616557 kubelet[2608]: I0213 15:27:33.616499 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:27:33.616557 kubelet[2608]: I0213 15:27:33.616551 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:27:33.616721 kubelet[2608]: I0213 15:27:33.616575 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d610d6c43052dbc8df47eb68906a982-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: 
\"8d610d6c43052dbc8df47eb68906a982\") " pod="kube-system/kube-scheduler-localhost" Feb 13 15:27:33.616721 kubelet[2608]: I0213 15:27:33.616590 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/03a675c73794c37026dc97b22a0ce52d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"03a675c73794c37026dc97b22a0ce52d\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:27:33.616721 kubelet[2608]: I0213 15:27:33.616605 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:27:33.616721 kubelet[2608]: I0213 15:27:33.616639 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:27:33.616721 kubelet[2608]: I0213 15:27:33.616655 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/03a675c73794c37026dc97b22a0ce52d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"03a675c73794c37026dc97b22a0ce52d\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:27:33.616849 kubelet[2608]: I0213 15:27:33.616669 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/03a675c73794c37026dc97b22a0ce52d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" 
(UID: \"03a675c73794c37026dc97b22a0ce52d\") " pod="kube-system/kube-apiserver-localhost" Feb 13 15:27:33.616849 kubelet[2608]: I0213 15:27:33.616683 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 15:27:33.830395 kubelet[2608]: E0213 15:27:33.830263 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:33.832132 kubelet[2608]: E0213 15:27:33.832069 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:33.832277 kubelet[2608]: E0213 15:27:33.832260 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:34.401358 kubelet[2608]: I0213 15:27:34.401302 2608 apiserver.go:52] "Watching apiserver" Feb 13 15:27:34.415762 kubelet[2608]: I0213 15:27:34.415718 2608 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 15:27:34.440845 kubelet[2608]: E0213 15:27:34.440814 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:34.450964 kubelet[2608]: E0213 15:27:34.450919 2608 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Feb 13 15:27:34.451352 kubelet[2608]: E0213 15:27:34.451324 2608 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:34.451735 kubelet[2608]: E0213 15:27:34.451715 2608 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 13 15:27:34.452101 kubelet[2608]: E0213 15:27:34.452076 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:34.464300 kubelet[2608]: I0213 15:27:34.464211 2608 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.464188768 podStartE2EDuration="1.464188768s" podCreationTimestamp="2025-02-13 15:27:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:27:34.463462359 +0000 UTC m=+1.110463710" watchObservedRunningTime="2025-02-13 15:27:34.464188768 +0000 UTC m=+1.111190199" Feb 13 15:27:34.479920 kubelet[2608]: I0213 15:27:34.479527 2608 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.479511297 podStartE2EDuration="1.479511297s" podCreationTimestamp="2025-02-13 15:27:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:27:34.470958909 +0000 UTC m=+1.117960260" watchObservedRunningTime="2025-02-13 15:27:34.479511297 +0000 UTC m=+1.126512648" Feb 13 15:27:34.491502 kubelet[2608]: I0213 15:27:34.490266 2608 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.490248192 podStartE2EDuration="2.490248192s" 
podCreationTimestamp="2025-02-13 15:27:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:27:34.479727915 +0000 UTC m=+1.126729266" watchObservedRunningTime="2025-02-13 15:27:34.490248192 +0000 UTC m=+1.137249543" Feb 13 15:27:35.443611 kubelet[2608]: E0213 15:27:35.442689 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:35.443611 kubelet[2608]: E0213 15:27:35.442713 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:38.289338 sudo[1615]: pam_unix(sudo:session): session closed for user root Feb 13 15:27:38.291222 sshd[1614]: Connection closed by 10.0.0.1 port 33752 Feb 13 15:27:38.291798 sshd-session[1612]: pam_unix(sshd:session): session closed for user core Feb 13 15:27:38.296084 systemd[1]: sshd@6-10.0.0.91:22-10.0.0.1:33752.service: Deactivated successfully. Feb 13 15:27:38.298020 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 15:27:38.298186 systemd[1]: session-7.scope: Consumed 7.491s CPU time, 192.0M memory peak, 0B memory swap peak. Feb 13 15:27:38.298833 systemd-logind[1422]: Session 7 logged out. Waiting for processes to exit. Feb 13 15:27:38.300015 systemd-logind[1422]: Removed session 7. 
Feb 13 15:27:41.473802 kubelet[2608]: E0213 15:27:41.473772 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:41.494610 kubelet[2608]: E0213 15:27:41.494572 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:42.046336 kubelet[2608]: E0213 15:27:42.046303 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:42.471739 kubelet[2608]: E0213 15:27:42.471480 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:42.471739 kubelet[2608]: E0213 15:27:42.471586 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:42.472147 kubelet[2608]: E0213 15:27:42.472124 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:49.117027 update_engine[1426]: I20250213 15:27:49.116907 1426 update_attempter.cc:509] Updating boot flags... 
Feb 13 15:27:49.162710 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2703) Feb 13 15:27:49.203161 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2707) Feb 13 15:27:49.380150 kubelet[2608]: I0213 15:27:49.380106 2608 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 15:27:49.388458 containerd[1457]: time="2025-02-13T15:27:49.388387665Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 15:27:49.388948 kubelet[2608]: I0213 15:27:49.388747 2608 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 15:27:49.743330 kubelet[2608]: I0213 15:27:49.743210 2608 topology_manager.go:215] "Topology Admit Handler" podUID="3f5434c8-b44b-4a7d-87ae-cea5b9f451c2" podNamespace="kube-system" podName="kube-proxy-vjhrf" Feb 13 15:27:49.759817 systemd[1]: Created slice kubepods-besteffort-pod3f5434c8_b44b_4a7d_87ae_cea5b9f451c2.slice - libcontainer container kubepods-besteffort-pod3f5434c8_b44b_4a7d_87ae_cea5b9f451c2.slice. 
Feb 13 15:27:49.835496 kubelet[2608]: I0213 15:27:49.835432 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m74v9\" (UniqueName: \"kubernetes.io/projected/3f5434c8-b44b-4a7d-87ae-cea5b9f451c2-kube-api-access-m74v9\") pod \"kube-proxy-vjhrf\" (UID: \"3f5434c8-b44b-4a7d-87ae-cea5b9f451c2\") " pod="kube-system/kube-proxy-vjhrf" Feb 13 15:27:49.835496 kubelet[2608]: I0213 15:27:49.835486 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3f5434c8-b44b-4a7d-87ae-cea5b9f451c2-kube-proxy\") pod \"kube-proxy-vjhrf\" (UID: \"3f5434c8-b44b-4a7d-87ae-cea5b9f451c2\") " pod="kube-system/kube-proxy-vjhrf" Feb 13 15:27:49.835496 kubelet[2608]: I0213 15:27:49.835505 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3f5434c8-b44b-4a7d-87ae-cea5b9f451c2-xtables-lock\") pod \"kube-proxy-vjhrf\" (UID: \"3f5434c8-b44b-4a7d-87ae-cea5b9f451c2\") " pod="kube-system/kube-proxy-vjhrf" Feb 13 15:27:49.836517 kubelet[2608]: I0213 15:27:49.836429 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3f5434c8-b44b-4a7d-87ae-cea5b9f451c2-lib-modules\") pod \"kube-proxy-vjhrf\" (UID: \"3f5434c8-b44b-4a7d-87ae-cea5b9f451c2\") " pod="kube-system/kube-proxy-vjhrf" Feb 13 15:27:49.947758 kubelet[2608]: E0213 15:27:49.947719 2608 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Feb 13 15:27:49.947758 kubelet[2608]: E0213 15:27:49.947754 2608 projected.go:200] Error preparing data for projected volume kube-api-access-m74v9 for pod kube-system/kube-proxy-vjhrf: configmap "kube-root-ca.crt" not found Feb 13 15:27:49.947927 kubelet[2608]: E0213 15:27:49.947812 2608 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3f5434c8-b44b-4a7d-87ae-cea5b9f451c2-kube-api-access-m74v9 podName:3f5434c8-b44b-4a7d-87ae-cea5b9f451c2 nodeName:}" failed. No retries permitted until 2025-02-13 15:27:50.447793218 +0000 UTC m=+17.094794569 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-m74v9" (UniqueName: "kubernetes.io/projected/3f5434c8-b44b-4a7d-87ae-cea5b9f451c2-kube-api-access-m74v9") pod "kube-proxy-vjhrf" (UID: "3f5434c8-b44b-4a7d-87ae-cea5b9f451c2") : configmap "kube-root-ca.crt" not found Feb 13 15:27:50.466664 kubelet[2608]: I0213 15:27:50.465306 2608 topology_manager.go:215] "Topology Admit Handler" podUID="864602cf-8589-49f4-b096-a15dcdddce9c" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-bdp7g" Feb 13 15:27:50.475447 systemd[1]: Created slice kubepods-besteffort-pod864602cf_8589_49f4_b096_a15dcdddce9c.slice - libcontainer container kubepods-besteffort-pod864602cf_8589_49f4_b096_a15dcdddce9c.slice. 
Feb 13 15:27:50.641208 kubelet[2608]: I0213 15:27:50.641161 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/864602cf-8589-49f4-b096-a15dcdddce9c-var-lib-calico\") pod \"tigera-operator-7bc55997bb-bdp7g\" (UID: \"864602cf-8589-49f4-b096-a15dcdddce9c\") " pod="tigera-operator/tigera-operator-7bc55997bb-bdp7g" Feb 13 15:27:50.641208 kubelet[2608]: I0213 15:27:50.641208 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6j99\" (UniqueName: \"kubernetes.io/projected/864602cf-8589-49f4-b096-a15dcdddce9c-kube-api-access-p6j99\") pod \"tigera-operator-7bc55997bb-bdp7g\" (UID: \"864602cf-8589-49f4-b096-a15dcdddce9c\") " pod="tigera-operator/tigera-operator-7bc55997bb-bdp7g" Feb 13 15:27:50.669745 kubelet[2608]: E0213 15:27:50.669713 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:50.674259 containerd[1457]: time="2025-02-13T15:27:50.674209927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vjhrf,Uid:3f5434c8-b44b-4a7d-87ae-cea5b9f451c2,Namespace:kube-system,Attempt:0,}" Feb 13 15:27:50.693530 containerd[1457]: time="2025-02-13T15:27:50.693421176Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:27:50.693530 containerd[1457]: time="2025-02-13T15:27:50.693482030Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:27:50.693530 containerd[1457]: time="2025-02-13T15:27:50.693494233Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:27:50.693807 containerd[1457]: time="2025-02-13T15:27:50.693567170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:27:50.717844 systemd[1]: Started cri-containerd-05df7f23a11b226ee787c7b02b499cbabc91f0b53d6f86846387c510897cf484.scope - libcontainer container 05df7f23a11b226ee787c7b02b499cbabc91f0b53d6f86846387c510897cf484. Feb 13 15:27:50.738500 containerd[1457]: time="2025-02-13T15:27:50.738455019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vjhrf,Uid:3f5434c8-b44b-4a7d-87ae-cea5b9f451c2,Namespace:kube-system,Attempt:0,} returns sandbox id \"05df7f23a11b226ee787c7b02b499cbabc91f0b53d6f86846387c510897cf484\"" Feb 13 15:27:50.740909 kubelet[2608]: E0213 15:27:50.740889 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:50.743144 containerd[1457]: time="2025-02-13T15:27:50.743112318Z" level=info msg="CreateContainer within sandbox \"05df7f23a11b226ee787c7b02b499cbabc91f0b53d6f86846387c510897cf484\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 15:27:50.763161 containerd[1457]: time="2025-02-13T15:27:50.763108946Z" level=info msg="CreateContainer within sandbox \"05df7f23a11b226ee787c7b02b499cbabc91f0b53d6f86846387c510897cf484\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8be6abb1cb36ec84d74f449633eec4e4ba7e9902a0fb880e50bf499a5a06f5af\"" Feb 13 15:27:50.765974 containerd[1457]: time="2025-02-13T15:27:50.765945071Z" level=info msg="StartContainer for \"8be6abb1cb36ec84d74f449633eec4e4ba7e9902a0fb880e50bf499a5a06f5af\"" Feb 13 15:27:50.779702 containerd[1457]: time="2025-02-13T15:27:50.779661271Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:tigera-operator-7bc55997bb-bdp7g,Uid:864602cf-8589-49f4-b096-a15dcdddce9c,Namespace:tigera-operator,Attempt:0,}" Feb 13 15:27:50.793820 systemd[1]: Started cri-containerd-8be6abb1cb36ec84d74f449633eec4e4ba7e9902a0fb880e50bf499a5a06f5af.scope - libcontainer container 8be6abb1cb36ec84d74f449633eec4e4ba7e9902a0fb880e50bf499a5a06f5af. Feb 13 15:27:50.807249 containerd[1457]: time="2025-02-13T15:27:50.806796643Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:27:50.807249 containerd[1457]: time="2025-02-13T15:27:50.806983765Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:27:50.807249 containerd[1457]: time="2025-02-13T15:27:50.807016493Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:27:50.807492 containerd[1457]: time="2025-02-13T15:27:50.807179570Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:27:50.838899 systemd[1]: Started cri-containerd-7cc53f28e3416f614c4dcd5625b195ca5cf5f5685d8181029cf2d25046ecf432.scope - libcontainer container 7cc53f28e3416f614c4dcd5625b195ca5cf5f5685d8181029cf2d25046ecf432. 
Feb 13 15:27:50.842067 containerd[1457]: time="2025-02-13T15:27:50.842010212Z" level=info msg="StartContainer for \"8be6abb1cb36ec84d74f449633eec4e4ba7e9902a0fb880e50bf499a5a06f5af\" returns successfully" Feb 13 15:27:50.906248 containerd[1457]: time="2025-02-13T15:27:50.906186808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-bdp7g,Uid:864602cf-8589-49f4-b096-a15dcdddce9c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"7cc53f28e3416f614c4dcd5625b195ca5cf5f5685d8181029cf2d25046ecf432\"" Feb 13 15:27:50.909963 containerd[1457]: time="2025-02-13T15:27:50.909905254Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Feb 13 15:27:51.496078 kubelet[2608]: E0213 15:27:51.495858 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:51.949767 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2019317307.mount: Deactivated successfully. 
Feb 13 15:27:52.214766 containerd[1457]: time="2025-02-13T15:27:52.214520735Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:52.215280 containerd[1457]: time="2025-02-13T15:27:52.215157946Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19124160" Feb 13 15:27:52.215839 containerd[1457]: time="2025-02-13T15:27:52.215806200Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:52.218553 containerd[1457]: time="2025-02-13T15:27:52.218512479Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:52.223299 containerd[1457]: time="2025-02-13T15:27:52.223102066Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 1.313155483s" Feb 13 15:27:52.223299 containerd[1457]: time="2025-02-13T15:27:52.223135073Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\"" Feb 13 15:27:52.234246 containerd[1457]: time="2025-02-13T15:27:52.234182153Z" level=info msg="CreateContainer within sandbox \"7cc53f28e3416f614c4dcd5625b195ca5cf5f5685d8181029cf2d25046ecf432\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Feb 13 15:27:52.252757 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3670222820.mount: Deactivated successfully. 
Feb 13 15:27:52.255112 containerd[1457]: time="2025-02-13T15:27:52.255063343Z" level=info msg="CreateContainer within sandbox \"7cc53f28e3416f614c4dcd5625b195ca5cf5f5685d8181029cf2d25046ecf432\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"c05366e05d2ef9aaecb2e9194c0cfbec3a1424cef553ca997626e9d4c7451fb4\"" Feb 13 15:27:52.255750 containerd[1457]: time="2025-02-13T15:27:52.255704355Z" level=info msg="StartContainer for \"c05366e05d2ef9aaecb2e9194c0cfbec3a1424cef553ca997626e9d4c7451fb4\"" Feb 13 15:27:52.283807 systemd[1]: Started cri-containerd-c05366e05d2ef9aaecb2e9194c0cfbec3a1424cef553ca997626e9d4c7451fb4.scope - libcontainer container c05366e05d2ef9aaecb2e9194c0cfbec3a1424cef553ca997626e9d4c7451fb4. Feb 13 15:27:52.317394 containerd[1457]: time="2025-02-13T15:27:52.312166609Z" level=info msg="StartContainer for \"c05366e05d2ef9aaecb2e9194c0cfbec3a1424cef553ca997626e9d4c7451fb4\" returns successfully" Feb 13 15:27:52.514536 kubelet[2608]: I0213 15:27:52.514149 2608 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vjhrf" podStartSLOduration=3.5129904180000002 podStartE2EDuration="3.512990418s" podCreationTimestamp="2025-02-13 15:27:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:27:51.521471612 +0000 UTC m=+18.168472923" watchObservedRunningTime="2025-02-13 15:27:52.512990418 +0000 UTC m=+19.159991769" Feb 13 15:27:52.514536 kubelet[2608]: I0213 15:27:52.514265 2608 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-bdp7g" podStartSLOduration=1.19044239 podStartE2EDuration="2.514255919s" podCreationTimestamp="2025-02-13 15:27:50 +0000 UTC" firstStartedPulling="2025-02-13 15:27:50.907740361 +0000 UTC m=+17.554741712" lastFinishedPulling="2025-02-13 15:27:52.23155389 +0000 UTC m=+18.878555241" 
observedRunningTime="2025-02-13 15:27:52.512870713 +0000 UTC m=+19.159872024" watchObservedRunningTime="2025-02-13 15:27:52.514255919 +0000 UTC m=+19.161257270" Feb 13 15:27:56.106672 kubelet[2608]: I0213 15:27:56.103731 2608 topology_manager.go:215] "Topology Admit Handler" podUID="1a15afcc-ab09-4992-81ec-77b48a108e16" podNamespace="calico-system" podName="calico-typha-7767bbffdf-r57fk" Feb 13 15:27:56.117069 systemd[1]: Created slice kubepods-besteffort-pod1a15afcc_ab09_4992_81ec_77b48a108e16.slice - libcontainer container kubepods-besteffort-pod1a15afcc_ab09_4992_81ec_77b48a108e16.slice. Feb 13 15:27:56.172835 kubelet[2608]: I0213 15:27:56.172439 2608 topology_manager.go:215] "Topology Admit Handler" podUID="f95a4514-2be0-4207-a34e-6bb4df18a601" podNamespace="calico-system" podName="calico-node-m55t4" Feb 13 15:27:56.184199 systemd[1]: Created slice kubepods-besteffort-podf95a4514_2be0_4207_a34e_6bb4df18a601.slice - libcontainer container kubepods-besteffort-podf95a4514_2be0_4207_a34e_6bb4df18a601.slice. 
Feb 13 15:27:56.284641 kubelet[2608]: I0213 15:27:56.284140 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f95a4514-2be0-4207-a34e-6bb4df18a601-policysync\") pod \"calico-node-m55t4\" (UID: \"f95a4514-2be0-4207-a34e-6bb4df18a601\") " pod="calico-system/calico-node-m55t4"
Feb 13 15:27:56.284641 kubelet[2608]: I0213 15:27:56.284189 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-426b6\" (UniqueName: \"kubernetes.io/projected/1a15afcc-ab09-4992-81ec-77b48a108e16-kube-api-access-426b6\") pod \"calico-typha-7767bbffdf-r57fk\" (UID: \"1a15afcc-ab09-4992-81ec-77b48a108e16\") " pod="calico-system/calico-typha-7767bbffdf-r57fk"
Feb 13 15:27:56.284641 kubelet[2608]: I0213 15:27:56.284210 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f95a4514-2be0-4207-a34e-6bb4df18a601-flexvol-driver-host\") pod \"calico-node-m55t4\" (UID: \"f95a4514-2be0-4207-a34e-6bb4df18a601\") " pod="calico-system/calico-node-m55t4"
Feb 13 15:27:56.284641 kubelet[2608]: I0213 15:27:56.284227 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk7t4\" (UniqueName: \"kubernetes.io/projected/f95a4514-2be0-4207-a34e-6bb4df18a601-kube-api-access-xk7t4\") pod \"calico-node-m55t4\" (UID: \"f95a4514-2be0-4207-a34e-6bb4df18a601\") " pod="calico-system/calico-node-m55t4"
Feb 13 15:27:56.284641 kubelet[2608]: I0213 15:27:56.284244 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f95a4514-2be0-4207-a34e-6bb4df18a601-node-certs\") pod \"calico-node-m55t4\" (UID: \"f95a4514-2be0-4207-a34e-6bb4df18a601\") " pod="calico-system/calico-node-m55t4"
Feb 13 15:27:56.284882 kubelet[2608]: I0213 15:27:56.284303 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f95a4514-2be0-4207-a34e-6bb4df18a601-var-lib-calico\") pod \"calico-node-m55t4\" (UID: \"f95a4514-2be0-4207-a34e-6bb4df18a601\") " pod="calico-system/calico-node-m55t4"
Feb 13 15:27:56.284882 kubelet[2608]: I0213 15:27:56.284339 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f95a4514-2be0-4207-a34e-6bb4df18a601-cni-net-dir\") pod \"calico-node-m55t4\" (UID: \"f95a4514-2be0-4207-a34e-6bb4df18a601\") " pod="calico-system/calico-node-m55t4"
Feb 13 15:27:56.284882 kubelet[2608]: I0213 15:27:56.284393 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f95a4514-2be0-4207-a34e-6bb4df18a601-tigera-ca-bundle\") pod \"calico-node-m55t4\" (UID: \"f95a4514-2be0-4207-a34e-6bb4df18a601\") " pod="calico-system/calico-node-m55t4"
Feb 13 15:27:56.284882 kubelet[2608]: I0213 15:27:56.284412 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f95a4514-2be0-4207-a34e-6bb4df18a601-cni-bin-dir\") pod \"calico-node-m55t4\" (UID: \"f95a4514-2be0-4207-a34e-6bb4df18a601\") " pod="calico-system/calico-node-m55t4"
Feb 13 15:27:56.284882 kubelet[2608]: I0213 15:27:56.284447 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1a15afcc-ab09-4992-81ec-77b48a108e16-tigera-ca-bundle\") pod \"calico-typha-7767bbffdf-r57fk\" (UID: \"1a15afcc-ab09-4992-81ec-77b48a108e16\") " pod="calico-system/calico-typha-7767bbffdf-r57fk"
Feb 13 15:27:56.284987 kubelet[2608]: I0213 15:27:56.284469 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/1a15afcc-ab09-4992-81ec-77b48a108e16-typha-certs\") pod \"calico-typha-7767bbffdf-r57fk\" (UID: \"1a15afcc-ab09-4992-81ec-77b48a108e16\") " pod="calico-system/calico-typha-7767bbffdf-r57fk"
Feb 13 15:27:56.284987 kubelet[2608]: I0213 15:27:56.284485 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f95a4514-2be0-4207-a34e-6bb4df18a601-lib-modules\") pod \"calico-node-m55t4\" (UID: \"f95a4514-2be0-4207-a34e-6bb4df18a601\") " pod="calico-system/calico-node-m55t4"
Feb 13 15:27:56.284987 kubelet[2608]: I0213 15:27:56.284503 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f95a4514-2be0-4207-a34e-6bb4df18a601-xtables-lock\") pod \"calico-node-m55t4\" (UID: \"f95a4514-2be0-4207-a34e-6bb4df18a601\") " pod="calico-system/calico-node-m55t4"
Feb 13 15:27:56.284987 kubelet[2608]: I0213 15:27:56.284518 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f95a4514-2be0-4207-a34e-6bb4df18a601-var-run-calico\") pod \"calico-node-m55t4\" (UID: \"f95a4514-2be0-4207-a34e-6bb4df18a601\") " pod="calico-system/calico-node-m55t4"
Feb 13 15:27:56.284987 kubelet[2608]: I0213 15:27:56.284533 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f95a4514-2be0-4207-a34e-6bb4df18a601-cni-log-dir\") pod \"calico-node-m55t4\" (UID: \"f95a4514-2be0-4207-a34e-6bb4df18a601\") " pod="calico-system/calico-node-m55t4"
Feb 13 15:27:56.290889 kubelet[2608]: I0213 15:27:56.290790 2608 topology_manager.go:215] "Topology Admit Handler" podUID="8c67a7be-7144-4f11-b45e-f04dfd3de75c" podNamespace="calico-system" podName="csi-node-driver-59ns4"
Feb 13 15:27:56.291264 kubelet[2608]: E0213 15:27:56.291120 2608 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-59ns4" podUID="8c67a7be-7144-4f11-b45e-f04dfd3de75c"
Feb 13 15:27:56.392855 kubelet[2608]: E0213 15:27:56.391716 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:27:56.392855 kubelet[2608]: W0213 15:27:56.391743 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:27:56.392855 kubelet[2608]: E0213 15:27:56.391770 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:27:56.392855 kubelet[2608]: E0213 15:27:56.392055 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:27:56.392855 kubelet[2608]: W0213 15:27:56.392065 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:27:56.392855 kubelet[2608]: E0213 15:27:56.392075 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:27:56.397096 kubelet[2608]: E0213 15:27:56.397071 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:27:56.397096 kubelet[2608]: W0213 15:27:56.397089 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:27:56.397203 kubelet[2608]: E0213 15:27:56.397105 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:27:56.401306 kubelet[2608]: E0213 15:27:56.401285 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:27:56.401461 kubelet[2608]: W0213 15:27:56.401341 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:27:56.401461 kubelet[2608]: E0213 15:27:56.401357 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:27:56.426322 kubelet[2608]: E0213 15:27:56.426194 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:27:56.427862 containerd[1457]: time="2025-02-13T15:27:56.427812471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7767bbffdf-r57fk,Uid:1a15afcc-ab09-4992-81ec-77b48a108e16,Namespace:calico-system,Attempt:0,}"
Feb 13 15:27:56.450542 containerd[1457]: time="2025-02-13T15:27:56.450467119Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:27:56.450542 containerd[1457]: time="2025-02-13T15:27:56.450515928Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:27:56.450542 containerd[1457]: time="2025-02-13T15:27:56.450526930Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:27:56.450784 containerd[1457]: time="2025-02-13T15:27:56.450593621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:27:56.473805 systemd[1]: Started cri-containerd-5308b817cf754fd226a0f3372788e436b1bb8e0830893cf267d9f1b00655e09a.scope - libcontainer container 5308b817cf754fd226a0f3372788e436b1bb8e0830893cf267d9f1b00655e09a.
Feb 13 15:27:56.486833 kubelet[2608]: E0213 15:27:56.486614 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:27:56.486833 kubelet[2608]: W0213 15:27:56.486669 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:27:56.486833 kubelet[2608]: E0213 15:27:56.486694 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:27:56.486833 kubelet[2608]: I0213 15:27:56.486725 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8c67a7be-7144-4f11-b45e-f04dfd3de75c-socket-dir\") pod \"csi-node-driver-59ns4\" (UID: \"8c67a7be-7144-4f11-b45e-f04dfd3de75c\") " pod="calico-system/csi-node-driver-59ns4"
Feb 13 15:27:56.489378 kubelet[2608]: E0213 15:27:56.487042 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:27:56.489378 kubelet[2608]: W0213 15:27:56.487055 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:27:56.489378 kubelet[2608]: E0213 15:27:56.487072 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:27:56.489378 kubelet[2608]: I0213 15:27:56.487089 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8c67a7be-7144-4f11-b45e-f04dfd3de75c-kubelet-dir\") pod \"csi-node-driver-59ns4\" (UID: \"8c67a7be-7144-4f11-b45e-f04dfd3de75c\") " pod="calico-system/csi-node-driver-59ns4"
Feb 13 15:27:56.489378 kubelet[2608]: E0213 15:27:56.487712 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:27:56.489378 kubelet[2608]: E0213 15:27:56.488651 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:27:56.489378 kubelet[2608]: W0213 15:27:56.488673 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:27:56.489378 kubelet[2608]: E0213 15:27:56.488738 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:27:56.490003 containerd[1457]: time="2025-02-13T15:27:56.488155828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-m55t4,Uid:f95a4514-2be0-4207-a34e-6bb4df18a601,Namespace:calico-system,Attempt:0,}"
Feb 13 15:27:56.490167 kubelet[2608]: I0213 15:27:56.488764 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8c67a7be-7144-4f11-b45e-f04dfd3de75c-registration-dir\") pod \"csi-node-driver-59ns4\" (UID: \"8c67a7be-7144-4f11-b45e-f04dfd3de75c\") " pod="calico-system/csi-node-driver-59ns4"
Feb 13 15:27:56.490167 kubelet[2608]: E0213 15:27:56.489064 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:27:56.490167 kubelet[2608]: W0213 15:27:56.489076 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:27:56.490167 kubelet[2608]: E0213 15:27:56.489140 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:27:56.490167 kubelet[2608]: E0213 15:27:56.489480 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:27:56.490167 kubelet[2608]: W0213 15:27:56.489703 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:27:56.490167 kubelet[2608]: E0213 15:27:56.490206 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:27:56.490996 kubelet[2608]: E0213 15:27:56.490917 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:27:56.490996 kubelet[2608]: W0213 15:27:56.490936 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:27:56.491257 kubelet[2608]: E0213 15:27:56.491153 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:27:56.491569 kubelet[2608]: E0213 15:27:56.491391 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:27:56.491569 kubelet[2608]: W0213 15:27:56.491416 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:27:56.491698 kubelet[2608]: E0213 15:27:56.491446 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:27:56.491698 kubelet[2608]: I0213 15:27:56.491610 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/8c67a7be-7144-4f11-b45e-f04dfd3de75c-varrun\") pod \"csi-node-driver-59ns4\" (UID: \"8c67a7be-7144-4f11-b45e-f04dfd3de75c\") " pod="calico-system/csi-node-driver-59ns4"
Feb 13 15:27:56.492526 kubelet[2608]: E0213 15:27:56.492450 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:27:56.492526 kubelet[2608]: W0213 15:27:56.492468 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:27:56.492526 kubelet[2608]: E0213 15:27:56.492481 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:27:56.492763 kubelet[2608]: E0213 15:27:56.492748 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:27:56.492763 kubelet[2608]: W0213 15:27:56.492763 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:27:56.492763 kubelet[2608]: E0213 15:27:56.492780 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:27:56.492964 kubelet[2608]: E0213 15:27:56.492951 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:27:56.492964 kubelet[2608]: W0213 15:27:56.492963 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:27:56.493049 kubelet[2608]: E0213 15:27:56.492975 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:27:56.493049 kubelet[2608]: I0213 15:27:56.493035 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqd4h\" (UniqueName: \"kubernetes.io/projected/8c67a7be-7144-4f11-b45e-f04dfd3de75c-kube-api-access-dqd4h\") pod \"csi-node-driver-59ns4\" (UID: \"8c67a7be-7144-4f11-b45e-f04dfd3de75c\") " pod="calico-system/csi-node-driver-59ns4"
Feb 13 15:27:56.493399 kubelet[2608]: E0213 15:27:56.493234 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:27:56.493399 kubelet[2608]: W0213 15:27:56.493248 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:27:56.493399 kubelet[2608]: E0213 15:27:56.493259 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:27:56.493667 kubelet[2608]: E0213 15:27:56.493646 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:27:56.493834 kubelet[2608]: W0213 15:27:56.493759 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:27:56.493834 kubelet[2608]: E0213 15:27:56.493809 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:27:56.494221 kubelet[2608]: E0213 15:27:56.494206 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:27:56.494424 kubelet[2608]: W0213 15:27:56.494255 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:27:56.494424 kubelet[2608]: E0213 15:27:56.494274 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:27:56.495157 kubelet[2608]: E0213 15:27:56.495102 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:27:56.495332 kubelet[2608]: W0213 15:27:56.495211 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:27:56.495332 kubelet[2608]: E0213 15:27:56.495230 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:27:56.495670 kubelet[2608]: E0213 15:27:56.495653 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:27:56.495829 kubelet[2608]: W0213 15:27:56.495732 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:27:56.495829 kubelet[2608]: E0213 15:27:56.495748 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:27:56.503821 containerd[1457]: time="2025-02-13T15:27:56.503781310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7767bbffdf-r57fk,Uid:1a15afcc-ab09-4992-81ec-77b48a108e16,Namespace:calico-system,Attempt:0,} returns sandbox id \"5308b817cf754fd226a0f3372788e436b1bb8e0830893cf267d9f1b00655e09a\""
Feb 13 15:27:56.504841 kubelet[2608]: E0213 15:27:56.504819 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:27:56.507300 containerd[1457]: time="2025-02-13T15:27:56.507259828Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\""
Feb 13 15:27:56.579512 containerd[1457]: time="2025-02-13T15:27:56.578955134Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:27:56.579512 containerd[1457]: time="2025-02-13T15:27:56.579467342Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:27:56.579512 containerd[1457]: time="2025-02-13T15:27:56.579480464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:27:56.579770 containerd[1457]: time="2025-02-13T15:27:56.579569519Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:27:56.594119 kubelet[2608]: E0213 15:27:56.593948 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:27:56.594119 kubelet[2608]: W0213 15:27:56.593970 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:27:56.594119 kubelet[2608]: E0213 15:27:56.593991 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:27:56.594877 kubelet[2608]: E0213 15:27:56.594672 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:27:56.594877 kubelet[2608]: W0213 15:27:56.594688 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:27:56.594877 kubelet[2608]: E0213 15:27:56.594709 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:27:56.595097 kubelet[2608]: E0213 15:27:56.595045 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:27:56.595097 kubelet[2608]: W0213 15:27:56.595058 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:27:56.595097 kubelet[2608]: E0213 15:27:56.595078 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:27:56.595380 kubelet[2608]: E0213 15:27:56.595292 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:27:56.595380 kubelet[2608]: W0213 15:27:56.595310 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:27:56.595380 kubelet[2608]: E0213 15:27:56.595328 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:27:56.595503 kubelet[2608]: E0213 15:27:56.595488 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:27:56.595503 kubelet[2608]: W0213 15:27:56.595502 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:27:56.595732 kubelet[2608]: E0213 15:27:56.595515 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:27:56.597912 kubelet[2608]: E0213 15:27:56.595791 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:27:56.597912 kubelet[2608]: W0213 15:27:56.595806 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:27:56.598043 kubelet[2608]: E0213 15:27:56.598025 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:27:56.598366 kubelet[2608]: E0213 15:27:56.598350 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:27:56.598446 kubelet[2608]: W0213 15:27:56.598433 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:27:56.598506 kubelet[2608]: E0213 15:27:56.598495 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:27:56.598818 kubelet[2608]: E0213 15:27:56.598802 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:27:56.598911 kubelet[2608]: W0213 15:27:56.598898 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:27:56.598996 kubelet[2608]: E0213 15:27:56.598984 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:27:56.599261 kubelet[2608]: E0213 15:27:56.599245 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:27:56.599337 kubelet[2608]: W0213 15:27:56.599325 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:27:56.599400 kubelet[2608]: E0213 15:27:56.599389 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:27:56.599638 kubelet[2608]: E0213 15:27:56.599608 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:27:56.599704 kubelet[2608]: W0213 15:27:56.599693 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:27:56.599777 kubelet[2608]: E0213 15:27:56.599764 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:27:56.600000 kubelet[2608]: E0213 15:27:56.599987 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:27:56.600102 kubelet[2608]: W0213 15:27:56.600089 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:27:56.600159 kubelet[2608]: E0213 15:27:56.600149 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:27:56.600393 kubelet[2608]: E0213 15:27:56.600378 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:27:56.600466 kubelet[2608]: W0213 15:27:56.600453 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:27:56.600529 kubelet[2608]: E0213 15:27:56.600518 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:27:56.600829 systemd[1]: Started cri-containerd-d6c73f0abcc32395de12341c2bc31a7bc0d667286ec5f4e24119796f08c18518.scope - libcontainer container d6c73f0abcc32395de12341c2bc31a7bc0d667286ec5f4e24119796f08c18518.
Feb 13 15:27:56.601042 kubelet[2608]: E0213 15:27:56.601025 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:56.601042 kubelet[2608]: W0213 15:27:56.601040 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:56.601144 kubelet[2608]: E0213 15:27:56.601058 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:27:56.601275 kubelet[2608]: E0213 15:27:56.601256 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:56.601275 kubelet[2608]: W0213 15:27:56.601271 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:56.601332 kubelet[2608]: E0213 15:27:56.601282 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:27:56.601649 kubelet[2608]: E0213 15:27:56.601576 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:56.601649 kubelet[2608]: W0213 15:27:56.601591 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:56.601649 kubelet[2608]: E0213 15:27:56.601607 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:27:56.602338 kubelet[2608]: E0213 15:27:56.602198 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:56.602338 kubelet[2608]: W0213 15:27:56.602213 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:56.602338 kubelet[2608]: E0213 15:27:56.602230 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:27:56.604367 kubelet[2608]: E0213 15:27:56.603797 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:56.604367 kubelet[2608]: W0213 15:27:56.604304 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:56.604528 kubelet[2608]: E0213 15:27:56.604487 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:27:56.604848 kubelet[2608]: E0213 15:27:56.604789 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:56.604848 kubelet[2608]: W0213 15:27:56.604803 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:56.605044 kubelet[2608]: E0213 15:27:56.604968 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:27:56.605176 kubelet[2608]: E0213 15:27:56.605164 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:56.605324 kubelet[2608]: W0213 15:27:56.605220 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:56.605556 kubelet[2608]: E0213 15:27:56.605445 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:27:56.605802 kubelet[2608]: E0213 15:27:56.605768 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:56.605906 kubelet[2608]: W0213 15:27:56.605885 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:56.609627 kubelet[2608]: E0213 15:27:56.608654 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:27:56.609934 kubelet[2608]: E0213 15:27:56.609914 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:56.609934 kubelet[2608]: W0213 15:27:56.609931 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:56.610012 kubelet[2608]: E0213 15:27:56.609978 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:27:56.610126 kubelet[2608]: E0213 15:27:56.610113 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:56.610126 kubelet[2608]: W0213 15:27:56.610124 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:56.610177 kubelet[2608]: E0213 15:27:56.610138 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:27:56.610586 kubelet[2608]: E0213 15:27:56.610569 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:56.610586 kubelet[2608]: W0213 15:27:56.610584 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:56.610687 kubelet[2608]: E0213 15:27:56.610604 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:27:56.610839 kubelet[2608]: E0213 15:27:56.610824 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:56.610871 kubelet[2608]: W0213 15:27:56.610841 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:56.610871 kubelet[2608]: E0213 15:27:56.610852 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:27:56.611341 kubelet[2608]: E0213 15:27:56.611319 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:56.611341 kubelet[2608]: W0213 15:27:56.611334 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:56.611426 kubelet[2608]: E0213 15:27:56.611347 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:27:56.617099 kubelet[2608]: E0213 15:27:56.616996 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:56.617778 kubelet[2608]: W0213 15:27:56.617649 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:56.617778 kubelet[2608]: E0213 15:27:56.617674 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:27:56.655796 containerd[1457]: time="2025-02-13T15:27:56.655692905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-m55t4,Uid:f95a4514-2be0-4207-a34e-6bb4df18a601,Namespace:calico-system,Attempt:0,} returns sandbox id \"d6c73f0abcc32395de12341c2bc31a7bc0d667286ec5f4e24119796f08c18518\"" Feb 13 15:27:56.656644 kubelet[2608]: E0213 15:27:56.656597 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:57.658251 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount491778636.mount: Deactivated successfully. 
Feb 13 15:27:57.983354 containerd[1457]: time="2025-02-13T15:27:57.983042333Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:57.983767 containerd[1457]: time="2025-02-13T15:27:57.983696881Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29231308" Feb 13 15:27:57.984671 containerd[1457]: time="2025-02-13T15:27:57.984637476Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:57.986839 containerd[1457]: time="2025-02-13T15:27:57.986806672Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:57.987502 containerd[1457]: time="2025-02-13T15:27:57.987471181Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 1.480172827s" Feb 13 15:27:57.987552 containerd[1457]: time="2025-02-13T15:27:57.987503266Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\"" Feb 13 15:27:57.990400 containerd[1457]: time="2025-02-13T15:27:57.990251278Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Feb 13 15:27:57.999427 containerd[1457]: time="2025-02-13T15:27:57.999388179Z" level=info msg="CreateContainer within sandbox \"5308b817cf754fd226a0f3372788e436b1bb8e0830893cf267d9f1b00655e09a\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Feb 13 15:27:58.008508 containerd[1457]: time="2025-02-13T15:27:58.008465863Z" level=info msg="CreateContainer within sandbox \"5308b817cf754fd226a0f3372788e436b1bb8e0830893cf267d9f1b00655e09a\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"356c22361a910446bb27686b03e43a157e429ae9eabce1f66d369ec6ff70fdf7\"" Feb 13 15:27:58.009195 containerd[1457]: time="2025-02-13T15:27:58.009146090Z" level=info msg="StartContainer for \"356c22361a910446bb27686b03e43a157e429ae9eabce1f66d369ec6ff70fdf7\"" Feb 13 15:27:58.042837 systemd[1]: Started cri-containerd-356c22361a910446bb27686b03e43a157e429ae9eabce1f66d369ec6ff70fdf7.scope - libcontainer container 356c22361a910446bb27686b03e43a157e429ae9eabce1f66d369ec6ff70fdf7. Feb 13 15:27:58.079089 containerd[1457]: time="2025-02-13T15:27:58.077279451Z" level=info msg="StartContainer for \"356c22361a910446bb27686b03e43a157e429ae9eabce1f66d369ec6ff70fdf7\" returns successfully" Feb 13 15:27:58.424134 kubelet[2608]: E0213 15:27:58.423750 2608 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-59ns4" podUID="8c67a7be-7144-4f11-b45e-f04dfd3de75c" Feb 13 15:27:58.519011 kubelet[2608]: E0213 15:27:58.518976 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:58.536543 kubelet[2608]: I0213 15:27:58.536476 2608 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7767bbffdf-r57fk" podStartSLOduration=1.054922333 podStartE2EDuration="2.536455745s" podCreationTimestamp="2025-02-13 15:27:56 +0000 UTC" firstStartedPulling="2025-02-13 15:27:56.506992502 +0000 UTC 
m=+23.153993853" lastFinishedPulling="2025-02-13 15:27:57.988525954 +0000 UTC m=+24.635527265" observedRunningTime="2025-02-13 15:27:58.535793001 +0000 UTC m=+25.182794352" watchObservedRunningTime="2025-02-13 15:27:58.536455745 +0000 UTC m=+25.183457096" Feb 13 15:27:58.604656 kubelet[2608]: E0213 15:27:58.604527 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:58.604656 kubelet[2608]: W0213 15:27:58.604552 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:58.604656 kubelet[2608]: E0213 15:27:58.604573 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:27:58.604844 kubelet[2608]: E0213 15:27:58.604788 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:58.604844 kubelet[2608]: W0213 15:27:58.604796 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:58.604844 kubelet[2608]: E0213 15:27:58.604805 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:27:58.604995 kubelet[2608]: E0213 15:27:58.604972 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:58.604995 kubelet[2608]: W0213 15:27:58.604983 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:58.604995 kubelet[2608]: E0213 15:27:58.604991 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:27:58.605136 kubelet[2608]: E0213 15:27:58.605119 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:58.605136 kubelet[2608]: W0213 15:27:58.605130 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:58.605183 kubelet[2608]: E0213 15:27:58.605137 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:27:58.605301 kubelet[2608]: E0213 15:27:58.605283 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:58.605301 kubelet[2608]: W0213 15:27:58.605293 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:58.605301 kubelet[2608]: E0213 15:27:58.605301 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:27:58.605445 kubelet[2608]: E0213 15:27:58.605435 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:58.605471 kubelet[2608]: W0213 15:27:58.605447 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:58.605471 kubelet[2608]: E0213 15:27:58.605455 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:27:58.605641 kubelet[2608]: E0213 15:27:58.605614 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:58.605641 kubelet[2608]: W0213 15:27:58.605639 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:58.605693 kubelet[2608]: E0213 15:27:58.605648 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:27:58.605790 kubelet[2608]: E0213 15:27:58.605779 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:58.605814 kubelet[2608]: W0213 15:27:58.605789 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:58.605814 kubelet[2608]: E0213 15:27:58.605797 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:27:58.605971 kubelet[2608]: E0213 15:27:58.605958 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:58.605971 kubelet[2608]: W0213 15:27:58.605967 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:58.606028 kubelet[2608]: E0213 15:27:58.605975 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:27:58.606102 kubelet[2608]: E0213 15:27:58.606091 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:58.606102 kubelet[2608]: W0213 15:27:58.606100 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:58.606149 kubelet[2608]: E0213 15:27:58.606107 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:27:58.606237 kubelet[2608]: E0213 15:27:58.606227 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:58.606265 kubelet[2608]: W0213 15:27:58.606239 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:58.606265 kubelet[2608]: E0213 15:27:58.606247 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:27:58.606380 kubelet[2608]: E0213 15:27:58.606370 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:58.606405 kubelet[2608]: W0213 15:27:58.606380 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:58.606405 kubelet[2608]: E0213 15:27:58.606387 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:27:58.606518 kubelet[2608]: E0213 15:27:58.606508 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:58.606541 kubelet[2608]: W0213 15:27:58.606517 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:58.606541 kubelet[2608]: E0213 15:27:58.606525 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:27:58.606691 kubelet[2608]: E0213 15:27:58.606680 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:58.606691 kubelet[2608]: W0213 15:27:58.606690 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:58.606738 kubelet[2608]: E0213 15:27:58.606698 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:27:58.606853 kubelet[2608]: E0213 15:27:58.606844 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:58.606877 kubelet[2608]: W0213 15:27:58.606853 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:58.606877 kubelet[2608]: E0213 15:27:58.606863 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:27:58.614283 kubelet[2608]: E0213 15:27:58.614247 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:58.614283 kubelet[2608]: W0213 15:27:58.614268 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:58.614283 kubelet[2608]: E0213 15:27:58.614286 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:27:58.614520 kubelet[2608]: E0213 15:27:58.614494 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:58.614520 kubelet[2608]: W0213 15:27:58.614507 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:58.614569 kubelet[2608]: E0213 15:27:58.614524 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:27:58.614735 kubelet[2608]: E0213 15:27:58.614712 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:58.614735 kubelet[2608]: W0213 15:27:58.614723 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:58.614797 kubelet[2608]: E0213 15:27:58.614737 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:27:58.614949 kubelet[2608]: E0213 15:27:58.614937 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:58.614949 kubelet[2608]: W0213 15:27:58.614948 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:58.615006 kubelet[2608]: E0213 15:27:58.614960 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:27:58.615161 kubelet[2608]: E0213 15:27:58.615134 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:58.615161 kubelet[2608]: W0213 15:27:58.615146 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:58.615161 kubelet[2608]: E0213 15:27:58.615159 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:27:58.615384 kubelet[2608]: E0213 15:27:58.615371 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:58.615384 kubelet[2608]: W0213 15:27:58.615383 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:58.615440 kubelet[2608]: E0213 15:27:58.615396 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:27:58.615790 kubelet[2608]: E0213 15:27:58.615774 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:58.615831 kubelet[2608]: W0213 15:27:58.615791 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:58.615860 kubelet[2608]: E0213 15:27:58.615831 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:27:58.616000 kubelet[2608]: E0213 15:27:58.615987 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:58.616000 kubelet[2608]: W0213 15:27:58.616000 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:58.616173 kubelet[2608]: E0213 15:27:58.616062 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:27:58.616173 kubelet[2608]: E0213 15:27:58.616172 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:58.616251 kubelet[2608]: W0213 15:27:58.616180 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:58.616251 kubelet[2608]: E0213 15:27:58.616200 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:27:58.616390 kubelet[2608]: E0213 15:27:58.616373 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:58.616390 kubelet[2608]: W0213 15:27:58.616384 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:58.616442 kubelet[2608]: E0213 15:27:58.616398 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:27:58.616558 kubelet[2608]: E0213 15:27:58.616547 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:58.616558 kubelet[2608]: W0213 15:27:58.616557 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:58.616608 kubelet[2608]: E0213 15:27:58.616569 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:27:58.616751 kubelet[2608]: E0213 15:27:58.616740 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:58.616787 kubelet[2608]: W0213 15:27:58.616751 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:58.616787 kubelet[2608]: E0213 15:27:58.616765 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:27:58.617003 kubelet[2608]: E0213 15:27:58.616987 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:58.617003 kubelet[2608]: W0213 15:27:58.617003 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:58.617066 kubelet[2608]: E0213 15:27:58.617019 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:27:58.617213 kubelet[2608]: E0213 15:27:58.617197 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:58.617247 kubelet[2608]: W0213 15:27:58.617212 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:58.617247 kubelet[2608]: E0213 15:27:58.617228 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:27:58.617396 kubelet[2608]: E0213 15:27:58.617384 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:58.617396 kubelet[2608]: W0213 15:27:58.617396 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:58.617443 kubelet[2608]: E0213 15:27:58.617408 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:27:58.617627 kubelet[2608]: E0213 15:27:58.617607 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:58.617667 kubelet[2608]: W0213 15:27:58.617646 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:58.617667 kubelet[2608]: E0213 15:27:58.617663 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:27:58.617861 kubelet[2608]: E0213 15:27:58.617848 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:58.617893 kubelet[2608]: W0213 15:27:58.617860 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:58.617893 kubelet[2608]: E0213 15:27:58.617875 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:27:58.618447 kubelet[2608]: E0213 15:27:58.618429 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:27:58.618447 kubelet[2608]: W0213 15:27:58.618444 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:27:58.618514 kubelet[2608]: E0213 15:27:58.618456 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:27:59.336946 containerd[1457]: time="2025-02-13T15:27:59.336175178Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:59.337336 containerd[1457]: time="2025-02-13T15:27:59.337044989Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5117811" Feb 13 15:27:59.338116 containerd[1457]: time="2025-02-13T15:27:59.338086506Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:59.340000 containerd[1457]: time="2025-02-13T15:27:59.339961709Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:59.340795 containerd[1457]: time="2025-02-13T15:27:59.340760510Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.350476507s" Feb 13 15:27:59.340843 containerd[1457]: time="2025-02-13T15:27:59.340796275Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Feb 13 15:27:59.342901 containerd[1457]: time="2025-02-13T15:27:59.342871268Z" level=info msg="CreateContainer within sandbox \"d6c73f0abcc32395de12341c2bc31a7bc0d667286ec5f4e24119796f08c18518\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 13 15:27:59.352172 containerd[1457]: time="2025-02-13T15:27:59.352115783Z" level=info msg="CreateContainer within sandbox \"d6c73f0abcc32395de12341c2bc31a7bc0d667286ec5f4e24119796f08c18518\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"2cb9d2ffef8d1d3b9a8bad794fd85b46cd17bacfc6f798575d5859e7fa6e2deb\"" Feb 13 15:27:59.352758 containerd[1457]: time="2025-02-13T15:27:59.352578653Z" level=info msg="StartContainer for \"2cb9d2ffef8d1d3b9a8bad794fd85b46cd17bacfc6f798575d5859e7fa6e2deb\"" Feb 13 15:27:59.394845 systemd[1]: Started cri-containerd-2cb9d2ffef8d1d3b9a8bad794fd85b46cd17bacfc6f798575d5859e7fa6e2deb.scope - libcontainer container 2cb9d2ffef8d1d3b9a8bad794fd85b46cd17bacfc6f798575d5859e7fa6e2deb. Feb 13 15:27:59.425872 containerd[1457]: time="2025-02-13T15:27:59.425817903Z" level=info msg="StartContainer for \"2cb9d2ffef8d1d3b9a8bad794fd85b46cd17bacfc6f798575d5859e7fa6e2deb\" returns successfully" Feb 13 15:27:59.474046 systemd[1]: cri-containerd-2cb9d2ffef8d1d3b9a8bad794fd85b46cd17bacfc6f798575d5859e7fa6e2deb.scope: Deactivated successfully. 
Feb 13 15:27:59.494288 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2cb9d2ffef8d1d3b9a8bad794fd85b46cd17bacfc6f798575d5859e7fa6e2deb-rootfs.mount: Deactivated successfully. Feb 13 15:27:59.505271 containerd[1457]: time="2025-02-13T15:27:59.500595505Z" level=info msg="shim disconnected" id=2cb9d2ffef8d1d3b9a8bad794fd85b46cd17bacfc6f798575d5859e7fa6e2deb namespace=k8s.io Feb 13 15:27:59.505271 containerd[1457]: time="2025-02-13T15:27:59.505098184Z" level=warning msg="cleaning up after shim disconnected" id=2cb9d2ffef8d1d3b9a8bad794fd85b46cd17bacfc6f798575d5859e7fa6e2deb namespace=k8s.io Feb 13 15:27:59.505271 containerd[1457]: time="2025-02-13T15:27:59.505111786Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:27:59.521498 kubelet[2608]: E0213 15:27:59.521441 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:59.522968 kubelet[2608]: I0213 15:27:59.522907 2608 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 15:27:59.523687 kubelet[2608]: E0213 15:27:59.523605 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:59.523817 containerd[1457]: time="2025-02-13T15:27:59.523796125Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Feb 13 15:28:00.424480 kubelet[2608]: E0213 15:28:00.424396 2608 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-59ns4" podUID="8c67a7be-7144-4f11-b45e-f04dfd3de75c" Feb 13 15:28:02.423917 kubelet[2608]: E0213 15:28:02.423867 2608 pod_workers.go:1298] "Error syncing pod, 
skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-59ns4" podUID="8c67a7be-7144-4f11-b45e-f04dfd3de75c" Feb 13 15:28:03.228112 containerd[1457]: time="2025-02-13T15:28:03.228015010Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:03.228643 containerd[1457]: time="2025-02-13T15:28:03.228579123Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Feb 13 15:28:03.232351 containerd[1457]: time="2025-02-13T15:28:03.230288423Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:03.232746 containerd[1457]: time="2025-02-13T15:28:03.232719736Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:03.233895 containerd[1457]: time="2025-02-13T15:28:03.233843241Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 3.71001603s" Feb 13 15:28:03.233895 containerd[1457]: time="2025-02-13T15:28:03.233898408Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Feb 13 15:28:03.236980 systemd[1]: Started sshd@7-10.0.0.91:22-10.0.0.1:49524.service - OpenSSH 
per-connection server daemon (10.0.0.1:49524). Feb 13 15:28:03.240808 containerd[1457]: time="2025-02-13T15:28:03.240673920Z" level=info msg="CreateContainer within sandbox \"d6c73f0abcc32395de12341c2bc31a7bc0d667286ec5f4e24119796f08c18518\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 15:28:03.255374 containerd[1457]: time="2025-02-13T15:28:03.255323606Z" level=info msg="CreateContainer within sandbox \"d6c73f0abcc32395de12341c2bc31a7bc0d667286ec5f4e24119796f08c18518\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"24a31b7155731d8d8e6861a05bdfbe11ba81067ddc2bae2bfdbf1886e7d965ac\"" Feb 13 15:28:03.257736 containerd[1457]: time="2025-02-13T15:28:03.257489485Z" level=info msg="StartContainer for \"24a31b7155731d8d8e6861a05bdfbe11ba81067ddc2bae2bfdbf1886e7d965ac\"" Feb 13 15:28:03.301030 sshd[3301]: Accepted publickey for core from 10.0.0.1 port 49524 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:28:03.302364 sshd-session[3301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:28:03.305819 systemd[1]: Started cri-containerd-24a31b7155731d8d8e6861a05bdfbe11ba81067ddc2bae2bfdbf1886e7d965ac.scope - libcontainer container 24a31b7155731d8d8e6861a05bdfbe11ba81067ddc2bae2bfdbf1886e7d965ac. Feb 13 15:28:03.310149 systemd-logind[1422]: New session 8 of user core. Feb 13 15:28:03.318798 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 15:28:03.440678 containerd[1457]: time="2025-02-13T15:28:03.440519171Z" level=info msg="StartContainer for \"24a31b7155731d8d8e6861a05bdfbe11ba81067ddc2bae2bfdbf1886e7d965ac\" returns successfully" Feb 13 15:28:03.469179 sshd[3327]: Connection closed by 10.0.0.1 port 49524 Feb 13 15:28:03.471144 sshd-session[3301]: pam_unix(sshd:session): session closed for user core Feb 13 15:28:03.474442 systemd[1]: sshd@7-10.0.0.91:22-10.0.0.1:49524.service: Deactivated successfully. 
Feb 13 15:28:03.477013 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 15:28:03.479411 systemd-logind[1422]: Session 8 logged out. Waiting for processes to exit. Feb 13 15:28:03.480927 systemd-logind[1422]: Removed session 8. Feb 13 15:28:03.534719 kubelet[2608]: E0213 15:28:03.534670 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:03.961016 systemd[1]: cri-containerd-24a31b7155731d8d8e6861a05bdfbe11ba81067ddc2bae2bfdbf1886e7d965ac.scope: Deactivated successfully. Feb 13 15:28:03.976056 kubelet[2608]: I0213 15:28:03.975999 2608 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 15:28:03.980841 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-24a31b7155731d8d8e6861a05bdfbe11ba81067ddc2bae2bfdbf1886e7d965ac-rootfs.mount: Deactivated successfully. Feb 13 15:28:03.991447 containerd[1457]: time="2025-02-13T15:28:03.991246038Z" level=info msg="shim disconnected" id=24a31b7155731d8d8e6861a05bdfbe11ba81067ddc2bae2bfdbf1886e7d965ac namespace=k8s.io Feb 13 15:28:03.991447 containerd[1457]: time="2025-02-13T15:28:03.991296565Z" level=warning msg="cleaning up after shim disconnected" id=24a31b7155731d8d8e6861a05bdfbe11ba81067ddc2bae2bfdbf1886e7d965ac namespace=k8s.io Feb 13 15:28:03.991447 containerd[1457]: time="2025-02-13T15:28:03.991307206Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:28:04.002648 kubelet[2608]: I0213 15:28:04.002548 2608 topology_manager.go:215] "Topology Admit Handler" podUID="859d37ac-44c9-4b92-854b-e6ca0540dbd1" podNamespace="kube-system" podName="coredns-7db6d8ff4d-vhgmw" Feb 13 15:28:04.006643 kubelet[2608]: I0213 15:28:04.006483 2608 topology_manager.go:215] "Topology Admit Handler" podUID="7201f0b7-6fae-4b37-8849-0e2e56956168" podNamespace="calico-system" podName="calico-kube-controllers-ccbb9dcd9-2n9js" Feb 13 
15:28:04.021051 kubelet[2608]: I0213 15:28:04.019734 2608 topology_manager.go:215] "Topology Admit Handler" podUID="2ce64576-1ac8-4271-89c9-a8de4b77d706" podNamespace="calico-apiserver" podName="calico-apiserver-5b5dbfc55b-xsmtx" Feb 13 15:28:04.024672 kubelet[2608]: I0213 15:28:04.023311 2608 topology_manager.go:215] "Topology Admit Handler" podUID="d26de4e2-c62e-4d8a-96e0-edbb9492094a" podNamespace="kube-system" podName="coredns-7db6d8ff4d-j2fhf" Feb 13 15:28:04.024672 kubelet[2608]: I0213 15:28:04.023480 2608 topology_manager.go:215] "Topology Admit Handler" podUID="086f2ebd-d6e8-46e2-831d-0f37b85724a2" podNamespace="calico-apiserver" podName="calico-apiserver-5b5dbfc55b-kt4sn" Feb 13 15:28:04.022279 systemd[1]: Created slice kubepods-burstable-pod859d37ac_44c9_4b92_854b_e6ca0540dbd1.slice - libcontainer container kubepods-burstable-pod859d37ac_44c9_4b92_854b_e6ca0540dbd1.slice. Feb 13 15:28:04.034587 systemd[1]: Created slice kubepods-besteffort-pod7201f0b7_6fae_4b37_8849_0e2e56956168.slice - libcontainer container kubepods-besteffort-pod7201f0b7_6fae_4b37_8849_0e2e56956168.slice. Feb 13 15:28:04.038374 systemd[1]: Created slice kubepods-besteffort-pod2ce64576_1ac8_4271_89c9_a8de4b77d706.slice - libcontainer container kubepods-besteffort-pod2ce64576_1ac8_4271_89c9_a8de4b77d706.slice. Feb 13 15:28:04.043178 systemd[1]: Created slice kubepods-besteffort-pod086f2ebd_d6e8_46e2_831d_0f37b85724a2.slice - libcontainer container kubepods-besteffort-pod086f2ebd_d6e8_46e2_831d_0f37b85724a2.slice. 
Feb 13 15:28:04.057105 kubelet[2608]: I0213 15:28:04.057059 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/859d37ac-44c9-4b92-854b-e6ca0540dbd1-config-volume\") pod \"coredns-7db6d8ff4d-vhgmw\" (UID: \"859d37ac-44c9-4b92-854b-e6ca0540dbd1\") " pod="kube-system/coredns-7db6d8ff4d-vhgmw" Feb 13 15:28:04.057105 kubelet[2608]: I0213 15:28:04.057108 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4p42\" (UniqueName: \"kubernetes.io/projected/859d37ac-44c9-4b92-854b-e6ca0540dbd1-kube-api-access-k4p42\") pod \"coredns-7db6d8ff4d-vhgmw\" (UID: \"859d37ac-44c9-4b92-854b-e6ca0540dbd1\") " pod="kube-system/coredns-7db6d8ff4d-vhgmw" Feb 13 15:28:04.057251 kubelet[2608]: I0213 15:28:04.057131 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sr2sv\" (UniqueName: \"kubernetes.io/projected/086f2ebd-d6e8-46e2-831d-0f37b85724a2-kube-api-access-sr2sv\") pod \"calico-apiserver-5b5dbfc55b-kt4sn\" (UID: \"086f2ebd-d6e8-46e2-831d-0f37b85724a2\") " pod="calico-apiserver/calico-apiserver-5b5dbfc55b-kt4sn" Feb 13 15:28:04.057251 kubelet[2608]: I0213 15:28:04.057150 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smw57\" (UniqueName: \"kubernetes.io/projected/2ce64576-1ac8-4271-89c9-a8de4b77d706-kube-api-access-smw57\") pod \"calico-apiserver-5b5dbfc55b-xsmtx\" (UID: \"2ce64576-1ac8-4271-89c9-a8de4b77d706\") " pod="calico-apiserver/calico-apiserver-5b5dbfc55b-xsmtx" Feb 13 15:28:04.057251 kubelet[2608]: I0213 15:28:04.057168 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qw5qp\" (UniqueName: \"kubernetes.io/projected/d26de4e2-c62e-4d8a-96e0-edbb9492094a-kube-api-access-qw5qp\") pod 
\"coredns-7db6d8ff4d-j2fhf\" (UID: \"d26de4e2-c62e-4d8a-96e0-edbb9492094a\") " pod="kube-system/coredns-7db6d8ff4d-j2fhf" Feb 13 15:28:04.057251 kubelet[2608]: I0213 15:28:04.057187 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7201f0b7-6fae-4b37-8849-0e2e56956168-tigera-ca-bundle\") pod \"calico-kube-controllers-ccbb9dcd9-2n9js\" (UID: \"7201f0b7-6fae-4b37-8849-0e2e56956168\") " pod="calico-system/calico-kube-controllers-ccbb9dcd9-2n9js" Feb 13 15:28:04.057251 kubelet[2608]: I0213 15:28:04.057206 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/086f2ebd-d6e8-46e2-831d-0f37b85724a2-calico-apiserver-certs\") pod \"calico-apiserver-5b5dbfc55b-kt4sn\" (UID: \"086f2ebd-d6e8-46e2-831d-0f37b85724a2\") " pod="calico-apiserver/calico-apiserver-5b5dbfc55b-kt4sn" Feb 13 15:28:04.057362 kubelet[2608]: I0213 15:28:04.057223 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2ce64576-1ac8-4271-89c9-a8de4b77d706-calico-apiserver-certs\") pod \"calico-apiserver-5b5dbfc55b-xsmtx\" (UID: \"2ce64576-1ac8-4271-89c9-a8de4b77d706\") " pod="calico-apiserver/calico-apiserver-5b5dbfc55b-xsmtx" Feb 13 15:28:04.057362 kubelet[2608]: I0213 15:28:04.057240 2608 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mkgb\" (UniqueName: \"kubernetes.io/projected/7201f0b7-6fae-4b37-8849-0e2e56956168-kube-api-access-7mkgb\") pod \"calico-kube-controllers-ccbb9dcd9-2n9js\" (UID: \"7201f0b7-6fae-4b37-8849-0e2e56956168\") " pod="calico-system/calico-kube-controllers-ccbb9dcd9-2n9js" Feb 13 15:28:04.057362 kubelet[2608]: I0213 15:28:04.057257 2608 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d26de4e2-c62e-4d8a-96e0-edbb9492094a-config-volume\") pod \"coredns-7db6d8ff4d-j2fhf\" (UID: \"d26de4e2-c62e-4d8a-96e0-edbb9492094a\") " pod="kube-system/coredns-7db6d8ff4d-j2fhf" Feb 13 15:28:04.091806 systemd[1]: Created slice kubepods-burstable-podd26de4e2_c62e_4d8a_96e0_edbb9492094a.slice - libcontainer container kubepods-burstable-podd26de4e2_c62e_4d8a_96e0_edbb9492094a.slice. Feb 13 15:28:04.332443 kubelet[2608]: E0213 15:28:04.332327 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:04.333349 containerd[1457]: time="2025-02-13T15:28:04.333312798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vhgmw,Uid:859d37ac-44c9-4b92-854b-e6ca0540dbd1,Namespace:kube-system,Attempt:0,}" Feb 13 15:28:04.337903 containerd[1457]: time="2025-02-13T15:28:04.337870084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-ccbb9dcd9-2n9js,Uid:7201f0b7-6fae-4b37-8849-0e2e56956168,Namespace:calico-system,Attempt:0,}" Feb 13 15:28:04.342095 containerd[1457]: time="2025-02-13T15:28:04.341829975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b5dbfc55b-xsmtx,Uid:2ce64576-1ac8-4271-89c9-a8de4b77d706,Namespace:calico-apiserver,Attempt:0,}" Feb 13 15:28:04.379665 containerd[1457]: time="2025-02-13T15:28:04.378865130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b5dbfc55b-kt4sn,Uid:086f2ebd-d6e8-46e2-831d-0f37b85724a2,Namespace:calico-apiserver,Attempt:0,}" Feb 13 15:28:04.400888 kubelet[2608]: E0213 15:28:04.400570 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 
15:28:04.408200 containerd[1457]: time="2025-02-13T15:28:04.406870844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-j2fhf,Uid:d26de4e2-c62e-4d8a-96e0-edbb9492094a,Namespace:kube-system,Attempt:0,}" Feb 13 15:28:04.433764 systemd[1]: Created slice kubepods-besteffort-pod8c67a7be_7144_4f11_b45e_f04dfd3de75c.slice - libcontainer container kubepods-besteffort-pod8c67a7be_7144_4f11_b45e_f04dfd3de75c.slice. Feb 13 15:28:04.443538 containerd[1457]: time="2025-02-13T15:28:04.443502988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-59ns4,Uid:8c67a7be-7144-4f11-b45e-f04dfd3de75c,Namespace:calico-system,Attempt:0,}" Feb 13 15:28:04.565472 kubelet[2608]: E0213 15:28:04.564961 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:04.574479 containerd[1457]: time="2025-02-13T15:28:04.573646054Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Feb 13 15:28:04.673066 containerd[1457]: time="2025-02-13T15:28:04.673009821Z" level=error msg="Failed to destroy network for sandbox \"77a75dac9c9f8165f19ab22aa953024a5e1c10959673b5e0528dc465b8c7c35e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:04.676934 containerd[1457]: time="2025-02-13T15:28:04.676613028Z" level=error msg="Failed to destroy network for sandbox \"262a4443b576ec39fe8e513ab34a3dc1609fc6bb9ac7e9f491ab2ed971993fd6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:04.677132 containerd[1457]: time="2025-02-13T15:28:04.677100008Z" level=error msg="encountered an error cleaning up failed sandbox 
\"77a75dac9c9f8165f19ab22aa953024a5e1c10959673b5e0528dc465b8c7c35e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:04.677213 containerd[1457]: time="2025-02-13T15:28:04.677162896Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-ccbb9dcd9-2n9js,Uid:7201f0b7-6fae-4b37-8849-0e2e56956168,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"77a75dac9c9f8165f19ab22aa953024a5e1c10959673b5e0528dc465b8c7c35e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:04.678589 containerd[1457]: time="2025-02-13T15:28:04.678543067Z" level=error msg="encountered an error cleaning up failed sandbox \"262a4443b576ec39fe8e513ab34a3dc1609fc6bb9ac7e9f491ab2ed971993fd6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:04.678709 containerd[1457]: time="2025-02-13T15:28:04.678607675Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b5dbfc55b-xsmtx,Uid:2ce64576-1ac8-4271-89c9-a8de4b77d706,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"262a4443b576ec39fe8e513ab34a3dc1609fc6bb9ac7e9f491ab2ed971993fd6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:04.682716 kubelet[2608]: E0213 15:28:04.682649 2608 remote_runtime.go:193] "RunPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"262a4443b576ec39fe8e513ab34a3dc1609fc6bb9ac7e9f491ab2ed971993fd6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:04.682943 kubelet[2608]: E0213 15:28:04.682868 2608 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77a75dac9c9f8165f19ab22aa953024a5e1c10959673b5e0528dc465b8c7c35e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:04.683092 kubelet[2608]: E0213 15:28:04.683057 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77a75dac9c9f8165f19ab22aa953024a5e1c10959673b5e0528dc465b8c7c35e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-ccbb9dcd9-2n9js" Feb 13 15:28:04.683182 containerd[1457]: time="2025-02-13T15:28:04.683123555Z" level=error msg="Failed to destroy network for sandbox \"0e912db41237de96aedc11e4bdfeaf221d0a4eb32d2c79b569dc586de06677c4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:04.683337 kubelet[2608]: E0213 15:28:04.683292 2608 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77a75dac9c9f8165f19ab22aa953024a5e1c10959673b5e0528dc465b8c7c35e\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-ccbb9dcd9-2n9js" Feb 13 15:28:04.683475 kubelet[2608]: E0213 15:28:04.683258 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"262a4443b576ec39fe8e513ab34a3dc1609fc6bb9ac7e9f491ab2ed971993fd6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b5dbfc55b-xsmtx" Feb 13 15:28:04.683546 kubelet[2608]: E0213 15:28:04.683530 2608 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"262a4443b576ec39fe8e513ab34a3dc1609fc6bb9ac7e9f491ab2ed971993fd6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b5dbfc55b-xsmtx" Feb 13 15:28:04.683678 kubelet[2608]: E0213 15:28:04.683448 2608 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-ccbb9dcd9-2n9js_calico-system(7201f0b7-6fae-4b37-8849-0e2e56956168)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-ccbb9dcd9-2n9js_calico-system(7201f0b7-6fae-4b37-8849-0e2e56956168)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"77a75dac9c9f8165f19ab22aa953024a5e1c10959673b5e0528dc465b8c7c35e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/calico-kube-controllers-ccbb9dcd9-2n9js" podUID="7201f0b7-6fae-4b37-8849-0e2e56956168" Feb 13 15:28:04.683890 kubelet[2608]: E0213 15:28:04.683612 2608 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5b5dbfc55b-xsmtx_calico-apiserver(2ce64576-1ac8-4271-89c9-a8de4b77d706)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5b5dbfc55b-xsmtx_calico-apiserver(2ce64576-1ac8-4271-89c9-a8de4b77d706)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"262a4443b576ec39fe8e513ab34a3dc1609fc6bb9ac7e9f491ab2ed971993fd6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b5dbfc55b-xsmtx" podUID="2ce64576-1ac8-4271-89c9-a8de4b77d706" Feb 13 15:28:04.684214 containerd[1457]: time="2025-02-13T15:28:04.684176326Z" level=error msg="encountered an error cleaning up failed sandbox \"0e912db41237de96aedc11e4bdfeaf221d0a4eb32d2c79b569dc586de06677c4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:04.684271 containerd[1457]: time="2025-02-13T15:28:04.684242254Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vhgmw,Uid:859d37ac-44c9-4b92-854b-e6ca0540dbd1,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0e912db41237de96aedc11e4bdfeaf221d0a4eb32d2c79b569dc586de06677c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:04.685952 kubelet[2608]: E0213 15:28:04.685821 2608 
remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e912db41237de96aedc11e4bdfeaf221d0a4eb32d2c79b569dc586de06677c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:04.686138 kubelet[2608]: E0213 15:28:04.686114 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e912db41237de96aedc11e4bdfeaf221d0a4eb32d2c79b569dc586de06677c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-vhgmw" Feb 13 15:28:04.686239 kubelet[2608]: E0213 15:28:04.686222 2608 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e912db41237de96aedc11e4bdfeaf221d0a4eb32d2c79b569dc586de06677c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-vhgmw" Feb 13 15:28:04.686412 kubelet[2608]: E0213 15:28:04.686387 2608 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-vhgmw_kube-system(859d37ac-44c9-4b92-854b-e6ca0540dbd1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-vhgmw_kube-system(859d37ac-44c9-4b92-854b-e6ca0540dbd1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0e912db41237de96aedc11e4bdfeaf221d0a4eb32d2c79b569dc586de06677c4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-vhgmw" podUID="859d37ac-44c9-4b92-854b-e6ca0540dbd1" Feb 13 15:28:04.694918 containerd[1457]: time="2025-02-13T15:28:04.694851970Z" level=error msg="Failed to destroy network for sandbox \"400e05d09f7638f6370144bc7c30317522f5d53fca24796970245605aaf907e8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:04.696004 containerd[1457]: time="2025-02-13T15:28:04.695958028Z" level=error msg="encountered an error cleaning up failed sandbox \"400e05d09f7638f6370144bc7c30317522f5d53fca24796970245605aaf907e8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:04.696112 containerd[1457]: time="2025-02-13T15:28:04.696087404Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-j2fhf,Uid:d26de4e2-c62e-4d8a-96e0-edbb9492094a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"400e05d09f7638f6370144bc7c30317522f5d53fca24796970245605aaf907e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:04.696429 kubelet[2608]: E0213 15:28:04.696364 2608 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"400e05d09f7638f6370144bc7c30317522f5d53fca24796970245605aaf907e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Feb 13 15:28:04.696591 kubelet[2608]: E0213 15:28:04.696524 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"400e05d09f7638f6370144bc7c30317522f5d53fca24796970245605aaf907e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-j2fhf" Feb 13 15:28:04.696591 kubelet[2608]: E0213 15:28:04.696545 2608 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"400e05d09f7638f6370144bc7c30317522f5d53fca24796970245605aaf907e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-j2fhf" Feb 13 15:28:04.696793 kubelet[2608]: E0213 15:28:04.696649 2608 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-j2fhf_kube-system(d26de4e2-c62e-4d8a-96e0-edbb9492094a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-j2fhf_kube-system(d26de4e2-c62e-4d8a-96e0-edbb9492094a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"400e05d09f7638f6370144bc7c30317522f5d53fca24796970245605aaf907e8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-j2fhf" podUID="d26de4e2-c62e-4d8a-96e0-edbb9492094a" Feb 13 15:28:04.706489 containerd[1457]: time="2025-02-13T15:28:04.706439488Z" level=error msg="Failed to destroy network for sandbox 
\"63565dfc82761cf07ac785a68455c18a81508c45b3108b2a758b41876e99d6f2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:04.707130 containerd[1457]: time="2025-02-13T15:28:04.707094889Z" level=error msg="encountered an error cleaning up failed sandbox \"63565dfc82761cf07ac785a68455c18a81508c45b3108b2a758b41876e99d6f2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:04.707196 containerd[1457]: time="2025-02-13T15:28:04.707167178Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-59ns4,Uid:8c67a7be-7144-4f11-b45e-f04dfd3de75c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"63565dfc82761cf07ac785a68455c18a81508c45b3108b2a758b41876e99d6f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:04.707627 kubelet[2608]: E0213 15:28:04.707402 2608 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63565dfc82761cf07ac785a68455c18a81508c45b3108b2a758b41876e99d6f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:04.707627 kubelet[2608]: E0213 15:28:04.707458 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63565dfc82761cf07ac785a68455c18a81508c45b3108b2a758b41876e99d6f2\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-59ns4" Feb 13 15:28:04.707627 kubelet[2608]: E0213 15:28:04.707480 2608 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63565dfc82761cf07ac785a68455c18a81508c45b3108b2a758b41876e99d6f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-59ns4" Feb 13 15:28:04.707783 kubelet[2608]: E0213 15:28:04.707517 2608 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-59ns4_calico-system(8c67a7be-7144-4f11-b45e-f04dfd3de75c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-59ns4_calico-system(8c67a7be-7144-4f11-b45e-f04dfd3de75c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"63565dfc82761cf07ac785a68455c18a81508c45b3108b2a758b41876e99d6f2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-59ns4" podUID="8c67a7be-7144-4f11-b45e-f04dfd3de75c" Feb 13 15:28:04.720079 containerd[1457]: time="2025-02-13T15:28:04.719433260Z" level=error msg="Failed to destroy network for sandbox \"44d4ae110854cc57cbfa55e39401c987d33916faeb2710cc87a549e4d94657b0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:04.720944 containerd[1457]: time="2025-02-13T15:28:04.720713859Z" level=error msg="encountered an error cleaning up 
failed sandbox \"44d4ae110854cc57cbfa55e39401c987d33916faeb2710cc87a549e4d94657b0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:04.720944 containerd[1457]: time="2025-02-13T15:28:04.720788268Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b5dbfc55b-kt4sn,Uid:086f2ebd-d6e8-46e2-831d-0f37b85724a2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"44d4ae110854cc57cbfa55e39401c987d33916faeb2710cc87a549e4d94657b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:04.721966 kubelet[2608]: E0213 15:28:04.721920 2608 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44d4ae110854cc57cbfa55e39401c987d33916faeb2710cc87a549e4d94657b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:04.722081 kubelet[2608]: E0213 15:28:04.721984 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44d4ae110854cc57cbfa55e39401c987d33916faeb2710cc87a549e4d94657b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b5dbfc55b-kt4sn" Feb 13 15:28:04.722081 kubelet[2608]: E0213 15:28:04.722020 2608 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"44d4ae110854cc57cbfa55e39401c987d33916faeb2710cc87a549e4d94657b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b5dbfc55b-kt4sn" Feb 13 15:28:04.722081 kubelet[2608]: E0213 15:28:04.722063 2608 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5b5dbfc55b-kt4sn_calico-apiserver(086f2ebd-d6e8-46e2-831d-0f37b85724a2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5b5dbfc55b-kt4sn_calico-apiserver(086f2ebd-d6e8-46e2-831d-0f37b85724a2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"44d4ae110854cc57cbfa55e39401c987d33916faeb2710cc87a549e4d94657b0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b5dbfc55b-kt4sn" podUID="086f2ebd-d6e8-46e2-831d-0f37b85724a2" Feb 13 15:28:05.250256 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-262a4443b576ec39fe8e513ab34a3dc1609fc6bb9ac7e9f491ab2ed971993fd6-shm.mount: Deactivated successfully. Feb 13 15:28:05.250350 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-77a75dac9c9f8165f19ab22aa953024a5e1c10959673b5e0528dc465b8c7c35e-shm.mount: Deactivated successfully. Feb 13 15:28:05.250409 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0e912db41237de96aedc11e4bdfeaf221d0a4eb32d2c79b569dc586de06677c4-shm.mount: Deactivated successfully. 
Feb 13 15:28:05.570073 kubelet[2608]: I0213 15:28:05.569947 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="400e05d09f7638f6370144bc7c30317522f5d53fca24796970245605aaf907e8" Feb 13 15:28:05.572685 containerd[1457]: time="2025-02-13T15:28:05.572640990Z" level=info msg="StopPodSandbox for \"400e05d09f7638f6370144bc7c30317522f5d53fca24796970245605aaf907e8\"" Feb 13 15:28:05.573022 containerd[1457]: time="2025-02-13T15:28:05.572817171Z" level=info msg="Ensure that sandbox 400e05d09f7638f6370144bc7c30317522f5d53fca24796970245605aaf907e8 in task-service has been cleanup successfully" Feb 13 15:28:05.574713 systemd[1]: run-netns-cni\x2d5112d4bf\x2d7b5f\x2d683e\x2de121\x2d00bd4a25cb06.mount: Deactivated successfully. Feb 13 15:28:05.575998 kubelet[2608]: I0213 15:28:05.575972 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44d4ae110854cc57cbfa55e39401c987d33916faeb2710cc87a549e4d94657b0" Feb 13 15:28:05.576868 containerd[1457]: time="2025-02-13T15:28:05.576497732Z" level=info msg="StopPodSandbox for \"44d4ae110854cc57cbfa55e39401c987d33916faeb2710cc87a549e4d94657b0\"" Feb 13 15:28:05.576868 containerd[1457]: time="2025-02-13T15:28:05.576594383Z" level=info msg="TearDown network for sandbox \"400e05d09f7638f6370144bc7c30317522f5d53fca24796970245605aaf907e8\" successfully" Feb 13 15:28:05.576868 containerd[1457]: time="2025-02-13T15:28:05.576613746Z" level=info msg="StopPodSandbox for \"400e05d09f7638f6370144bc7c30317522f5d53fca24796970245605aaf907e8\" returns successfully" Feb 13 15:28:05.576868 containerd[1457]: time="2025-02-13T15:28:05.576703436Z" level=info msg="Ensure that sandbox 44d4ae110854cc57cbfa55e39401c987d33916faeb2710cc87a549e4d94657b0 in task-service has been cleanup successfully" Feb 13 15:28:05.577196 kubelet[2608]: E0213 15:28:05.577003 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:05.577296 containerd[1457]: time="2025-02-13T15:28:05.577271464Z" level=info msg="TearDown network for sandbox \"44d4ae110854cc57cbfa55e39401c987d33916faeb2710cc87a549e4d94657b0\" successfully" Feb 13 15:28:05.577345 containerd[1457]: time="2025-02-13T15:28:05.577333912Z" level=info msg="StopPodSandbox for \"44d4ae110854cc57cbfa55e39401c987d33916faeb2710cc87a549e4d94657b0\" returns successfully" Feb 13 15:28:05.577476 containerd[1457]: time="2025-02-13T15:28:05.577438524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-j2fhf,Uid:d26de4e2-c62e-4d8a-96e0-edbb9492094a,Namespace:kube-system,Attempt:1,}" Feb 13 15:28:05.578497 containerd[1457]: time="2025-02-13T15:28:05.577801968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b5dbfc55b-kt4sn,Uid:086f2ebd-d6e8-46e2-831d-0f37b85724a2,Namespace:calico-apiserver,Attempt:1,}" Feb 13 15:28:05.579469 kubelet[2608]: I0213 15:28:05.579431 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="262a4443b576ec39fe8e513ab34a3dc1609fc6bb9ac7e9f491ab2ed971993fd6" Feb 13 15:28:05.583308 systemd[1]: run-netns-cni\x2d436ef4e1\x2d97ee\x2d62a4\x2dc401\x2d17884aad0cd1.mount: Deactivated successfully. 
Feb 13 15:28:05.584812 containerd[1457]: time="2025-02-13T15:28:05.584696833Z" level=info msg="StopPodSandbox for \"262a4443b576ec39fe8e513ab34a3dc1609fc6bb9ac7e9f491ab2ed971993fd6\"" Feb 13 15:28:05.586192 containerd[1457]: time="2025-02-13T15:28:05.586156048Z" level=info msg="Ensure that sandbox 262a4443b576ec39fe8e513ab34a3dc1609fc6bb9ac7e9f491ab2ed971993fd6 in task-service has been cleanup successfully" Feb 13 15:28:05.589246 containerd[1457]: time="2025-02-13T15:28:05.587780122Z" level=info msg="TearDown network for sandbox \"262a4443b576ec39fe8e513ab34a3dc1609fc6bb9ac7e9f491ab2ed971993fd6\" successfully" Feb 13 15:28:05.589246 containerd[1457]: time="2025-02-13T15:28:05.587812686Z" level=info msg="StopPodSandbox for \"262a4443b576ec39fe8e513ab34a3dc1609fc6bb9ac7e9f491ab2ed971993fd6\" returns successfully" Feb 13 15:28:05.589316 systemd[1]: run-netns-cni\x2d4ef7f727\x2de312\x2d5459\x2d1c76\x2d06b7197c9f02.mount: Deactivated successfully. Feb 13 15:28:05.595361 kubelet[2608]: I0213 15:28:05.594933 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63565dfc82761cf07ac785a68455c18a81508c45b3108b2a758b41876e99d6f2" Feb 13 15:28:05.596151 containerd[1457]: time="2025-02-13T15:28:05.595296461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b5dbfc55b-xsmtx,Uid:2ce64576-1ac8-4271-89c9-a8de4b77d706,Namespace:calico-apiserver,Attempt:1,}" Feb 13 15:28:05.598796 kubelet[2608]: I0213 15:28:05.598166 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77a75dac9c9f8165f19ab22aa953024a5e1c10959673b5e0528dc465b8c7c35e" Feb 13 15:28:05.599372 containerd[1457]: time="2025-02-13T15:28:05.599050911Z" level=info msg="StopPodSandbox for \"77a75dac9c9f8165f19ab22aa953024a5e1c10959673b5e0528dc465b8c7c35e\"" Feb 13 15:28:05.599372 containerd[1457]: time="2025-02-13T15:28:05.599237733Z" level=info msg="Ensure that sandbox 
77a75dac9c9f8165f19ab22aa953024a5e1c10959673b5e0528dc465b8c7c35e in task-service has been cleanup successfully" Feb 13 15:28:05.601301 containerd[1457]: time="2025-02-13T15:28:05.601264255Z" level=info msg="TearDown network for sandbox \"77a75dac9c9f8165f19ab22aa953024a5e1c10959673b5e0528dc465b8c7c35e\" successfully" Feb 13 15:28:05.601395 containerd[1457]: time="2025-02-13T15:28:05.601378829Z" level=info msg="StopPodSandbox for \"77a75dac9c9f8165f19ab22aa953024a5e1c10959673b5e0528dc465b8c7c35e\" returns successfully" Feb 13 15:28:05.602051 systemd[1]: run-netns-cni\x2dd95d9dfc\x2d9bef\x2db392\x2d0dd2\x2d2aea69bd771e.mount: Deactivated successfully. Feb 13 15:28:05.603047 containerd[1457]: time="2025-02-13T15:28:05.602174204Z" level=info msg="StopPodSandbox for \"63565dfc82761cf07ac785a68455c18a81508c45b3108b2a758b41876e99d6f2\"" Feb 13 15:28:05.603047 containerd[1457]: time="2025-02-13T15:28:05.602341104Z" level=info msg="Ensure that sandbox 63565dfc82761cf07ac785a68455c18a81508c45b3108b2a758b41876e99d6f2 in task-service has been cleanup successfully" Feb 13 15:28:05.603047 containerd[1457]: time="2025-02-13T15:28:05.602508724Z" level=info msg="TearDown network for sandbox \"63565dfc82761cf07ac785a68455c18a81508c45b3108b2a758b41876e99d6f2\" successfully" Feb 13 15:28:05.603047 containerd[1457]: time="2025-02-13T15:28:05.602524046Z" level=info msg="StopPodSandbox for \"63565dfc82761cf07ac785a68455c18a81508c45b3108b2a758b41876e99d6f2\" returns successfully" Feb 13 15:28:05.604011 containerd[1457]: time="2025-02-13T15:28:05.603636979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-59ns4,Uid:8c67a7be-7144-4f11-b45e-f04dfd3de75c,Namespace:calico-system,Attempt:1,}" Feb 13 15:28:05.604691 kubelet[2608]: I0213 15:28:05.604302 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e912db41237de96aedc11e4bdfeaf221d0a4eb32d2c79b569dc586de06677c4" Feb 13 15:28:05.605443 containerd[1457]: 
time="2025-02-13T15:28:05.603968059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-ccbb9dcd9-2n9js,Uid:7201f0b7-6fae-4b37-8849-0e2e56956168,Namespace:calico-system,Attempt:1,}" Feb 13 15:28:05.605781 containerd[1457]: time="2025-02-13T15:28:05.604880608Z" level=info msg="StopPodSandbox for \"0e912db41237de96aedc11e4bdfeaf221d0a4eb32d2c79b569dc586de06677c4\"" Feb 13 15:28:05.606524 containerd[1457]: time="2025-02-13T15:28:05.606476639Z" level=info msg="Ensure that sandbox 0e912db41237de96aedc11e4bdfeaf221d0a4eb32d2c79b569dc586de06677c4 in task-service has been cleanup successfully" Feb 13 15:28:05.608703 containerd[1457]: time="2025-02-13T15:28:05.608670462Z" level=info msg="TearDown network for sandbox \"0e912db41237de96aedc11e4bdfeaf221d0a4eb32d2c79b569dc586de06677c4\" successfully" Feb 13 15:28:05.608703 containerd[1457]: time="2025-02-13T15:28:05.608696625Z" level=info msg="StopPodSandbox for \"0e912db41237de96aedc11e4bdfeaf221d0a4eb32d2c79b569dc586de06677c4\" returns successfully" Feb 13 15:28:05.609189 kubelet[2608]: E0213 15:28:05.609166 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:05.609511 containerd[1457]: time="2025-02-13T15:28:05.609486519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vhgmw,Uid:859d37ac-44c9-4b92-854b-e6ca0540dbd1,Namespace:kube-system,Attempt:1,}" Feb 13 15:28:05.731672 containerd[1457]: time="2025-02-13T15:28:05.731488238Z" level=error msg="Failed to destroy network for sandbox \"776f35edebf581700803135a5d9ab65368a38fb844db753a8fd392cbe7c754d3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:05.732411 containerd[1457]: time="2025-02-13T15:28:05.732372183Z" level=error 
msg="Failed to destroy network for sandbox \"0cdfe4900b8aded4d3dc434c00139a606306819ed296214d867b55f86ba8ee4a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:05.732577 containerd[1457]: time="2025-02-13T15:28:05.732404667Z" level=error msg="encountered an error cleaning up failed sandbox \"776f35edebf581700803135a5d9ab65368a38fb844db753a8fd392cbe7c754d3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:05.732647 containerd[1457]: time="2025-02-13T15:28:05.732627854Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-ccbb9dcd9-2n9js,Uid:7201f0b7-6fae-4b37-8849-0e2e56956168,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"776f35edebf581700803135a5d9ab65368a38fb844db753a8fd392cbe7c754d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:05.732885 kubelet[2608]: E0213 15:28:05.732838 2608 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"776f35edebf581700803135a5d9ab65368a38fb844db753a8fd392cbe7c754d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:05.732938 kubelet[2608]: E0213 15:28:05.732894 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"776f35edebf581700803135a5d9ab65368a38fb844db753a8fd392cbe7c754d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-ccbb9dcd9-2n9js" Feb 13 15:28:05.732938 kubelet[2608]: E0213 15:28:05.732914 2608 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"776f35edebf581700803135a5d9ab65368a38fb844db753a8fd392cbe7c754d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-ccbb9dcd9-2n9js" Feb 13 15:28:05.732987 kubelet[2608]: E0213 15:28:05.732955 2608 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-ccbb9dcd9-2n9js_calico-system(7201f0b7-6fae-4b37-8849-0e2e56956168)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-ccbb9dcd9-2n9js_calico-system(7201f0b7-6fae-4b37-8849-0e2e56956168)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"776f35edebf581700803135a5d9ab65368a38fb844db753a8fd392cbe7c754d3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-ccbb9dcd9-2n9js" podUID="7201f0b7-6fae-4b37-8849-0e2e56956168" Feb 13 15:28:05.733219 containerd[1457]: time="2025-02-13T15:28:05.733078188Z" level=error msg="encountered an error cleaning up failed sandbox \"0cdfe4900b8aded4d3dc434c00139a606306819ed296214d867b55f86ba8ee4a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:05.733219 containerd[1457]: time="2025-02-13T15:28:05.733144396Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-j2fhf,Uid:d26de4e2-c62e-4d8a-96e0-edbb9492094a,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"0cdfe4900b8aded4d3dc434c00139a606306819ed296214d867b55f86ba8ee4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:05.733335 kubelet[2608]: E0213 15:28:05.733301 2608 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0cdfe4900b8aded4d3dc434c00139a606306819ed296214d867b55f86ba8ee4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:05.733384 kubelet[2608]: E0213 15:28:05.733340 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0cdfe4900b8aded4d3dc434c00139a606306819ed296214d867b55f86ba8ee4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-j2fhf" Feb 13 15:28:05.733384 kubelet[2608]: E0213 15:28:05.733356 2608 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0cdfe4900b8aded4d3dc434c00139a606306819ed296214d867b55f86ba8ee4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-j2fhf" Feb 13 15:28:05.733438 kubelet[2608]: E0213 15:28:05.733383 2608 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-j2fhf_kube-system(d26de4e2-c62e-4d8a-96e0-edbb9492094a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-j2fhf_kube-system(d26de4e2-c62e-4d8a-96e0-edbb9492094a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0cdfe4900b8aded4d3dc434c00139a606306819ed296214d867b55f86ba8ee4a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-j2fhf" podUID="d26de4e2-c62e-4d8a-96e0-edbb9492094a" Feb 13 15:28:05.747859 containerd[1457]: time="2025-02-13T15:28:05.747807590Z" level=error msg="Failed to destroy network for sandbox \"9f518b19d3683f9344fecab0db2f16d46f20150c992e387844d1811fc72fa170\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:05.748970 containerd[1457]: time="2025-02-13T15:28:05.748929165Z" level=error msg="encountered an error cleaning up failed sandbox \"9f518b19d3683f9344fecab0db2f16d46f20150c992e387844d1811fc72fa170\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:05.749076 containerd[1457]: time="2025-02-13T15:28:05.749041298Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b5dbfc55b-xsmtx,Uid:2ce64576-1ac8-4271-89c9-a8de4b77d706,Namespace:calico-apiserver,Attempt:1,} failed, error" 
error="failed to setup network for sandbox \"9f518b19d3683f9344fecab0db2f16d46f20150c992e387844d1811fc72fa170\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:05.751240 kubelet[2608]: E0213 15:28:05.750583 2608 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f518b19d3683f9344fecab0db2f16d46f20150c992e387844d1811fc72fa170\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:05.751240 kubelet[2608]: E0213 15:28:05.750730 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f518b19d3683f9344fecab0db2f16d46f20150c992e387844d1811fc72fa170\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b5dbfc55b-xsmtx" Feb 13 15:28:05.751240 kubelet[2608]: E0213 15:28:05.750756 2608 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f518b19d3683f9344fecab0db2f16d46f20150c992e387844d1811fc72fa170\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b5dbfc55b-xsmtx" Feb 13 15:28:05.751354 kubelet[2608]: E0213 15:28:05.750806 2608 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-5b5dbfc55b-xsmtx_calico-apiserver(2ce64576-1ac8-4271-89c9-a8de4b77d706)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5b5dbfc55b-xsmtx_calico-apiserver(2ce64576-1ac8-4271-89c9-a8de4b77d706)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9f518b19d3683f9344fecab0db2f16d46f20150c992e387844d1811fc72fa170\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b5dbfc55b-xsmtx" podUID="2ce64576-1ac8-4271-89c9-a8de4b77d706" Feb 13 15:28:05.752438 containerd[1457]: time="2025-02-13T15:28:05.752403540Z" level=error msg="Failed to destroy network for sandbox \"9bde7141846a528b8406457cadcce6da8913cc6f999d44f2fda3999be82f378f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:05.752937 containerd[1457]: time="2025-02-13T15:28:05.752905080Z" level=error msg="encountered an error cleaning up failed sandbox \"9bde7141846a528b8406457cadcce6da8913cc6f999d44f2fda3999be82f378f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:05.753057 containerd[1457]: time="2025-02-13T15:28:05.753033056Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b5dbfc55b-kt4sn,Uid:086f2ebd-d6e8-46e2-831d-0f37b85724a2,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"9bde7141846a528b8406457cadcce6da8913cc6f999d44f2fda3999be82f378f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:05.753361 kubelet[2608]: E0213 15:28:05.753327 2608 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9bde7141846a528b8406457cadcce6da8913cc6f999d44f2fda3999be82f378f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:05.753428 kubelet[2608]: E0213 15:28:05.753377 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9bde7141846a528b8406457cadcce6da8913cc6f999d44f2fda3999be82f378f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b5dbfc55b-kt4sn" Feb 13 15:28:05.753428 kubelet[2608]: E0213 15:28:05.753396 2608 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9bde7141846a528b8406457cadcce6da8913cc6f999d44f2fda3999be82f378f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b5dbfc55b-kt4sn" Feb 13 15:28:05.753481 kubelet[2608]: E0213 15:28:05.753428 2608 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5b5dbfc55b-kt4sn_calico-apiserver(086f2ebd-d6e8-46e2-831d-0f37b85724a2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5b5dbfc55b-kt4sn_calico-apiserver(086f2ebd-d6e8-46e2-831d-0f37b85724a2)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"9bde7141846a528b8406457cadcce6da8913cc6f999d44f2fda3999be82f378f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b5dbfc55b-kt4sn" podUID="086f2ebd-d6e8-46e2-831d-0f37b85724a2" Feb 13 15:28:05.759919 containerd[1457]: time="2025-02-13T15:28:05.759882675Z" level=error msg="Failed to destroy network for sandbox \"2c09c28a03fd4472b318aa06a1ea4e03909edc65be9ace9f58b2d77cd52e637a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:05.760484 containerd[1457]: time="2025-02-13T15:28:05.760313807Z" level=error msg="encountered an error cleaning up failed sandbox \"2c09c28a03fd4472b318aa06a1ea4e03909edc65be9ace9f58b2d77cd52e637a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:05.760484 containerd[1457]: time="2025-02-13T15:28:05.760379895Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-59ns4,Uid:8c67a7be-7144-4f11-b45e-f04dfd3de75c,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"2c09c28a03fd4472b318aa06a1ea4e03909edc65be9ace9f58b2d77cd52e637a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:05.760612 kubelet[2608]: E0213 15:28:05.760575 2608 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"2c09c28a03fd4472b318aa06a1ea4e03909edc65be9ace9f58b2d77cd52e637a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:05.760680 kubelet[2608]: E0213 15:28:05.760666 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c09c28a03fd4472b318aa06a1ea4e03909edc65be9ace9f58b2d77cd52e637a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-59ns4" Feb 13 15:28:05.760706 kubelet[2608]: E0213 15:28:05.760686 2608 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c09c28a03fd4472b318aa06a1ea4e03909edc65be9ace9f58b2d77cd52e637a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-59ns4" Feb 13 15:28:05.760763 kubelet[2608]: E0213 15:28:05.760737 2608 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-59ns4_calico-system(8c67a7be-7144-4f11-b45e-f04dfd3de75c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-59ns4_calico-system(8c67a7be-7144-4f11-b45e-f04dfd3de75c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2c09c28a03fd4472b318aa06a1ea4e03909edc65be9ace9f58b2d77cd52e637a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-59ns4" 
podUID="8c67a7be-7144-4f11-b45e-f04dfd3de75c" Feb 13 15:28:05.771456 containerd[1457]: time="2025-02-13T15:28:05.771409535Z" level=error msg="Failed to destroy network for sandbox \"67348c9f7030a07e08918ec471c01d8d492fb7b37edf0b70d665014a3b6d6d75\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:05.772296 containerd[1457]: time="2025-02-13T15:28:05.771949239Z" level=error msg="encountered an error cleaning up failed sandbox \"67348c9f7030a07e08918ec471c01d8d492fb7b37edf0b70d665014a3b6d6d75\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:05.772296 containerd[1457]: time="2025-02-13T15:28:05.772020208Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vhgmw,Uid:859d37ac-44c9-4b92-854b-e6ca0540dbd1,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"67348c9f7030a07e08918ec471c01d8d492fb7b37edf0b70d665014a3b6d6d75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:05.772387 kubelet[2608]: E0213 15:28:05.772287 2608 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67348c9f7030a07e08918ec471c01d8d492fb7b37edf0b70d665014a3b6d6d75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:05.772387 kubelet[2608]: E0213 15:28:05.772348 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for 
pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67348c9f7030a07e08918ec471c01d8d492fb7b37edf0b70d665014a3b6d6d75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-vhgmw" Feb 13 15:28:05.772387 kubelet[2608]: E0213 15:28:05.772376 2608 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67348c9f7030a07e08918ec471c01d8d492fb7b37edf0b70d665014a3b6d6d75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-vhgmw" Feb 13 15:28:05.772471 kubelet[2608]: E0213 15:28:05.772414 2608 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-vhgmw_kube-system(859d37ac-44c9-4b92-854b-e6ca0540dbd1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-vhgmw_kube-system(859d37ac-44c9-4b92-854b-e6ca0540dbd1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"67348c9f7030a07e08918ec471c01d8d492fb7b37edf0b70d665014a3b6d6d75\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-vhgmw" podUID="859d37ac-44c9-4b92-854b-e6ca0540dbd1" Feb 13 15:28:06.250803 systemd[1]: run-netns-cni\x2de5de9cb1\x2d3bf0\x2d8821\x2dcc7c\x2defb56bdf6ee5.mount: Deactivated successfully. Feb 13 15:28:06.250894 systemd[1]: run-netns-cni\x2d61a03fcb\x2dffbe\x2d63df\x2d349a\x2dd0448d0841c4.mount: Deactivated successfully. 
Feb 13 15:28:06.607639 kubelet[2608]: I0213 15:28:06.607189 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0cdfe4900b8aded4d3dc434c00139a606306819ed296214d867b55f86ba8ee4a" Feb 13 15:28:06.609146 kubelet[2608]: I0213 15:28:06.609121 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9bde7141846a528b8406457cadcce6da8913cc6f999d44f2fda3999be82f378f" Feb 13 15:28:06.610252 containerd[1457]: time="2025-02-13T15:28:06.609942721Z" level=info msg="StopPodSandbox for \"9bde7141846a528b8406457cadcce6da8913cc6f999d44f2fda3999be82f378f\"" Feb 13 15:28:06.610252 containerd[1457]: time="2025-02-13T15:28:06.610114861Z" level=info msg="Ensure that sandbox 9bde7141846a528b8406457cadcce6da8913cc6f999d44f2fda3999be82f378f in task-service has been cleanup successfully" Feb 13 15:28:06.611959 kubelet[2608]: I0213 15:28:06.611492 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f518b19d3683f9344fecab0db2f16d46f20150c992e387844d1811fc72fa170" Feb 13 15:28:06.612071 containerd[1457]: time="2025-02-13T15:28:06.612034763Z" level=info msg="StopPodSandbox for \"9f518b19d3683f9344fecab0db2f16d46f20150c992e387844d1811fc72fa170\"" Feb 13 15:28:06.612457 containerd[1457]: time="2025-02-13T15:28:06.612207623Z" level=info msg="Ensure that sandbox 9f518b19d3683f9344fecab0db2f16d46f20150c992e387844d1811fc72fa170 in task-service has been cleanup successfully" Feb 13 15:28:06.612457 containerd[1457]: time="2025-02-13T15:28:06.612383123Z" level=info msg="TearDown network for sandbox \"9bde7141846a528b8406457cadcce6da8913cc6f999d44f2fda3999be82f378f\" successfully" Feb 13 15:28:06.612457 containerd[1457]: time="2025-02-13T15:28:06.612410246Z" level=info msg="StopPodSandbox for \"9bde7141846a528b8406457cadcce6da8913cc6f999d44f2fda3999be82f378f\" returns successfully" Feb 13 15:28:06.612486 systemd[1]: run-netns-cni\x2ddf6d580a\x2d3f64\x2ddeda\x2de82a\x2d26e3dfaeef77.mount: Deactivated 
successfully. Feb 13 15:28:06.613478 containerd[1457]: time="2025-02-13T15:28:06.613059081Z" level=info msg="TearDown network for sandbox \"9f518b19d3683f9344fecab0db2f16d46f20150c992e387844d1811fc72fa170\" successfully" Feb 13 15:28:06.613478 containerd[1457]: time="2025-02-13T15:28:06.613084524Z" level=info msg="StopPodSandbox for \"9f518b19d3683f9344fecab0db2f16d46f20150c992e387844d1811fc72fa170\" returns successfully" Feb 13 15:28:06.613478 containerd[1457]: time="2025-02-13T15:28:06.613267705Z" level=info msg="StopPodSandbox for \"44d4ae110854cc57cbfa55e39401c987d33916faeb2710cc87a549e4d94657b0\"" Feb 13 15:28:06.613478 containerd[1457]: time="2025-02-13T15:28:06.613358796Z" level=info msg="TearDown network for sandbox \"44d4ae110854cc57cbfa55e39401c987d33916faeb2710cc87a549e4d94657b0\" successfully" Feb 13 15:28:06.613478 containerd[1457]: time="2025-02-13T15:28:06.613369157Z" level=info msg="StopPodSandbox for \"44d4ae110854cc57cbfa55e39401c987d33916faeb2710cc87a549e4d94657b0\" returns successfully" Feb 13 15:28:06.614019 containerd[1457]: time="2025-02-13T15:28:06.613816569Z" level=info msg="StopPodSandbox for \"262a4443b576ec39fe8e513ab34a3dc1609fc6bb9ac7e9f491ab2ed971993fd6\"" Feb 13 15:28:06.614019 containerd[1457]: time="2025-02-13T15:28:06.613907059Z" level=info msg="TearDown network for sandbox \"262a4443b576ec39fe8e513ab34a3dc1609fc6bb9ac7e9f491ab2ed971993fd6\" successfully" Feb 13 15:28:06.614019 containerd[1457]: time="2025-02-13T15:28:06.613916860Z" level=info msg="StopPodSandbox for \"262a4443b576ec39fe8e513ab34a3dc1609fc6bb9ac7e9f491ab2ed971993fd6\" returns successfully" Feb 13 15:28:06.614019 containerd[1457]: time="2025-02-13T15:28:06.613916100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b5dbfc55b-kt4sn,Uid:086f2ebd-d6e8-46e2-831d-0f37b85724a2,Namespace:calico-apiserver,Attempt:2,}" Feb 13 15:28:06.614671 containerd[1457]: time="2025-02-13T15:28:06.614642624Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-5b5dbfc55b-xsmtx,Uid:2ce64576-1ac8-4271-89c9-a8de4b77d706,Namespace:calico-apiserver,Attempt:2,}" Feb 13 15:28:06.615314 kubelet[2608]: I0213 15:28:06.615260 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c09c28a03fd4472b318aa06a1ea4e03909edc65be9ace9f58b2d77cd52e637a" Feb 13 15:28:06.615399 systemd[1]: run-netns-cni\x2deeb129f0\x2d33d2\x2d48c1\x2d1786\x2d506173e5873d.mount: Deactivated successfully. Feb 13 15:28:06.616205 containerd[1457]: time="2025-02-13T15:28:06.616175001Z" level=info msg="StopPodSandbox for \"2c09c28a03fd4472b318aa06a1ea4e03909edc65be9ace9f58b2d77cd52e637a\"" Feb 13 15:28:06.616607 containerd[1457]: time="2025-02-13T15:28:06.616345861Z" level=info msg="Ensure that sandbox 2c09c28a03fd4472b318aa06a1ea4e03909edc65be9ace9f58b2d77cd52e637a in task-service has been cleanup successfully" Feb 13 15:28:06.617683 kubelet[2608]: I0213 15:28:06.616803 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="776f35edebf581700803135a5d9ab65368a38fb844db753a8fd392cbe7c754d3" Feb 13 15:28:06.617773 containerd[1457]: time="2025-02-13T15:28:06.617279329Z" level=info msg="StopPodSandbox for \"776f35edebf581700803135a5d9ab65368a38fb844db753a8fd392cbe7c754d3\"" Feb 13 15:28:06.617773 containerd[1457]: time="2025-02-13T15:28:06.617471951Z" level=info msg="Ensure that sandbox 776f35edebf581700803135a5d9ab65368a38fb844db753a8fd392cbe7c754d3 in task-service has been cleanup successfully" Feb 13 15:28:06.617773 containerd[1457]: time="2025-02-13T15:28:06.617282049Z" level=info msg="TearDown network for sandbox \"2c09c28a03fd4472b318aa06a1ea4e03909edc65be9ace9f58b2d77cd52e637a\" successfully" Feb 13 15:28:06.617773 containerd[1457]: time="2025-02-13T15:28:06.617525597Z" level=info msg="StopPodSandbox for \"2c09c28a03fd4472b318aa06a1ea4e03909edc65be9ace9f58b2d77cd52e637a\" returns successfully" Feb 13 15:28:06.617977 containerd[1457]: 
time="2025-02-13T15:28:06.617942366Z" level=info msg="StopPodSandbox for \"63565dfc82761cf07ac785a68455c18a81508c45b3108b2a758b41876e99d6f2\"" Feb 13 15:28:06.618066 containerd[1457]: time="2025-02-13T15:28:06.618050178Z" level=info msg="TearDown network for sandbox \"63565dfc82761cf07ac785a68455c18a81508c45b3108b2a758b41876e99d6f2\" successfully" Feb 13 15:28:06.618104 containerd[1457]: time="2025-02-13T15:28:06.618064060Z" level=info msg="StopPodSandbox for \"63565dfc82761cf07ac785a68455c18a81508c45b3108b2a758b41876e99d6f2\" returns successfully" Feb 13 15:28:06.618104 containerd[1457]: time="2025-02-13T15:28:06.617955967Z" level=info msg="TearDown network for sandbox \"776f35edebf581700803135a5d9ab65368a38fb844db753a8fd392cbe7c754d3\" successfully" Feb 13 15:28:06.618157 containerd[1457]: time="2025-02-13T15:28:06.618110545Z" level=info msg="StopPodSandbox for \"776f35edebf581700803135a5d9ab65368a38fb844db753a8fd392cbe7c754d3\" returns successfully" Feb 13 15:28:06.618411 systemd[1]: run-netns-cni\x2d85895618\x2dc391\x2d2343\x2dda87\x2d4339143647c3.mount: Deactivated successfully. 
Feb 13 15:28:06.619595 containerd[1457]: time="2025-02-13T15:28:06.619048773Z" level=info msg="StopPodSandbox for \"77a75dac9c9f8165f19ab22aa953024a5e1c10959673b5e0528dc465b8c7c35e\"" Feb 13 15:28:06.619595 containerd[1457]: time="2025-02-13T15:28:06.619135343Z" level=info msg="TearDown network for sandbox \"77a75dac9c9f8165f19ab22aa953024a5e1c10959673b5e0528dc465b8c7c35e\" successfully" Feb 13 15:28:06.619595 containerd[1457]: time="2025-02-13T15:28:06.619144104Z" level=info msg="StopPodSandbox for \"77a75dac9c9f8165f19ab22aa953024a5e1c10959673b5e0528dc465b8c7c35e\" returns successfully" Feb 13 15:28:06.619595 containerd[1457]: time="2025-02-13T15:28:06.619228074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-59ns4,Uid:8c67a7be-7144-4f11-b45e-f04dfd3de75c,Namespace:calico-system,Attempt:2,}" Feb 13 15:28:06.621352 kubelet[2608]: I0213 15:28:06.620527 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67348c9f7030a07e08918ec471c01d8d492fb7b37edf0b70d665014a3b6d6d75" Feb 13 15:28:06.621472 containerd[1457]: time="2025-02-13T15:28:06.620887506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-ccbb9dcd9-2n9js,Uid:7201f0b7-6fae-4b37-8849-0e2e56956168,Namespace:calico-system,Attempt:2,}" Feb 13 15:28:06.621472 containerd[1457]: time="2025-02-13T15:28:06.620967835Z" level=info msg="StopPodSandbox for \"67348c9f7030a07e08918ec471c01d8d492fb7b37edf0b70d665014a3b6d6d75\"" Feb 13 15:28:06.621472 containerd[1457]: time="2025-02-13T15:28:06.621036123Z" level=info msg="StopPodSandbox for \"0cdfe4900b8aded4d3dc434c00139a606306819ed296214d867b55f86ba8ee4a\"" Feb 13 15:28:06.621472 containerd[1457]: time="2025-02-13T15:28:06.621334557Z" level=info msg="Ensure that sandbox 0cdfe4900b8aded4d3dc434c00139a606306819ed296214d867b55f86ba8ee4a in task-service has been cleanup successfully" Feb 13 15:28:06.621695 containerd[1457]: time="2025-02-13T15:28:06.621660715Z" level=info 
msg="Ensure that sandbox 67348c9f7030a07e08918ec471c01d8d492fb7b37edf0b70d665014a3b6d6d75 in task-service has been cleanup successfully" Feb 13 15:28:06.621729 systemd[1]: run-netns-cni\x2d5959beaa\x2decc0\x2d93c5\x2d2ba6\x2d980e6d0798c7.mount: Deactivated successfully. Feb 13 15:28:06.621901 containerd[1457]: time="2025-02-13T15:28:06.621709561Z" level=info msg="TearDown network for sandbox \"0cdfe4900b8aded4d3dc434c00139a606306819ed296214d867b55f86ba8ee4a\" successfully" Feb 13 15:28:06.621901 containerd[1457]: time="2025-02-13T15:28:06.621890902Z" level=info msg="StopPodSandbox for \"0cdfe4900b8aded4d3dc434c00139a606306819ed296214d867b55f86ba8ee4a\" returns successfully" Feb 13 15:28:06.622305 containerd[1457]: time="2025-02-13T15:28:06.622198937Z" level=info msg="TearDown network for sandbox \"67348c9f7030a07e08918ec471c01d8d492fb7b37edf0b70d665014a3b6d6d75\" successfully" Feb 13 15:28:06.622305 containerd[1457]: time="2025-02-13T15:28:06.622231661Z" level=info msg="StopPodSandbox for \"67348c9f7030a07e08918ec471c01d8d492fb7b37edf0b70d665014a3b6d6d75\" returns successfully" Feb 13 15:28:06.622305 containerd[1457]: time="2025-02-13T15:28:06.622244623Z" level=info msg="StopPodSandbox for \"400e05d09f7638f6370144bc7c30317522f5d53fca24796970245605aaf907e8\"" Feb 13 15:28:06.622586 containerd[1457]: time="2025-02-13T15:28:06.622328832Z" level=info msg="TearDown network for sandbox \"400e05d09f7638f6370144bc7c30317522f5d53fca24796970245605aaf907e8\" successfully" Feb 13 15:28:06.622586 containerd[1457]: time="2025-02-13T15:28:06.622339874Z" level=info msg="StopPodSandbox for \"400e05d09f7638f6370144bc7c30317522f5d53fca24796970245605aaf907e8\" returns successfully" Feb 13 15:28:06.623008 kubelet[2608]: E0213 15:28:06.622486 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:06.623079 containerd[1457]: 
time="2025-02-13T15:28:06.622844212Z" level=info msg="StopPodSandbox for \"0e912db41237de96aedc11e4bdfeaf221d0a4eb32d2c79b569dc586de06677c4\"" Feb 13 15:28:06.623079 containerd[1457]: time="2025-02-13T15:28:06.622924941Z" level=info msg="TearDown network for sandbox \"0e912db41237de96aedc11e4bdfeaf221d0a4eb32d2c79b569dc586de06677c4\" successfully" Feb 13 15:28:06.623079 containerd[1457]: time="2025-02-13T15:28:06.622934542Z" level=info msg="StopPodSandbox for \"0e912db41237de96aedc11e4bdfeaf221d0a4eb32d2c79b569dc586de06677c4\" returns successfully" Feb 13 15:28:06.623079 containerd[1457]: time="2025-02-13T15:28:06.623032354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-j2fhf,Uid:d26de4e2-c62e-4d8a-96e0-edbb9492094a,Namespace:kube-system,Attempt:2,}" Feb 13 15:28:06.623170 kubelet[2608]: E0213 15:28:06.623090 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:06.624504 containerd[1457]: time="2025-02-13T15:28:06.623701711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vhgmw,Uid:859d37ac-44c9-4b92-854b-e6ca0540dbd1,Namespace:kube-system,Attempt:2,}" Feb 13 15:28:06.950453 containerd[1457]: time="2025-02-13T15:28:06.950348569Z" level=error msg="Failed to destroy network for sandbox \"b2452a586ae430ec9ed425f170afb0d9071b24ad7295b6f6a4477e012c9d521e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:06.951728 containerd[1457]: time="2025-02-13T15:28:06.951603674Z" level=error msg="encountered an error cleaning up failed sandbox \"b2452a586ae430ec9ed425f170afb0d9071b24ad7295b6f6a4477e012c9d521e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:06.951728 containerd[1457]: time="2025-02-13T15:28:06.951679403Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vhgmw,Uid:859d37ac-44c9-4b92-854b-e6ca0540dbd1,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"b2452a586ae430ec9ed425f170afb0d9071b24ad7295b6f6a4477e012c9d521e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:06.951957 kubelet[2608]: E0213 15:28:06.951904 2608 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2452a586ae430ec9ed425f170afb0d9071b24ad7295b6f6a4477e012c9d521e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:06.952038 kubelet[2608]: E0213 15:28:06.951978 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2452a586ae430ec9ed425f170afb0d9071b24ad7295b6f6a4477e012c9d521e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-vhgmw" Feb 13 15:28:06.952038 kubelet[2608]: E0213 15:28:06.952002 2608 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2452a586ae430ec9ed425f170afb0d9071b24ad7295b6f6a4477e012c9d521e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-vhgmw" Feb 13 15:28:06.952293 kubelet[2608]: E0213 15:28:06.952047 2608 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-vhgmw_kube-system(859d37ac-44c9-4b92-854b-e6ca0540dbd1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-vhgmw_kube-system(859d37ac-44c9-4b92-854b-e6ca0540dbd1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b2452a586ae430ec9ed425f170afb0d9071b24ad7295b6f6a4477e012c9d521e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-vhgmw" podUID="859d37ac-44c9-4b92-854b-e6ca0540dbd1" Feb 13 15:28:06.958197 containerd[1457]: time="2025-02-13T15:28:06.958148430Z" level=error msg="Failed to destroy network for sandbox \"3f20dd0cef541fa916ded40902978e512025f116521ec7f105b4cd1d798e2080\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:06.958548 containerd[1457]: time="2025-02-13T15:28:06.958519193Z" level=error msg="encountered an error cleaning up failed sandbox \"3f20dd0cef541fa916ded40902978e512025f116521ec7f105b4cd1d798e2080\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:06.958610 containerd[1457]: time="2025-02-13T15:28:06.958578640Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b5dbfc55b-kt4sn,Uid:086f2ebd-d6e8-46e2-831d-0f37b85724a2,Namespace:calico-apiserver,Attempt:2,} failed, error" 
error="failed to setup network for sandbox \"3f20dd0cef541fa916ded40902978e512025f116521ec7f105b4cd1d798e2080\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:06.958877 kubelet[2608]: E0213 15:28:06.958816 2608 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f20dd0cef541fa916ded40902978e512025f116521ec7f105b4cd1d798e2080\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:06.958877 kubelet[2608]: E0213 15:28:06.958871 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f20dd0cef541fa916ded40902978e512025f116521ec7f105b4cd1d798e2080\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b5dbfc55b-kt4sn" Feb 13 15:28:06.958971 kubelet[2608]: E0213 15:28:06.958890 2608 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f20dd0cef541fa916ded40902978e512025f116521ec7f105b4cd1d798e2080\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b5dbfc55b-kt4sn" Feb 13 15:28:06.958971 kubelet[2608]: E0213 15:28:06.958931 2608 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-5b5dbfc55b-kt4sn_calico-apiserver(086f2ebd-d6e8-46e2-831d-0f37b85724a2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5b5dbfc55b-kt4sn_calico-apiserver(086f2ebd-d6e8-46e2-831d-0f37b85724a2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3f20dd0cef541fa916ded40902978e512025f116521ec7f105b4cd1d798e2080\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b5dbfc55b-kt4sn" podUID="086f2ebd-d6e8-46e2-831d-0f37b85724a2" Feb 13 15:28:06.959043 containerd[1457]: time="2025-02-13T15:28:06.958869673Z" level=error msg="Failed to destroy network for sandbox \"daae8dd6eeb39b0c2ba55118dc887b99adeb526889c9a3d6f932e9903afa2435\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:06.959219 containerd[1457]: time="2025-02-13T15:28:06.959126143Z" level=error msg="encountered an error cleaning up failed sandbox \"daae8dd6eeb39b0c2ba55118dc887b99adeb526889c9a3d6f932e9903afa2435\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:06.959219 containerd[1457]: time="2025-02-13T15:28:06.959171028Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-59ns4,Uid:8c67a7be-7144-4f11-b45e-f04dfd3de75c,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"daae8dd6eeb39b0c2ba55118dc887b99adeb526889c9a3d6f932e9903afa2435\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:06.959537 kubelet[2608]: E0213 15:28:06.959452 2608 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"daae8dd6eeb39b0c2ba55118dc887b99adeb526889c9a3d6f932e9903afa2435\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:06.959579 kubelet[2608]: E0213 15:28:06.959557 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"daae8dd6eeb39b0c2ba55118dc887b99adeb526889c9a3d6f932e9903afa2435\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-59ns4" Feb 13 15:28:06.959609 kubelet[2608]: E0213 15:28:06.959577 2608 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"daae8dd6eeb39b0c2ba55118dc887b99adeb526889c9a3d6f932e9903afa2435\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-59ns4" Feb 13 15:28:06.959652 kubelet[2608]: E0213 15:28:06.959603 2608 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-59ns4_calico-system(8c67a7be-7144-4f11-b45e-f04dfd3de75c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-59ns4_calico-system(8c67a7be-7144-4f11-b45e-f04dfd3de75c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"daae8dd6eeb39b0c2ba55118dc887b99adeb526889c9a3d6f932e9903afa2435\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-59ns4" podUID="8c67a7be-7144-4f11-b45e-f04dfd3de75c" Feb 13 15:28:06.965195 containerd[1457]: time="2025-02-13T15:28:06.965049467Z" level=error msg="Failed to destroy network for sandbox \"38a3f3da6737f9ca3a00e18f45dbf1f049602a27f165bd52b905645c672743cc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:06.965548 containerd[1457]: time="2025-02-13T15:28:06.965515121Z" level=error msg="encountered an error cleaning up failed sandbox \"38a3f3da6737f9ca3a00e18f45dbf1f049602a27f165bd52b905645c672743cc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:06.965764 containerd[1457]: time="2025-02-13T15:28:06.965666779Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-ccbb9dcd9-2n9js,Uid:7201f0b7-6fae-4b37-8849-0e2e56956168,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"38a3f3da6737f9ca3a00e18f45dbf1f049602a27f165bd52b905645c672743cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:06.966113 kubelet[2608]: E0213 15:28:06.966065 2608 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"38a3f3da6737f9ca3a00e18f45dbf1f049602a27f165bd52b905645c672743cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:06.966177 kubelet[2608]: E0213 15:28:06.966115 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38a3f3da6737f9ca3a00e18f45dbf1f049602a27f165bd52b905645c672743cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-ccbb9dcd9-2n9js" Feb 13 15:28:06.966177 kubelet[2608]: E0213 15:28:06.966133 2608 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38a3f3da6737f9ca3a00e18f45dbf1f049602a27f165bd52b905645c672743cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-ccbb9dcd9-2n9js" Feb 13 15:28:06.966177 kubelet[2608]: E0213 15:28:06.966164 2608 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-ccbb9dcd9-2n9js_calico-system(7201f0b7-6fae-4b37-8849-0e2e56956168)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-ccbb9dcd9-2n9js_calico-system(7201f0b7-6fae-4b37-8849-0e2e56956168)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"38a3f3da6737f9ca3a00e18f45dbf1f049602a27f165bd52b905645c672743cc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/calico-kube-controllers-ccbb9dcd9-2n9js" podUID="7201f0b7-6fae-4b37-8849-0e2e56956168" Feb 13 15:28:06.969468 containerd[1457]: time="2025-02-13T15:28:06.969431814Z" level=error msg="Failed to destroy network for sandbox \"ea7024231cf1f96c1a2263eb96f69e734b18be76b300f9a0314c4df3b994a6d9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:06.970241 containerd[1457]: time="2025-02-13T15:28:06.970077528Z" level=error msg="encountered an error cleaning up failed sandbox \"ea7024231cf1f96c1a2263eb96f69e734b18be76b300f9a0314c4df3b994a6d9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:06.970241 containerd[1457]: time="2025-02-13T15:28:06.970129934Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b5dbfc55b-xsmtx,Uid:2ce64576-1ac8-4271-89c9-a8de4b77d706,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"ea7024231cf1f96c1a2263eb96f69e734b18be76b300f9a0314c4df3b994a6d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:06.970366 kubelet[2608]: E0213 15:28:06.970282 2608 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea7024231cf1f96c1a2263eb96f69e734b18be76b300f9a0314c4df3b994a6d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:06.970366 
kubelet[2608]: E0213 15:28:06.970329 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea7024231cf1f96c1a2263eb96f69e734b18be76b300f9a0314c4df3b994a6d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b5dbfc55b-xsmtx" Feb 13 15:28:06.970366 kubelet[2608]: E0213 15:28:06.970348 2608 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea7024231cf1f96c1a2263eb96f69e734b18be76b300f9a0314c4df3b994a6d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b5dbfc55b-xsmtx" Feb 13 15:28:06.970492 kubelet[2608]: E0213 15:28:06.970391 2608 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5b5dbfc55b-xsmtx_calico-apiserver(2ce64576-1ac8-4271-89c9-a8de4b77d706)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5b5dbfc55b-xsmtx_calico-apiserver(2ce64576-1ac8-4271-89c9-a8de4b77d706)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ea7024231cf1f96c1a2263eb96f69e734b18be76b300f9a0314c4df3b994a6d9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b5dbfc55b-xsmtx" podUID="2ce64576-1ac8-4271-89c9-a8de4b77d706" Feb 13 15:28:06.978105 containerd[1457]: time="2025-02-13T15:28:06.978027487Z" level=error msg="Failed to destroy network for sandbox 
\"31240264793868d5f6753603cc9e8db3c437501e3593f5eaf3c12fbc398703a1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:06.979359 containerd[1457]: time="2025-02-13T15:28:06.979235546Z" level=error msg="encountered an error cleaning up failed sandbox \"31240264793868d5f6753603cc9e8db3c437501e3593f5eaf3c12fbc398703a1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:06.979433 containerd[1457]: time="2025-02-13T15:28:06.979365361Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-j2fhf,Uid:d26de4e2-c62e-4d8a-96e0-edbb9492094a,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"31240264793868d5f6753603cc9e8db3c437501e3593f5eaf3c12fbc398703a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:06.979808 kubelet[2608]: E0213 15:28:06.979729 2608 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31240264793868d5f6753603cc9e8db3c437501e3593f5eaf3c12fbc398703a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:06.979808 kubelet[2608]: E0213 15:28:06.979785 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31240264793868d5f6753603cc9e8db3c437501e3593f5eaf3c12fbc398703a1\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-j2fhf" Feb 13 15:28:06.979925 kubelet[2608]: E0213 15:28:06.979810 2608 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31240264793868d5f6753603cc9e8db3c437501e3593f5eaf3c12fbc398703a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-j2fhf" Feb 13 15:28:06.979925 kubelet[2608]: E0213 15:28:06.979857 2608 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-j2fhf_kube-system(d26de4e2-c62e-4d8a-96e0-edbb9492094a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-j2fhf_kube-system(d26de4e2-c62e-4d8a-96e0-edbb9492094a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"31240264793868d5f6753603cc9e8db3c437501e3593f5eaf3c12fbc398703a1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-j2fhf" podUID="d26de4e2-c62e-4d8a-96e0-edbb9492094a" Feb 13 15:28:07.251395 systemd[1]: run-netns-cni\x2dd24e8e52\x2dc6db\x2db773\x2d0e18\x2d040754279a20.mount: Deactivated successfully. Feb 13 15:28:07.251479 systemd[1]: run-netns-cni\x2d693cd2d5\x2da5ef\x2d7029\x2da580\x2d2fee44479f4b.mount: Deactivated successfully. 
Feb 13 15:28:07.625081 kubelet[2608]: I0213 15:28:07.625041 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38a3f3da6737f9ca3a00e18f45dbf1f049602a27f165bd52b905645c672743cc" Feb 13 15:28:07.625850 containerd[1457]: time="2025-02-13T15:28:07.625813392Z" level=info msg="StopPodSandbox for \"38a3f3da6737f9ca3a00e18f45dbf1f049602a27f165bd52b905645c672743cc\"" Feb 13 15:28:07.626221 containerd[1457]: time="2025-02-13T15:28:07.626059779Z" level=info msg="Ensure that sandbox 38a3f3da6737f9ca3a00e18f45dbf1f049602a27f165bd52b905645c672743cc in task-service has been cleanup successfully" Feb 13 15:28:07.626838 containerd[1457]: time="2025-02-13T15:28:07.626799142Z" level=info msg="TearDown network for sandbox \"38a3f3da6737f9ca3a00e18f45dbf1f049602a27f165bd52b905645c672743cc\" successfully" Feb 13 15:28:07.626838 containerd[1457]: time="2025-02-13T15:28:07.626828345Z" level=info msg="StopPodSandbox for \"38a3f3da6737f9ca3a00e18f45dbf1f049602a27f165bd52b905645c672743cc\" returns successfully" Feb 13 15:28:07.627804 containerd[1457]: time="2025-02-13T15:28:07.627465896Z" level=info msg="StopPodSandbox for \"776f35edebf581700803135a5d9ab65368a38fb844db753a8fd392cbe7c754d3\"" Feb 13 15:28:07.628606 containerd[1457]: time="2025-02-13T15:28:07.628347555Z" level=info msg="TearDown network for sandbox \"776f35edebf581700803135a5d9ab65368a38fb844db753a8fd392cbe7c754d3\" successfully" Feb 13 15:28:07.628606 containerd[1457]: time="2025-02-13T15:28:07.628371878Z" level=info msg="StopPodSandbox for \"776f35edebf581700803135a5d9ab65368a38fb844db753a8fd392cbe7c754d3\" returns successfully" Feb 13 15:28:07.628966 systemd[1]: run-netns-cni\x2d8109f543\x2db7f6\x2d3750\x2d993a\x2dc911e81cb618.mount: Deactivated successfully. 
Feb 13 15:28:07.629820 containerd[1457]: time="2025-02-13T15:28:07.629766313Z" level=info msg="StopPodSandbox for \"77a75dac9c9f8165f19ab22aa953024a5e1c10959673b5e0528dc465b8c7c35e\"" Feb 13 15:28:07.629889 containerd[1457]: time="2025-02-13T15:28:07.629860684Z" level=info msg="TearDown network for sandbox \"77a75dac9c9f8165f19ab22aa953024a5e1c10959673b5e0528dc465b8c7c35e\" successfully" Feb 13 15:28:07.629889 containerd[1457]: time="2025-02-13T15:28:07.629873605Z" level=info msg="StopPodSandbox for \"77a75dac9c9f8165f19ab22aa953024a5e1c10959673b5e0528dc465b8c7c35e\" returns successfully" Feb 13 15:28:07.630649 containerd[1457]: time="2025-02-13T15:28:07.630460031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-ccbb9dcd9-2n9js,Uid:7201f0b7-6fae-4b37-8849-0e2e56956168,Namespace:calico-system,Attempt:3,}" Feb 13 15:28:07.630799 kubelet[2608]: I0213 15:28:07.630717 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2452a586ae430ec9ed425f170afb0d9071b24ad7295b6f6a4477e012c9d521e" Feb 13 15:28:07.631355 containerd[1457]: time="2025-02-13T15:28:07.631299444Z" level=info msg="StopPodSandbox for \"b2452a586ae430ec9ed425f170afb0d9071b24ad7295b6f6a4477e012c9d521e\"" Feb 13 15:28:07.631507 containerd[1457]: time="2025-02-13T15:28:07.631483865Z" level=info msg="Ensure that sandbox b2452a586ae430ec9ed425f170afb0d9071b24ad7295b6f6a4477e012c9d521e in task-service has been cleanup successfully" Feb 13 15:28:07.632334 containerd[1457]: time="2025-02-13T15:28:07.631807381Z" level=info msg="TearDown network for sandbox \"b2452a586ae430ec9ed425f170afb0d9071b24ad7295b6f6a4477e012c9d521e\" successfully" Feb 13 15:28:07.632334 containerd[1457]: time="2025-02-13T15:28:07.632038127Z" level=info msg="StopPodSandbox for \"b2452a586ae430ec9ed425f170afb0d9071b24ad7295b6f6a4477e012c9d521e\" returns successfully" Feb 13 15:28:07.634085 kubelet[2608]: I0213 15:28:07.633836 2608 pod_container_deletor.go:80] "Container not 
found in pod's containers" containerID="daae8dd6eeb39b0c2ba55118dc887b99adeb526889c9a3d6f932e9903afa2435" Feb 13 15:28:07.634173 containerd[1457]: time="2025-02-13T15:28:07.633831647Z" level=info msg="StopPodSandbox for \"67348c9f7030a07e08918ec471c01d8d492fb7b37edf0b70d665014a3b6d6d75\"" Feb 13 15:28:07.634173 containerd[1457]: time="2025-02-13T15:28:07.633955781Z" level=info msg="TearDown network for sandbox \"67348c9f7030a07e08918ec471c01d8d492fb7b37edf0b70d665014a3b6d6d75\" successfully" Feb 13 15:28:07.634173 containerd[1457]: time="2025-02-13T15:28:07.633968062Z" level=info msg="StopPodSandbox for \"67348c9f7030a07e08918ec471c01d8d492fb7b37edf0b70d665014a3b6d6d75\" returns successfully" Feb 13 15:28:07.633860 systemd[1]: run-netns-cni\x2d98dbb1af\x2de5e4\x2d69ba\x2d318a\x2d11a9d0064c11.mount: Deactivated successfully. Feb 13 15:28:07.634681 containerd[1457]: time="2025-02-13T15:28:07.634647418Z" level=info msg="StopPodSandbox for \"daae8dd6eeb39b0c2ba55118dc887b99adeb526889c9a3d6f932e9903afa2435\"" Feb 13 15:28:07.634844 containerd[1457]: time="2025-02-13T15:28:07.634820598Z" level=info msg="Ensure that sandbox daae8dd6eeb39b0c2ba55118dc887b99adeb526889c9a3d6f932e9903afa2435 in task-service has been cleanup successfully" Feb 13 15:28:07.637028 systemd[1]: run-netns-cni\x2dbaf56635\x2d2154\x2d33f0\x2d7cc1\x2d7ddfc44c4296.mount: Deactivated successfully. 
Feb 13 15:28:07.639691 containerd[1457]: time="2025-02-13T15:28:07.639550006Z" level=info msg="TearDown network for sandbox \"daae8dd6eeb39b0c2ba55118dc887b99adeb526889c9a3d6f932e9903afa2435\" successfully" Feb 13 15:28:07.639691 containerd[1457]: time="2025-02-13T15:28:07.639588090Z" level=info msg="StopPodSandbox for \"daae8dd6eeb39b0c2ba55118dc887b99adeb526889c9a3d6f932e9903afa2435\" returns successfully" Feb 13 15:28:07.640404 containerd[1457]: time="2025-02-13T15:28:07.639929768Z" level=info msg="StopPodSandbox for \"2c09c28a03fd4472b318aa06a1ea4e03909edc65be9ace9f58b2d77cd52e637a\"" Feb 13 15:28:07.640404 containerd[1457]: time="2025-02-13T15:28:07.640024979Z" level=info msg="TearDown network for sandbox \"2c09c28a03fd4472b318aa06a1ea4e03909edc65be9ace9f58b2d77cd52e637a\" successfully" Feb 13 15:28:07.640404 containerd[1457]: time="2025-02-13T15:28:07.640034780Z" level=info msg="StopPodSandbox for \"2c09c28a03fd4472b318aa06a1ea4e03909edc65be9ace9f58b2d77cd52e637a\" returns successfully" Feb 13 15:28:07.640404 containerd[1457]: time="2025-02-13T15:28:07.640092746Z" level=info msg="StopPodSandbox for \"0e912db41237de96aedc11e4bdfeaf221d0a4eb32d2c79b569dc586de06677c4\"" Feb 13 15:28:07.640404 containerd[1457]: time="2025-02-13T15:28:07.640141552Z" level=info msg="TearDown network for sandbox \"0e912db41237de96aedc11e4bdfeaf221d0a4eb32d2c79b569dc586de06677c4\" successfully" Feb 13 15:28:07.640404 containerd[1457]: time="2025-02-13T15:28:07.640149113Z" level=info msg="StopPodSandbox for \"0e912db41237de96aedc11e4bdfeaf221d0a4eb32d2c79b569dc586de06677c4\" returns successfully" Feb 13 15:28:07.640534 kubelet[2608]: E0213 15:28:07.640315 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:07.640577 containerd[1457]: time="2025-02-13T15:28:07.640418583Z" level=info msg="StopPodSandbox for 
\"63565dfc82761cf07ac785a68455c18a81508c45b3108b2a758b41876e99d6f2\"" Feb 13 15:28:07.640577 containerd[1457]: time="2025-02-13T15:28:07.640500232Z" level=info msg="TearDown network for sandbox \"63565dfc82761cf07ac785a68455c18a81508c45b3108b2a758b41876e99d6f2\" successfully" Feb 13 15:28:07.640577 containerd[1457]: time="2025-02-13T15:28:07.640511873Z" level=info msg="StopPodSandbox for \"63565dfc82761cf07ac785a68455c18a81508c45b3108b2a758b41876e99d6f2\" returns successfully" Feb 13 15:28:07.640833 containerd[1457]: time="2025-02-13T15:28:07.640611684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vhgmw,Uid:859d37ac-44c9-4b92-854b-e6ca0540dbd1,Namespace:kube-system,Attempt:3,}" Feb 13 15:28:07.641242 containerd[1457]: time="2025-02-13T15:28:07.641182628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-59ns4,Uid:8c67a7be-7144-4f11-b45e-f04dfd3de75c,Namespace:calico-system,Attempt:3,}" Feb 13 15:28:07.642188 kubelet[2608]: I0213 15:28:07.642157 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea7024231cf1f96c1a2263eb96f69e734b18be76b300f9a0314c4df3b994a6d9" Feb 13 15:28:07.642712 containerd[1457]: time="2025-02-13T15:28:07.642684476Z" level=info msg="StopPodSandbox for \"ea7024231cf1f96c1a2263eb96f69e734b18be76b300f9a0314c4df3b994a6d9\"" Feb 13 15:28:07.642944 containerd[1457]: time="2025-02-13T15:28:07.642902540Z" level=info msg="Ensure that sandbox ea7024231cf1f96c1a2263eb96f69e734b18be76b300f9a0314c4df3b994a6d9 in task-service has been cleanup successfully" Feb 13 15:28:07.643115 containerd[1457]: time="2025-02-13T15:28:07.643096482Z" level=info msg="TearDown network for sandbox \"ea7024231cf1f96c1a2263eb96f69e734b18be76b300f9a0314c4df3b994a6d9\" successfully" Feb 13 15:28:07.643156 containerd[1457]: time="2025-02-13T15:28:07.643114364Z" level=info msg="StopPodSandbox for \"ea7024231cf1f96c1a2263eb96f69e734b18be76b300f9a0314c4df3b994a6d9\" returns successfully" 
Feb 13 15:28:07.644199 containerd[1457]: time="2025-02-13T15:28:07.644170162Z" level=info msg="StopPodSandbox for \"9f518b19d3683f9344fecab0db2f16d46f20150c992e387844d1811fc72fa170\"" Feb 13 15:28:07.644285 containerd[1457]: time="2025-02-13T15:28:07.644269493Z" level=info msg="TearDown network for sandbox \"9f518b19d3683f9344fecab0db2f16d46f20150c992e387844d1811fc72fa170\" successfully" Feb 13 15:28:07.644322 containerd[1457]: time="2025-02-13T15:28:07.644284454Z" level=info msg="StopPodSandbox for \"9f518b19d3683f9344fecab0db2f16d46f20150c992e387844d1811fc72fa170\" returns successfully" Feb 13 15:28:07.644759 containerd[1457]: time="2025-02-13T15:28:07.644728304Z" level=info msg="StopPodSandbox for \"262a4443b576ec39fe8e513ab34a3dc1609fc6bb9ac7e9f491ab2ed971993fd6\"" Feb 13 15:28:07.645024 containerd[1457]: time="2025-02-13T15:28:07.644957089Z" level=info msg="TearDown network for sandbox \"262a4443b576ec39fe8e513ab34a3dc1609fc6bb9ac7e9f491ab2ed971993fd6\" successfully" Feb 13 15:28:07.645024 containerd[1457]: time="2025-02-13T15:28:07.644976492Z" level=info msg="StopPodSandbox for \"262a4443b576ec39fe8e513ab34a3dc1609fc6bb9ac7e9f491ab2ed971993fd6\" returns successfully" Feb 13 15:28:07.645598 systemd[1]: run-netns-cni\x2d0f388daf\x2d107a\x2d0818\x2de150\x2d8f048de58798.mount: Deactivated successfully. 
Feb 13 15:28:07.647035 containerd[1457]: time="2025-02-13T15:28:07.646906507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b5dbfc55b-xsmtx,Uid:2ce64576-1ac8-4271-89c9-a8de4b77d706,Namespace:calico-apiserver,Attempt:3,}" Feb 13 15:28:07.647223 kubelet[2608]: I0213 15:28:07.647160 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31240264793868d5f6753603cc9e8db3c437501e3593f5eaf3c12fbc398703a1" Feb 13 15:28:07.648386 containerd[1457]: time="2025-02-13T15:28:07.648354549Z" level=info msg="StopPodSandbox for \"31240264793868d5f6753603cc9e8db3c437501e3593f5eaf3c12fbc398703a1\"" Feb 13 15:28:07.649670 containerd[1457]: time="2025-02-13T15:28:07.649501357Z" level=info msg="Ensure that sandbox 31240264793868d5f6753603cc9e8db3c437501e3593f5eaf3c12fbc398703a1 in task-service has been cleanup successfully" Feb 13 15:28:07.650317 kubelet[2608]: I0213 15:28:07.650234 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f20dd0cef541fa916ded40902978e512025f116521ec7f105b4cd1d798e2080" Feb 13 15:28:07.650386 containerd[1457]: time="2025-02-13T15:28:07.650276563Z" level=info msg="TearDown network for sandbox \"31240264793868d5f6753603cc9e8db3c437501e3593f5eaf3c12fbc398703a1\" successfully" Feb 13 15:28:07.650386 containerd[1457]: time="2025-02-13T15:28:07.650295566Z" level=info msg="StopPodSandbox for \"31240264793868d5f6753603cc9e8db3c437501e3593f5eaf3c12fbc398703a1\" returns successfully" Feb 13 15:28:07.653248 containerd[1457]: time="2025-02-13T15:28:07.653050193Z" level=info msg="StopPodSandbox for \"0cdfe4900b8aded4d3dc434c00139a606306819ed296214d867b55f86ba8ee4a\"" Feb 13 15:28:07.653248 containerd[1457]: time="2025-02-13T15:28:07.653074996Z" level=info msg="StopPodSandbox for \"3f20dd0cef541fa916ded40902978e512025f116521ec7f105b4cd1d798e2080\"" Feb 13 15:28:07.653248 containerd[1457]: time="2025-02-13T15:28:07.653147884Z" level=info msg="TearDown network for sandbox 
\"0cdfe4900b8aded4d3dc434c00139a606306819ed296214d867b55f86ba8ee4a\" successfully" Feb 13 15:28:07.653248 containerd[1457]: time="2025-02-13T15:28:07.653157485Z" level=info msg="StopPodSandbox for \"0cdfe4900b8aded4d3dc434c00139a606306819ed296214d867b55f86ba8ee4a\" returns successfully" Feb 13 15:28:07.654400 containerd[1457]: time="2025-02-13T15:28:07.653323584Z" level=info msg="Ensure that sandbox 3f20dd0cef541fa916ded40902978e512025f116521ec7f105b4cd1d798e2080 in task-service has been cleanup successfully" Feb 13 15:28:07.654400 containerd[1457]: time="2025-02-13T15:28:07.653609656Z" level=info msg="StopPodSandbox for \"400e05d09f7638f6370144bc7c30317522f5d53fca24796970245605aaf907e8\"" Feb 13 15:28:07.654400 containerd[1457]: time="2025-02-13T15:28:07.653703106Z" level=info msg="TearDown network for sandbox \"400e05d09f7638f6370144bc7c30317522f5d53fca24796970245605aaf907e8\" successfully" Feb 13 15:28:07.654400 containerd[1457]: time="2025-02-13T15:28:07.653714907Z" level=info msg="StopPodSandbox for \"400e05d09f7638f6370144bc7c30317522f5d53fca24796970245605aaf907e8\" returns successfully" Feb 13 15:28:07.654400 containerd[1457]: time="2025-02-13T15:28:07.654103351Z" level=info msg="TearDown network for sandbox \"3f20dd0cef541fa916ded40902978e512025f116521ec7f105b4cd1d798e2080\" successfully" Feb 13 15:28:07.654400 containerd[1457]: time="2025-02-13T15:28:07.654151516Z" level=info msg="StopPodSandbox for \"3f20dd0cef541fa916ded40902978e512025f116521ec7f105b4cd1d798e2080\" returns successfully" Feb 13 15:28:07.654540 kubelet[2608]: E0213 15:28:07.653940 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:07.654815 containerd[1457]: time="2025-02-13T15:28:07.654634130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-j2fhf,Uid:d26de4e2-c62e-4d8a-96e0-edbb9492094a,Namespace:kube-system,Attempt:3,}" 
Feb 13 15:28:07.654955 containerd[1457]: time="2025-02-13T15:28:07.654898920Z" level=info msg="StopPodSandbox for \"9bde7141846a528b8406457cadcce6da8913cc6f999d44f2fda3999be82f378f\"" Feb 13 15:28:07.655004 containerd[1457]: time="2025-02-13T15:28:07.654986769Z" level=info msg="TearDown network for sandbox \"9bde7141846a528b8406457cadcce6da8913cc6f999d44f2fda3999be82f378f\" successfully" Feb 13 15:28:07.655004 containerd[1457]: time="2025-02-13T15:28:07.654998771Z" level=info msg="StopPodSandbox for \"9bde7141846a528b8406457cadcce6da8913cc6f999d44f2fda3999be82f378f\" returns successfully" Feb 13 15:28:07.655446 containerd[1457]: time="2025-02-13T15:28:07.655290363Z" level=info msg="StopPodSandbox for \"44d4ae110854cc57cbfa55e39401c987d33916faeb2710cc87a549e4d94657b0\"" Feb 13 15:28:07.655446 containerd[1457]: time="2025-02-13T15:28:07.655378813Z" level=info msg="TearDown network for sandbox \"44d4ae110854cc57cbfa55e39401c987d33916faeb2710cc87a549e4d94657b0\" successfully" Feb 13 15:28:07.655446 containerd[1457]: time="2025-02-13T15:28:07.655388774Z" level=info msg="StopPodSandbox for \"44d4ae110854cc57cbfa55e39401c987d33916faeb2710cc87a549e4d94657b0\" returns successfully" Feb 13 15:28:07.655979 containerd[1457]: time="2025-02-13T15:28:07.655879669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b5dbfc55b-kt4sn,Uid:086f2ebd-d6e8-46e2-831d-0f37b85724a2,Namespace:calico-apiserver,Attempt:3,}" Feb 13 15:28:07.960639 containerd[1457]: time="2025-02-13T15:28:07.960510645Z" level=error msg="Failed to destroy network for sandbox \"f4d56aece65a454a6744a6656975af454ed23eaa118376aad704578449124dce\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:07.961384 containerd[1457]: time="2025-02-13T15:28:07.961341178Z" level=error msg="encountered an error cleaning up failed sandbox 
\"f4d56aece65a454a6744a6656975af454ed23eaa118376aad704578449124dce\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:07.961580 containerd[1457]: time="2025-02-13T15:28:07.961557482Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-j2fhf,Uid:d26de4e2-c62e-4d8a-96e0-edbb9492094a,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"f4d56aece65a454a6744a6656975af454ed23eaa118376aad704578449124dce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:07.962728 kubelet[2608]: E0213 15:28:07.962562 2608 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4d56aece65a454a6744a6656975af454ed23eaa118376aad704578449124dce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:07.962728 kubelet[2608]: E0213 15:28:07.962668 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4d56aece65a454a6744a6656975af454ed23eaa118376aad704578449124dce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-j2fhf" Feb 13 15:28:07.962728 kubelet[2608]: E0213 15:28:07.962691 2608 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"f4d56aece65a454a6744a6656975af454ed23eaa118376aad704578449124dce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-j2fhf" Feb 13 15:28:07.962872 kubelet[2608]: E0213 15:28:07.962727 2608 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-j2fhf_kube-system(d26de4e2-c62e-4d8a-96e0-edbb9492094a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-j2fhf_kube-system(d26de4e2-c62e-4d8a-96e0-edbb9492094a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f4d56aece65a454a6744a6656975af454ed23eaa118376aad704578449124dce\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-j2fhf" podUID="d26de4e2-c62e-4d8a-96e0-edbb9492094a" Feb 13 15:28:07.974047 containerd[1457]: time="2025-02-13T15:28:07.973918942Z" level=error msg="Failed to destroy network for sandbox \"cc21ee81810fc473d761df47ca40dfc07bf603b78f073619580e97c4954181e5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:07.974613 containerd[1457]: time="2025-02-13T15:28:07.974577776Z" level=error msg="Failed to destroy network for sandbox \"0477f5248a84e6b1609e010745ea4791329b3811f61cb4ca28c5f7d4c62c942f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:07.975588 containerd[1457]: time="2025-02-13T15:28:07.975551205Z" level=error msg="encountered an error cleaning up 
failed sandbox \"cc21ee81810fc473d761df47ca40dfc07bf603b78f073619580e97c4954181e5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:07.975676 containerd[1457]: time="2025-02-13T15:28:07.975654616Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b5dbfc55b-kt4sn,Uid:086f2ebd-d6e8-46e2-831d-0f37b85724a2,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"cc21ee81810fc473d761df47ca40dfc07bf603b78f073619580e97c4954181e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:07.976283 kubelet[2608]: E0213 15:28:07.975918 2608 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc21ee81810fc473d761df47ca40dfc07bf603b78f073619580e97c4954181e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:07.976283 kubelet[2608]: E0213 15:28:07.975983 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc21ee81810fc473d761df47ca40dfc07bf603b78f073619580e97c4954181e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b5dbfc55b-kt4sn" Feb 13 15:28:07.976283 kubelet[2608]: E0213 15:28:07.976003 2608 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"cc21ee81810fc473d761df47ca40dfc07bf603b78f073619580e97c4954181e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b5dbfc55b-kt4sn" Feb 13 15:28:07.976569 kubelet[2608]: E0213 15:28:07.976048 2608 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5b5dbfc55b-kt4sn_calico-apiserver(086f2ebd-d6e8-46e2-831d-0f37b85724a2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5b5dbfc55b-kt4sn_calico-apiserver(086f2ebd-d6e8-46e2-831d-0f37b85724a2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cc21ee81810fc473d761df47ca40dfc07bf603b78f073619580e97c4954181e5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b5dbfc55b-kt4sn" podUID="086f2ebd-d6e8-46e2-831d-0f37b85724a2" Feb 13 15:28:07.976674 containerd[1457]: time="2025-02-13T15:28:07.976325251Z" level=error msg="encountered an error cleaning up failed sandbox \"0477f5248a84e6b1609e010745ea4791329b3811f61cb4ca28c5f7d4c62c942f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:07.978686 containerd[1457]: time="2025-02-13T15:28:07.977937231Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-ccbb9dcd9-2n9js,Uid:7201f0b7-6fae-4b37-8849-0e2e56956168,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox 
\"0477f5248a84e6b1609e010745ea4791329b3811f61cb4ca28c5f7d4c62c942f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:07.978991 kubelet[2608]: E0213 15:28:07.978918 2608 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0477f5248a84e6b1609e010745ea4791329b3811f61cb4ca28c5f7d4c62c942f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:07.979231 kubelet[2608]: E0213 15:28:07.979097 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0477f5248a84e6b1609e010745ea4791329b3811f61cb4ca28c5f7d4c62c942f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-ccbb9dcd9-2n9js" Feb 13 15:28:07.979231 kubelet[2608]: E0213 15:28:07.979129 2608 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0477f5248a84e6b1609e010745ea4791329b3811f61cb4ca28c5f7d4c62c942f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-ccbb9dcd9-2n9js" Feb 13 15:28:07.979231 kubelet[2608]: E0213 15:28:07.979182 2608 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-ccbb9dcd9-2n9js_calico-system(7201f0b7-6fae-4b37-8849-0e2e56956168)\" with CreatePodSandboxError: 
\"Failed to create sandbox for pod \\\"calico-kube-controllers-ccbb9dcd9-2n9js_calico-system(7201f0b7-6fae-4b37-8849-0e2e56956168)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0477f5248a84e6b1609e010745ea4791329b3811f61cb4ca28c5f7d4c62c942f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-ccbb9dcd9-2n9js" podUID="7201f0b7-6fae-4b37-8849-0e2e56956168" Feb 13 15:28:07.992899 containerd[1457]: time="2025-02-13T15:28:07.992763927Z" level=error msg="Failed to destroy network for sandbox \"2ae1783dc181bf0a101c7cecd003149a3d0bc99235467eacc57dcc150794979a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:07.994349 containerd[1457]: time="2025-02-13T15:28:07.994154362Z" level=error msg="encountered an error cleaning up failed sandbox \"2ae1783dc181bf0a101c7cecd003149a3d0bc99235467eacc57dcc150794979a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:07.994437 containerd[1457]: time="2025-02-13T15:28:07.994399349Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-59ns4,Uid:8c67a7be-7144-4f11-b45e-f04dfd3de75c,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"2ae1783dc181bf0a101c7cecd003149a3d0bc99235467eacc57dcc150794979a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:07.994823 kubelet[2608]: E0213 
15:28:07.994650 2608 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ae1783dc181bf0a101c7cecd003149a3d0bc99235467eacc57dcc150794979a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:07.994823 kubelet[2608]: E0213 15:28:07.994711 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ae1783dc181bf0a101c7cecd003149a3d0bc99235467eacc57dcc150794979a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-59ns4" Feb 13 15:28:07.994823 kubelet[2608]: E0213 15:28:07.994729 2608 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ae1783dc181bf0a101c7cecd003149a3d0bc99235467eacc57dcc150794979a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-59ns4" Feb 13 15:28:07.994944 kubelet[2608]: E0213 15:28:07.994778 2608 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-59ns4_calico-system(8c67a7be-7144-4f11-b45e-f04dfd3de75c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-59ns4_calico-system(8c67a7be-7144-4f11-b45e-f04dfd3de75c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2ae1783dc181bf0a101c7cecd003149a3d0bc99235467eacc57dcc150794979a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-59ns4" podUID="8c67a7be-7144-4f11-b45e-f04dfd3de75c" Feb 13 15:28:07.995145 containerd[1457]: time="2025-02-13T15:28:07.995097467Z" level=error msg="Failed to destroy network for sandbox \"1fdc9959f235af76b82180f39dc0b3634b4d66128bffd3617d8de10ff163e7fb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:07.995441 containerd[1457]: time="2025-02-13T15:28:07.995413663Z" level=error msg="encountered an error cleaning up failed sandbox \"1fdc9959f235af76b82180f39dc0b3634b4d66128bffd3617d8de10ff163e7fb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:07.995487 containerd[1457]: time="2025-02-13T15:28:07.995463588Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vhgmw,Uid:859d37ac-44c9-4b92-854b-e6ca0540dbd1,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"1fdc9959f235af76b82180f39dc0b3634b4d66128bffd3617d8de10ff163e7fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:07.995659 kubelet[2608]: E0213 15:28:07.995630 2608 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fdc9959f235af76b82180f39dc0b3634b4d66128bffd3617d8de10ff163e7fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Feb 13 15:28:07.995722 kubelet[2608]: E0213 15:28:07.995672 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fdc9959f235af76b82180f39dc0b3634b4d66128bffd3617d8de10ff163e7fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-vhgmw" Feb 13 15:28:07.995722 kubelet[2608]: E0213 15:28:07.995689 2608 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fdc9959f235af76b82180f39dc0b3634b4d66128bffd3617d8de10ff163e7fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-vhgmw" Feb 13 15:28:07.995780 kubelet[2608]: E0213 15:28:07.995724 2608 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-vhgmw_kube-system(859d37ac-44c9-4b92-854b-e6ca0540dbd1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-vhgmw_kube-system(859d37ac-44c9-4b92-854b-e6ca0540dbd1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1fdc9959f235af76b82180f39dc0b3634b4d66128bffd3617d8de10ff163e7fb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-vhgmw" podUID="859d37ac-44c9-4b92-854b-e6ca0540dbd1" Feb 13 15:28:08.004807 containerd[1457]: time="2025-02-13T15:28:08.004760815Z" level=error msg="Failed to destroy network for sandbox 
\"b203324d387be4f7cabb41c7345bf05412ccb11d6f2993f88f1edc0c931d90ab\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:08.005455 containerd[1457]: time="2025-02-13T15:28:08.005414406Z" level=error msg="encountered an error cleaning up failed sandbox \"b203324d387be4f7cabb41c7345bf05412ccb11d6f2993f88f1edc0c931d90ab\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:08.005503 containerd[1457]: time="2025-02-13T15:28:08.005482853Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b5dbfc55b-xsmtx,Uid:2ce64576-1ac8-4271-89c9-a8de4b77d706,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"b203324d387be4f7cabb41c7345bf05412ccb11d6f2993f88f1edc0c931d90ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:08.005799 kubelet[2608]: E0213 15:28:08.005765 2608 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b203324d387be4f7cabb41c7345bf05412ccb11d6f2993f88f1edc0c931d90ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:08.005838 kubelet[2608]: E0213 15:28:08.005821 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b203324d387be4f7cabb41c7345bf05412ccb11d6f2993f88f1edc0c931d90ab\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b5dbfc55b-xsmtx" Feb 13 15:28:08.005873 kubelet[2608]: E0213 15:28:08.005839 2608 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b203324d387be4f7cabb41c7345bf05412ccb11d6f2993f88f1edc0c931d90ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b5dbfc55b-xsmtx" Feb 13 15:28:08.005938 kubelet[2608]: E0213 15:28:08.005874 2608 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5b5dbfc55b-xsmtx_calico-apiserver(2ce64576-1ac8-4271-89c9-a8de4b77d706)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5b5dbfc55b-xsmtx_calico-apiserver(2ce64576-1ac8-4271-89c9-a8de4b77d706)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b203324d387be4f7cabb41c7345bf05412ccb11d6f2993f88f1edc0c931d90ab\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b5dbfc55b-xsmtx" podUID="2ce64576-1ac8-4271-89c9-a8de4b77d706" Feb 13 15:28:08.251918 systemd[1]: run-netns-cni\x2da4c1a19c\x2d0644\x2dbc98\x2dd7d0\x2d7900ae85eec5.mount: Deactivated successfully. Feb 13 15:28:08.252003 systemd[1]: run-netns-cni\x2dc60ebeab\x2d4021\x2da67c\x2d183e\x2da5cade62744e.mount: Deactivated successfully. Feb 13 15:28:08.507274 systemd[1]: Started sshd@8-10.0.0.91:22-10.0.0.1:49540.service - OpenSSH per-connection server daemon (10.0.0.1:49540). 
Feb 13 15:28:08.557827 sshd[4284]: Accepted publickey for core from 10.0.0.1 port 49540 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:28:08.559463 sshd-session[4284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:28:08.567133 systemd-logind[1422]: New session 9 of user core. Feb 13 15:28:08.574836 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 15:28:08.657867 kubelet[2608]: I0213 15:28:08.657831 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4d56aece65a454a6744a6656975af454ed23eaa118376aad704578449124dce" Feb 13 15:28:08.661676 containerd[1457]: time="2025-02-13T15:28:08.660234750Z" level=info msg="StopPodSandbox for \"f4d56aece65a454a6744a6656975af454ed23eaa118376aad704578449124dce\"" Feb 13 15:28:08.661676 containerd[1457]: time="2025-02-13T15:28:08.660470136Z" level=info msg="Ensure that sandbox f4d56aece65a454a6744a6656975af454ed23eaa118376aad704578449124dce in task-service has been cleanup successfully" Feb 13 15:28:08.661676 containerd[1457]: time="2025-02-13T15:28:08.661566574Z" level=info msg="TearDown network for sandbox \"f4d56aece65a454a6744a6656975af454ed23eaa118376aad704578449124dce\" successfully" Feb 13 15:28:08.661676 containerd[1457]: time="2025-02-13T15:28:08.661589017Z" level=info msg="StopPodSandbox for \"f4d56aece65a454a6744a6656975af454ed23eaa118376aad704578449124dce\" returns successfully" Feb 13 15:28:08.662249 containerd[1457]: time="2025-02-13T15:28:08.662219765Z" level=info msg="StopPodSandbox for \"31240264793868d5f6753603cc9e8db3c437501e3593f5eaf3c12fbc398703a1\"" Feb 13 15:28:08.662350 containerd[1457]: time="2025-02-13T15:28:08.662332017Z" level=info msg="TearDown network for sandbox \"31240264793868d5f6753603cc9e8db3c437501e3593f5eaf3c12fbc398703a1\" successfully" Feb 13 15:28:08.662381 containerd[1457]: time="2025-02-13T15:28:08.662345898Z" level=info msg="StopPodSandbox for 
\"31240264793868d5f6753603cc9e8db3c437501e3593f5eaf3c12fbc398703a1\" returns successfully" Feb 13 15:28:08.663574 systemd[1]: run-netns-cni\x2d980bb0aa\x2df687\x2d07ed\x2da77e\x2d0f1c7bbeb167.mount: Deactivated successfully. Feb 13 15:28:08.665200 containerd[1457]: time="2025-02-13T15:28:08.665161243Z" level=info msg="StopPodSandbox for \"0cdfe4900b8aded4d3dc434c00139a606306819ed296214d867b55f86ba8ee4a\"" Feb 13 15:28:08.665400 containerd[1457]: time="2025-02-13T15:28:08.665379826Z" level=info msg="TearDown network for sandbox \"0cdfe4900b8aded4d3dc434c00139a606306819ed296214d867b55f86ba8ee4a\" successfully" Feb 13 15:28:08.665478 containerd[1457]: time="2025-02-13T15:28:08.665462915Z" level=info msg="StopPodSandbox for \"0cdfe4900b8aded4d3dc434c00139a606306819ed296214d867b55f86ba8ee4a\" returns successfully" Feb 13 15:28:08.666280 containerd[1457]: time="2025-02-13T15:28:08.666239759Z" level=info msg="StopPodSandbox for \"400e05d09f7638f6370144bc7c30317522f5d53fca24796970245605aaf907e8\"" Feb 13 15:28:08.666367 containerd[1457]: time="2025-02-13T15:28:08.666345130Z" level=info msg="TearDown network for sandbox \"400e05d09f7638f6370144bc7c30317522f5d53fca24796970245605aaf907e8\" successfully" Feb 13 15:28:08.666417 containerd[1457]: time="2025-02-13T15:28:08.666399256Z" level=info msg="StopPodSandbox for \"400e05d09f7638f6370144bc7c30317522f5d53fca24796970245605aaf907e8\" returns successfully" Feb 13 15:28:08.666785 kubelet[2608]: E0213 15:28:08.666611 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:08.667406 containerd[1457]: time="2025-02-13T15:28:08.667377122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-j2fhf,Uid:d26de4e2-c62e-4d8a-96e0-edbb9492094a,Namespace:kube-system,Attempt:4,}" Feb 13 15:28:08.668855 kubelet[2608]: I0213 15:28:08.668811 2608 pod_container_deletor.go:80] "Container 
not found in pod's containers" containerID="cc21ee81810fc473d761df47ca40dfc07bf603b78f073619580e97c4954181e5" Feb 13 15:28:08.670188 containerd[1457]: time="2025-02-13T15:28:08.669347335Z" level=info msg="StopPodSandbox for \"cc21ee81810fc473d761df47ca40dfc07bf603b78f073619580e97c4954181e5\"" Feb 13 15:28:08.670188 containerd[1457]: time="2025-02-13T15:28:08.669999845Z" level=info msg="Ensure that sandbox cc21ee81810fc473d761df47ca40dfc07bf603b78f073619580e97c4954181e5 in task-service has been cleanup successfully" Feb 13 15:28:08.670786 containerd[1457]: time="2025-02-13T15:28:08.670702961Z" level=info msg="TearDown network for sandbox \"cc21ee81810fc473d761df47ca40dfc07bf603b78f073619580e97c4954181e5\" successfully" Feb 13 15:28:08.670917 containerd[1457]: time="2025-02-13T15:28:08.670857778Z" level=info msg="StopPodSandbox for \"cc21ee81810fc473d761df47ca40dfc07bf603b78f073619580e97c4954181e5\" returns successfully" Feb 13 15:28:08.671748 containerd[1457]: time="2025-02-13T15:28:08.671717751Z" level=info msg="StopPodSandbox for \"3f20dd0cef541fa916ded40902978e512025f116521ec7f105b4cd1d798e2080\"" Feb 13 15:28:08.671829 containerd[1457]: time="2025-02-13T15:28:08.671800440Z" level=info msg="TearDown network for sandbox \"3f20dd0cef541fa916ded40902978e512025f116521ec7f105b4cd1d798e2080\" successfully" Feb 13 15:28:08.671829 containerd[1457]: time="2025-02-13T15:28:08.671811161Z" level=info msg="StopPodSandbox for \"3f20dd0cef541fa916ded40902978e512025f116521ec7f105b4cd1d798e2080\" returns successfully" Feb 13 15:28:08.672107 containerd[1457]: time="2025-02-13T15:28:08.672074949Z" level=info msg="StopPodSandbox for \"9bde7141846a528b8406457cadcce6da8913cc6f999d44f2fda3999be82f378f\"" Feb 13 15:28:08.672238 containerd[1457]: time="2025-02-13T15:28:08.672223125Z" level=info msg="TearDown network for sandbox \"9bde7141846a528b8406457cadcce6da8913cc6f999d44f2fda3999be82f378f\" successfully" Feb 13 15:28:08.672287 containerd[1457]: time="2025-02-13T15:28:08.672276651Z" 
level=info msg="StopPodSandbox for \"9bde7141846a528b8406457cadcce6da8913cc6f999d44f2fda3999be82f378f\" returns successfully" Feb 13 15:28:08.675529 containerd[1457]: time="2025-02-13T15:28:08.675464396Z" level=info msg="StopPodSandbox for \"44d4ae110854cc57cbfa55e39401c987d33916faeb2710cc87a549e4d94657b0\"" Feb 13 15:28:08.676338 systemd[1]: run-netns-cni\x2d9122ff46\x2d43c7\x2db8e3\x2d2bae\x2d033bd222e8c2.mount: Deactivated successfully. Feb 13 15:28:08.676493 containerd[1457]: time="2025-02-13T15:28:08.675894682Z" level=info msg="TearDown network for sandbox \"44d4ae110854cc57cbfa55e39401c987d33916faeb2710cc87a549e4d94657b0\" successfully" Feb 13 15:28:08.676771 containerd[1457]: time="2025-02-13T15:28:08.676544392Z" level=info msg="StopPodSandbox for \"44d4ae110854cc57cbfa55e39401c987d33916faeb2710cc87a549e4d94657b0\" returns successfully" Feb 13 15:28:08.677051 kubelet[2608]: I0213 15:28:08.677021 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b203324d387be4f7cabb41c7345bf05412ccb11d6f2993f88f1edc0c931d90ab" Feb 13 15:28:08.678930 containerd[1457]: time="2025-02-13T15:28:08.678890006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b5dbfc55b-kt4sn,Uid:086f2ebd-d6e8-46e2-831d-0f37b85724a2,Namespace:calico-apiserver,Attempt:4,}" Feb 13 15:28:08.679344 containerd[1457]: time="2025-02-13T15:28:08.679315252Z" level=info msg="StopPodSandbox for \"b203324d387be4f7cabb41c7345bf05412ccb11d6f2993f88f1edc0c931d90ab\"" Feb 13 15:28:08.679987 containerd[1457]: time="2025-02-13T15:28:08.679909636Z" level=info msg="Ensure that sandbox b203324d387be4f7cabb41c7345bf05412ccb11d6f2993f88f1edc0c931d90ab in task-service has been cleanup successfully" Feb 13 15:28:08.680836 containerd[1457]: time="2025-02-13T15:28:08.680808053Z" level=info msg="TearDown network for sandbox \"b203324d387be4f7cabb41c7345bf05412ccb11d6f2993f88f1edc0c931d90ab\" successfully" Feb 13 15:28:08.680836 containerd[1457]: 
time="2025-02-13T15:28:08.680828775Z" level=info msg="StopPodSandbox for \"b203324d387be4f7cabb41c7345bf05412ccb11d6f2993f88f1edc0c931d90ab\" returns successfully" Feb 13 15:28:08.681675 containerd[1457]: time="2025-02-13T15:28:08.681647584Z" level=info msg="StopPodSandbox for \"ea7024231cf1f96c1a2263eb96f69e734b18be76b300f9a0314c4df3b994a6d9\"" Feb 13 15:28:08.682333 containerd[1457]: time="2025-02-13T15:28:08.682016463Z" level=info msg="TearDown network for sandbox \"ea7024231cf1f96c1a2263eb96f69e734b18be76b300f9a0314c4df3b994a6d9\" successfully" Feb 13 15:28:08.682333 containerd[1457]: time="2025-02-13T15:28:08.682034385Z" level=info msg="StopPodSandbox for \"ea7024231cf1f96c1a2263eb96f69e734b18be76b300f9a0314c4df3b994a6d9\" returns successfully" Feb 13 15:28:08.682995 containerd[1457]: time="2025-02-13T15:28:08.682891038Z" level=info msg="StopPodSandbox for \"9f518b19d3683f9344fecab0db2f16d46f20150c992e387844d1811fc72fa170\"" Feb 13 15:28:08.683641 containerd[1457]: time="2025-02-13T15:28:08.683524666Z" level=info msg="TearDown network for sandbox \"9f518b19d3683f9344fecab0db2f16d46f20150c992e387844d1811fc72fa170\" successfully" Feb 13 15:28:08.683641 containerd[1457]: time="2025-02-13T15:28:08.683548469Z" level=info msg="StopPodSandbox for \"9f518b19d3683f9344fecab0db2f16d46f20150c992e387844d1811fc72fa170\" returns successfully" Feb 13 15:28:08.684316 containerd[1457]: time="2025-02-13T15:28:08.684097368Z" level=info msg="StopPodSandbox for \"262a4443b576ec39fe8e513ab34a3dc1609fc6bb9ac7e9f491ab2ed971993fd6\"" Feb 13 15:28:08.684316 containerd[1457]: time="2025-02-13T15:28:08.684180977Z" level=info msg="TearDown network for sandbox \"262a4443b576ec39fe8e513ab34a3dc1609fc6bb9ac7e9f491ab2ed971993fd6\" successfully" Feb 13 15:28:08.684316 containerd[1457]: time="2025-02-13T15:28:08.684192779Z" level=info msg="StopPodSandbox for \"262a4443b576ec39fe8e513ab34a3dc1609fc6bb9ac7e9f491ab2ed971993fd6\" returns successfully" Feb 13 15:28:08.684811 containerd[1457]: 
time="2025-02-13T15:28:08.684695473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b5dbfc55b-xsmtx,Uid:2ce64576-1ac8-4271-89c9-a8de4b77d706,Namespace:calico-apiserver,Attempt:4,}" Feb 13 15:28:08.686108 systemd[1]: run-netns-cni\x2dc4c22216\x2d3da6\x2d1a28\x2dd44d\x2d300f572bcaa0.mount: Deactivated successfully. Feb 13 15:28:08.686981 kubelet[2608]: I0213 15:28:08.686952 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ae1783dc181bf0a101c7cecd003149a3d0bc99235467eacc57dcc150794979a" Feb 13 15:28:08.688574 containerd[1457]: time="2025-02-13T15:28:08.688545929Z" level=info msg="StopPodSandbox for \"2ae1783dc181bf0a101c7cecd003149a3d0bc99235467eacc57dcc150794979a\"" Feb 13 15:28:08.691290 containerd[1457]: time="2025-02-13T15:28:08.691257102Z" level=info msg="Ensure that sandbox 2ae1783dc181bf0a101c7cecd003149a3d0bc99235467eacc57dcc150794979a in task-service has been cleanup successfully" Feb 13 15:28:08.701275 containerd[1457]: time="2025-02-13T15:28:08.701187815Z" level=info msg="TearDown network for sandbox \"2ae1783dc181bf0a101c7cecd003149a3d0bc99235467eacc57dcc150794979a\" successfully" Feb 13 15:28:08.701646 containerd[1457]: time="2025-02-13T15:28:08.701481966Z" level=info msg="StopPodSandbox for \"2ae1783dc181bf0a101c7cecd003149a3d0bc99235467eacc57dcc150794979a\" returns successfully" Feb 13 15:28:08.702927 containerd[1457]: time="2025-02-13T15:28:08.702896119Z" level=info msg="StopPodSandbox for \"daae8dd6eeb39b0c2ba55118dc887b99adeb526889c9a3d6f932e9903afa2435\"" Feb 13 15:28:08.703691 containerd[1457]: time="2025-02-13T15:28:08.703668203Z" level=info msg="TearDown network for sandbox \"daae8dd6eeb39b0c2ba55118dc887b99adeb526889c9a3d6f932e9903afa2435\" successfully" Feb 13 15:28:08.703804 containerd[1457]: time="2025-02-13T15:28:08.703786015Z" level=info msg="StopPodSandbox for \"daae8dd6eeb39b0c2ba55118dc887b99adeb526889c9a3d6f932e9903afa2435\" returns successfully" Feb 13 
15:28:08.704645 containerd[1457]: time="2025-02-13T15:28:08.704593743Z" level=info msg="StopPodSandbox for \"2c09c28a03fd4472b318aa06a1ea4e03909edc65be9ace9f58b2d77cd52e637a\""
Feb 13 15:28:08.704897 containerd[1457]: time="2025-02-13T15:28:08.704869732Z" level=info msg="TearDown network for sandbox \"2c09c28a03fd4472b318aa06a1ea4e03909edc65be9ace9f58b2d77cd52e637a\" successfully"
Feb 13 15:28:08.704968 containerd[1457]: time="2025-02-13T15:28:08.704944101Z" level=info msg="StopPodSandbox for \"2c09c28a03fd4472b318aa06a1ea4e03909edc65be9ace9f58b2d77cd52e637a\" returns successfully"
Feb 13 15:28:08.705851 kubelet[2608]: I0213 15:28:08.705814 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0477f5248a84e6b1609e010745ea4791329b3811f61cb4ca28c5f7d4c62c942f"
Feb 13 15:28:08.706520 containerd[1457]: time="2025-02-13T15:28:08.706373335Z" level=info msg="StopPodSandbox for \"63565dfc82761cf07ac785a68455c18a81508c45b3108b2a758b41876e99d6f2\""
Feb 13 15:28:08.706520 containerd[1457]: time="2025-02-13T15:28:08.706470785Z" level=info msg="TearDown network for sandbox \"63565dfc82761cf07ac785a68455c18a81508c45b3108b2a758b41876e99d6f2\" successfully"
Feb 13 15:28:08.706520 containerd[1457]: time="2025-02-13T15:28:08.706483187Z" level=info msg="StopPodSandbox for \"63565dfc82761cf07ac785a68455c18a81508c45b3108b2a758b41876e99d6f2\" returns successfully"
Feb 13 15:28:08.707570 containerd[1457]: time="2025-02-13T15:28:08.706380576Z" level=info msg="StopPodSandbox for \"0477f5248a84e6b1609e010745ea4791329b3811f61cb4ca28c5f7d4c62c942f\""
Feb 13 15:28:08.707570 containerd[1457]: time="2025-02-13T15:28:08.707151259Z" level=info msg="Ensure that sandbox 0477f5248a84e6b1609e010745ea4791329b3811f61cb4ca28c5f7d4c62c942f in task-service has been cleanup successfully"
Feb 13 15:28:08.707996 containerd[1457]: time="2025-02-13T15:28:08.707970307Z" level=info msg="TearDown network for sandbox \"0477f5248a84e6b1609e010745ea4791329b3811f61cb4ca28c5f7d4c62c942f\" successfully"
Feb 13 15:28:08.708278 containerd[1457]: time="2025-02-13T15:28:08.708142686Z" level=info msg="StopPodSandbox for \"0477f5248a84e6b1609e010745ea4791329b3811f61cb4ca28c5f7d4c62c942f\" returns successfully"
Feb 13 15:28:08.709901 containerd[1457]: time="2025-02-13T15:28:08.709872153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-59ns4,Uid:8c67a7be-7144-4f11-b45e-f04dfd3de75c,Namespace:calico-system,Attempt:4,}"
Feb 13 15:28:08.711704 containerd[1457]: time="2025-02-13T15:28:08.711638624Z" level=info msg="StopPodSandbox for \"38a3f3da6737f9ca3a00e18f45dbf1f049602a27f165bd52b905645c672743cc\""
Feb 13 15:28:08.711831 containerd[1457]: time="2025-02-13T15:28:08.711747756Z" level=info msg="TearDown network for sandbox \"38a3f3da6737f9ca3a00e18f45dbf1f049602a27f165bd52b905645c672743cc\" successfully"
Feb 13 15:28:08.711831 containerd[1457]: time="2025-02-13T15:28:08.711757957Z" level=info msg="StopPodSandbox for \"38a3f3da6737f9ca3a00e18f45dbf1f049602a27f165bd52b905645c672743cc\" returns successfully"
Feb 13 15:28:08.712266 containerd[1457]: time="2025-02-13T15:28:08.712236688Z" level=info msg="StopPodSandbox for \"776f35edebf581700803135a5d9ab65368a38fb844db753a8fd392cbe7c754d3\""
Feb 13 15:28:08.712654 containerd[1457]: time="2025-02-13T15:28:08.712330339Z" level=info msg="TearDown network for sandbox \"776f35edebf581700803135a5d9ab65368a38fb844db753a8fd392cbe7c754d3\" successfully"
Feb 13 15:28:08.712654 containerd[1457]: time="2025-02-13T15:28:08.712341500Z" level=info msg="StopPodSandbox for \"776f35edebf581700803135a5d9ab65368a38fb844db753a8fd392cbe7c754d3\" returns successfully"
Feb 13 15:28:08.713396 containerd[1457]: time="2025-02-13T15:28:08.713292002Z" level=info msg="StopPodSandbox for \"77a75dac9c9f8165f19ab22aa953024a5e1c10959673b5e0528dc465b8c7c35e\""
Feb 13 15:28:08.714386 containerd[1457]: time="2025-02-13T15:28:08.714352197Z" level=info msg="TearDown network for sandbox \"77a75dac9c9f8165f19ab22aa953024a5e1c10959673b5e0528dc465b8c7c35e\" successfully"
Feb 13 15:28:08.714498 containerd[1457]: time="2025-02-13T15:28:08.714482971Z" level=info msg="StopPodSandbox for \"77a75dac9c9f8165f19ab22aa953024a5e1c10959673b5e0528dc465b8c7c35e\" returns successfully"
Feb 13 15:28:08.715985 containerd[1457]: time="2025-02-13T15:28:08.715954050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-ccbb9dcd9-2n9js,Uid:7201f0b7-6fae-4b37-8849-0e2e56956168,Namespace:calico-system,Attempt:4,}"
Feb 13 15:28:08.718446 kubelet[2608]: I0213 15:28:08.718382 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fdc9959f235af76b82180f39dc0b3634b4d66128bffd3617d8de10ff163e7fb"
Feb 13 15:28:08.719184 containerd[1457]: time="2025-02-13T15:28:08.719153156Z" level=info msg="StopPodSandbox for \"1fdc9959f235af76b82180f39dc0b3634b4d66128bffd3617d8de10ff163e7fb\""
Feb 13 15:28:08.719355 containerd[1457]: time="2025-02-13T15:28:08.719336815Z" level=info msg="Ensure that sandbox 1fdc9959f235af76b82180f39dc0b3634b4d66128bffd3617d8de10ff163e7fb in task-service has been cleanup successfully"
Feb 13 15:28:08.719875 containerd[1457]: time="2025-02-13T15:28:08.719821268Z" level=info msg="TearDown network for sandbox \"1fdc9959f235af76b82180f39dc0b3634b4d66128bffd3617d8de10ff163e7fb\" successfully"
Feb 13 15:28:08.719875 containerd[1457]: time="2025-02-13T15:28:08.719845270Z" level=info msg="StopPodSandbox for \"1fdc9959f235af76b82180f39dc0b3634b4d66128bffd3617d8de10ff163e7fb\" returns successfully"
Feb 13 15:28:08.720796 containerd[1457]: time="2025-02-13T15:28:08.720669719Z" level=info msg="StopPodSandbox for \"b2452a586ae430ec9ed425f170afb0d9071b24ad7295b6f6a4477e012c9d521e\""
Feb 13 15:28:08.721667 containerd[1457]: time="2025-02-13T15:28:08.720776531Z" level=info msg="TearDown network for sandbox \"b2452a586ae430ec9ed425f170afb0d9071b24ad7295b6f6a4477e012c9d521e\" successfully"
Feb 13 15:28:08.721943 containerd[1457]: time="2025-02-13T15:28:08.721815363Z" level=info msg="StopPodSandbox for \"b2452a586ae430ec9ed425f170afb0d9071b24ad7295b6f6a4477e012c9d521e\" returns successfully"
Feb 13 15:28:08.723313 containerd[1457]: time="2025-02-13T15:28:08.723097342Z" level=info msg="StopPodSandbox for \"67348c9f7030a07e08918ec471c01d8d492fb7b37edf0b70d665014a3b6d6d75\""
Feb 13 15:28:08.723313 containerd[1457]: time="2025-02-13T15:28:08.723194912Z" level=info msg="TearDown network for sandbox \"67348c9f7030a07e08918ec471c01d8d492fb7b37edf0b70d665014a3b6d6d75\" successfully"
Feb 13 15:28:08.723313 containerd[1457]: time="2025-02-13T15:28:08.723204713Z" level=info msg="StopPodSandbox for \"67348c9f7030a07e08918ec471c01d8d492fb7b37edf0b70d665014a3b6d6d75\" returns successfully"
Feb 13 15:28:08.723853 containerd[1457]: time="2025-02-13T15:28:08.723805378Z" level=info msg="StopPodSandbox for \"0e912db41237de96aedc11e4bdfeaf221d0a4eb32d2c79b569dc586de06677c4\""
Feb 13 15:28:08.723917 containerd[1457]: time="2025-02-13T15:28:08.723902669Z" level=info msg="TearDown network for sandbox \"0e912db41237de96aedc11e4bdfeaf221d0a4eb32d2c79b569dc586de06677c4\" successfully"
Feb 13 15:28:08.723917 containerd[1457]: time="2025-02-13T15:28:08.723914470Z" level=info msg="StopPodSandbox for \"0e912db41237de96aedc11e4bdfeaf221d0a4eb32d2c79b569dc586de06677c4\" returns successfully"
Feb 13 15:28:08.724131 kubelet[2608]: E0213 15:28:08.724100 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:28:08.725978 containerd[1457]: time="2025-02-13T15:28:08.724373880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vhgmw,Uid:859d37ac-44c9-4b92-854b-e6ca0540dbd1,Namespace:kube-system,Attempt:4,}"
Feb 13 15:28:08.743243 sshd[4286]: Connection closed by 10.0.0.1 port 49540
Feb 13 15:28:08.743025 sshd-session[4284]: pam_unix(sshd:session): session closed for user core
Feb 13 15:28:08.747008 systemd[1]: sshd@8-10.0.0.91:22-10.0.0.1:49540.service: Deactivated successfully.
Feb 13 15:28:08.750278 systemd[1]: session-9.scope: Deactivated successfully.
Feb 13 15:28:08.751151 systemd-logind[1422]: Session 9 logged out. Waiting for processes to exit.
Feb 13 15:28:08.752070 systemd-logind[1422]: Removed session 9.
Feb 13 15:28:08.763217 containerd[1457]: time="2025-02-13T15:28:08.763097983Z" level=error msg="Failed to destroy network for sandbox \"c5e16b24742bbc2f37dfe06c90ffdbec5343248ca53db0407b2759c26a1559b7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:28:08.763473 containerd[1457]: time="2025-02-13T15:28:08.763433139Z" level=error msg="encountered an error cleaning up failed sandbox \"c5e16b24742bbc2f37dfe06c90ffdbec5343248ca53db0407b2759c26a1559b7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:28:08.763528 containerd[1457]: time="2025-02-13T15:28:08.763504747Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-j2fhf,Uid:d26de4e2-c62e-4d8a-96e0-edbb9492094a,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"c5e16b24742bbc2f37dfe06c90ffdbec5343248ca53db0407b2759c26a1559b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:28:08.765400 kubelet[2608]: E0213 15:28:08.765350 2608 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5e16b24742bbc2f37dfe06c90ffdbec5343248ca53db0407b2759c26a1559b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:28:08.765519 kubelet[2608]: E0213 15:28:08.765418 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5e16b24742bbc2f37dfe06c90ffdbec5343248ca53db0407b2759c26a1559b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-j2fhf"
Feb 13 15:28:08.765519 kubelet[2608]: E0213 15:28:08.765438 2608 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5e16b24742bbc2f37dfe06c90ffdbec5343248ca53db0407b2759c26a1559b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-j2fhf"
Feb 13 15:28:08.765519 kubelet[2608]: E0213 15:28:08.765492 2608 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-j2fhf_kube-system(d26de4e2-c62e-4d8a-96e0-edbb9492094a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-j2fhf_kube-system(d26de4e2-c62e-4d8a-96e0-edbb9492094a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c5e16b24742bbc2f37dfe06c90ffdbec5343248ca53db0407b2759c26a1559b7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-j2fhf" podUID="d26de4e2-c62e-4d8a-96e0-edbb9492094a"
Feb 13 15:28:08.824420 containerd[1457]: time="2025-02-13T15:28:08.824367123Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:28:08.827811 containerd[1457]: time="2025-02-13T15:28:08.827655398Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762"
Feb 13 15:28:08.829431 containerd[1457]: time="2025-02-13T15:28:08.829384145Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:28:08.843342 containerd[1457]: time="2025-02-13T15:28:08.842950130Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:28:08.847526 containerd[1457]: time="2025-02-13T15:28:08.843798062Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 4.268686667s"
Feb 13 15:28:08.847526 containerd[1457]: time="2025-02-13T15:28:08.847523904Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\""
Feb 13 15:28:08.897261 containerd[1457]: time="2025-02-13T15:28:08.897198431Z" level=info msg="CreateContainer within sandbox \"d6c73f0abcc32395de12341c2bc31a7bc0d667286ec5f4e24119796f08c18518\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Feb 13 15:28:08.916232 containerd[1457]: time="2025-02-13T15:28:08.916178042Z" level=error msg="Failed to destroy network for sandbox \"58c5ff5777c7ef344cc3b143e1c2324011c9ca3754e7407b48350424c32bea2d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:28:08.916639 containerd[1457]: time="2025-02-13T15:28:08.916511158Z" level=error msg="encountered an error cleaning up failed sandbox \"58c5ff5777c7ef344cc3b143e1c2324011c9ca3754e7407b48350424c32bea2d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:28:08.916639 containerd[1457]: time="2025-02-13T15:28:08.916572924Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b5dbfc55b-xsmtx,Uid:2ce64576-1ac8-4271-89c9-a8de4b77d706,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"58c5ff5777c7ef344cc3b143e1c2324011c9ca3754e7407b48350424c32bea2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:28:08.916869 kubelet[2608]: E0213 15:28:08.916822 2608 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58c5ff5777c7ef344cc3b143e1c2324011c9ca3754e7407b48350424c32bea2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:28:08.916949 kubelet[2608]: E0213 15:28:08.916891 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58c5ff5777c7ef344cc3b143e1c2324011c9ca3754e7407b48350424c32bea2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b5dbfc55b-xsmtx"
Feb 13 15:28:08.916949 kubelet[2608]: E0213 15:28:08.916915 2608 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58c5ff5777c7ef344cc3b143e1c2324011c9ca3754e7407b48350424c32bea2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b5dbfc55b-xsmtx"
Feb 13 15:28:08.916994 kubelet[2608]: E0213 15:28:08.916955 2608 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5b5dbfc55b-xsmtx_calico-apiserver(2ce64576-1ac8-4271-89c9-a8de4b77d706)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5b5dbfc55b-xsmtx_calico-apiserver(2ce64576-1ac8-4271-89c9-a8de4b77d706)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"58c5ff5777c7ef344cc3b143e1c2324011c9ca3754e7407b48350424c32bea2d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b5dbfc55b-xsmtx" podUID="2ce64576-1ac8-4271-89c9-a8de4b77d706"
Feb 13 15:28:08.928668 containerd[1457]: time="2025-02-13T15:28:08.928595423Z" level=error msg="Failed to destroy network for sandbox \"09ae978d1263afc7237cf637a3169b1af4b9e7342e716b5c0326028e663a24a4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:28:08.929401 containerd[1457]: time="2025-02-13T15:28:08.928935860Z" level=error msg="encountered an error cleaning up failed sandbox \"09ae978d1263afc7237cf637a3169b1af4b9e7342e716b5c0326028e663a24a4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:28:08.929401 containerd[1457]: time="2025-02-13T15:28:08.929346024Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b5dbfc55b-kt4sn,Uid:086f2ebd-d6e8-46e2-831d-0f37b85724a2,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"09ae978d1263afc7237cf637a3169b1af4b9e7342e716b5c0326028e663a24a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:28:08.929696 kubelet[2608]: E0213 15:28:08.929650 2608 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09ae978d1263afc7237cf637a3169b1af4b9e7342e716b5c0326028e663a24a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:28:08.929752 kubelet[2608]: E0213 15:28:08.929718 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09ae978d1263afc7237cf637a3169b1af4b9e7342e716b5c0326028e663a24a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b5dbfc55b-kt4sn"
Feb 13 15:28:08.929752 kubelet[2608]: E0213 15:28:08.929738 2608 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09ae978d1263afc7237cf637a3169b1af4b9e7342e716b5c0326028e663a24a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b5dbfc55b-kt4sn"
Feb 13 15:28:08.930071 kubelet[2608]: E0213 15:28:08.929977 2608 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5b5dbfc55b-kt4sn_calico-apiserver(086f2ebd-d6e8-46e2-831d-0f37b85724a2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5b5dbfc55b-kt4sn_calico-apiserver(086f2ebd-d6e8-46e2-831d-0f37b85724a2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"09ae978d1263afc7237cf637a3169b1af4b9e7342e716b5c0326028e663a24a4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b5dbfc55b-kt4sn" podUID="086f2ebd-d6e8-46e2-831d-0f37b85724a2"
Feb 13 15:28:08.931861 containerd[1457]: time="2025-02-13T15:28:08.931820812Z" level=error msg="Failed to destroy network for sandbox \"de5622ef7c1e14e2d930089742198e784ff4a7b8aabb4704edcd78f305d94cf3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:28:08.932314 containerd[1457]: time="2025-02-13T15:28:08.932271700Z" level=error msg="encountered an error cleaning up failed sandbox \"de5622ef7c1e14e2d930089742198e784ff4a7b8aabb4704edcd78f305d94cf3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:28:08.932381 containerd[1457]: time="2025-02-13T15:28:08.932339468Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-59ns4,Uid:8c67a7be-7144-4f11-b45e-f04dfd3de75c,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"de5622ef7c1e14e2d930089742198e784ff4a7b8aabb4704edcd78f305d94cf3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:28:08.932945 kubelet[2608]: E0213 15:28:08.932687 2608 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de5622ef7c1e14e2d930089742198e784ff4a7b8aabb4704edcd78f305d94cf3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:28:08.932945 kubelet[2608]: E0213 15:28:08.932751 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de5622ef7c1e14e2d930089742198e784ff4a7b8aabb4704edcd78f305d94cf3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-59ns4"
Feb 13 15:28:08.932945 kubelet[2608]: E0213 15:28:08.932773 2608 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de5622ef7c1e14e2d930089742198e784ff4a7b8aabb4704edcd78f305d94cf3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-59ns4"
Feb 13 15:28:08.933186 containerd[1457]: time="2025-02-13T15:28:08.933147755Z" level=error msg="Failed to destroy network for sandbox \"6761151fa2a0f4f698229e01a2b8571c3184e35cf086ea773e1f473cb3b94906\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:28:08.933320 kubelet[2608]: E0213 15:28:08.932826 2608 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-59ns4_calico-system(8c67a7be-7144-4f11-b45e-f04dfd3de75c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-59ns4_calico-system(8c67a7be-7144-4f11-b45e-f04dfd3de75c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"de5622ef7c1e14e2d930089742198e784ff4a7b8aabb4704edcd78f305d94cf3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-59ns4" podUID="8c67a7be-7144-4f11-b45e-f04dfd3de75c"
Feb 13 15:28:08.933848 containerd[1457]: time="2025-02-13T15:28:08.933815587Z" level=error msg="encountered an error cleaning up failed sandbox \"6761151fa2a0f4f698229e01a2b8571c3184e35cf086ea773e1f473cb3b94906\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:28:08.934086 containerd[1457]: time="2025-02-13T15:28:08.934058893Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-ccbb9dcd9-2n9js,Uid:7201f0b7-6fae-4b37-8849-0e2e56956168,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"6761151fa2a0f4f698229e01a2b8571c3184e35cf086ea773e1f473cb3b94906\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:28:08.934359 kubelet[2608]: E0213 15:28:08.934314 2608 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6761151fa2a0f4f698229e01a2b8571c3184e35cf086ea773e1f473cb3b94906\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:28:08.934413 kubelet[2608]: E0213 15:28:08.934375 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6761151fa2a0f4f698229e01a2b8571c3184e35cf086ea773e1f473cb3b94906\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-ccbb9dcd9-2n9js"
Feb 13 15:28:08.934413 kubelet[2608]: E0213 15:28:08.934393 2608 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6761151fa2a0f4f698229e01a2b8571c3184e35cf086ea773e1f473cb3b94906\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-ccbb9dcd9-2n9js"
Feb 13 15:28:08.934501 kubelet[2608]: E0213 15:28:08.934424 2608 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-ccbb9dcd9-2n9js_calico-system(7201f0b7-6fae-4b37-8849-0e2e56956168)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-ccbb9dcd9-2n9js_calico-system(7201f0b7-6fae-4b37-8849-0e2e56956168)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6761151fa2a0f4f698229e01a2b8571c3184e35cf086ea773e1f473cb3b94906\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-ccbb9dcd9-2n9js" podUID="7201f0b7-6fae-4b37-8849-0e2e56956168"
Feb 13 15:28:08.938273 containerd[1457]: time="2025-02-13T15:28:08.938231304Z" level=error msg="Failed to destroy network for sandbox \"89faecaa39a3df7b311017148dcd5f19fc35fa57d662d192cc38be381f1b5304\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:28:08.938350 containerd[1457]: time="2025-02-13T15:28:08.938259467Z" level=info msg="CreateContainer within sandbox \"d6c73f0abcc32395de12341c2bc31a7bc0d667286ec5f4e24119796f08c18518\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d3e301f328c701f75a8757824a4d4f9abbccf83e29811b06edb46791f522463f\""
Feb 13 15:28:08.938830 containerd[1457]: time="2025-02-13T15:28:08.938748280Z" level=error msg="encountered an error cleaning up failed sandbox \"89faecaa39a3df7b311017148dcd5f19fc35fa57d662d192cc38be381f1b5304\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:28:08.938909 containerd[1457]: time="2025-02-13T15:28:08.938860772Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vhgmw,Uid:859d37ac-44c9-4b92-854b-e6ca0540dbd1,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"89faecaa39a3df7b311017148dcd5f19fc35fa57d662d192cc38be381f1b5304\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:28:08.938909 containerd[1457]: time="2025-02-13T15:28:08.938783404Z" level=info msg="StartContainer for \"d3e301f328c701f75a8757824a4d4f9abbccf83e29811b06edb46791f522463f\""
Feb 13 15:28:08.939236 kubelet[2608]: E0213 15:28:08.939055 2608 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89faecaa39a3df7b311017148dcd5f19fc35fa57d662d192cc38be381f1b5304\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:28:08.939236 kubelet[2608]: E0213 15:28:08.939112 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89faecaa39a3df7b311017148dcd5f19fc35fa57d662d192cc38be381f1b5304\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-vhgmw"
Feb 13 15:28:08.939236 kubelet[2608]: E0213 15:28:08.939138 2608 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89faecaa39a3df7b311017148dcd5f19fc35fa57d662d192cc38be381f1b5304\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-vhgmw"
Feb 13 15:28:08.939345 kubelet[2608]: E0213 15:28:08.939187 2608 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-vhgmw_kube-system(859d37ac-44c9-4b92-854b-e6ca0540dbd1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-vhgmw_kube-system(859d37ac-44c9-4b92-854b-e6ca0540dbd1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"89faecaa39a3df7b311017148dcd5f19fc35fa57d662d192cc38be381f1b5304\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-vhgmw" podUID="859d37ac-44c9-4b92-854b-e6ca0540dbd1"
Feb 13 15:28:08.997829 systemd[1]: Started cri-containerd-d3e301f328c701f75a8757824a4d4f9abbccf83e29811b06edb46791f522463f.scope - libcontainer container d3e301f328c701f75a8757824a4d4f9abbccf83e29811b06edb46791f522463f.
Feb 13 15:28:09.035698 containerd[1457]: time="2025-02-13T15:28:09.035572022Z" level=info msg="StartContainer for \"d3e301f328c701f75a8757824a4d4f9abbccf83e29811b06edb46791f522463f\" returns successfully"
Feb 13 15:28:09.232353 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Feb 13 15:28:09.232685 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved.
Feb 13 15:28:09.253772 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c5e16b24742bbc2f37dfe06c90ffdbec5343248ca53db0407b2759c26a1559b7-shm.mount: Deactivated successfully.
Feb 13 15:28:09.254076 systemd[1]: run-netns-cni\x2daaedc71c\x2d1033\x2da6d0\x2dafcd\x2daf6dc2896cdd.mount: Deactivated successfully.
Feb 13 15:28:09.254137 systemd[1]: run-netns-cni\x2d998e77d2\x2d485a\x2de5c7\x2d056e\x2d55094c3475ff.mount: Deactivated successfully.
Feb 13 15:28:09.254180 systemd[1]: run-netns-cni\x2d94a2c276\x2d2ad0\x2d98db\x2d677e\x2d6eddf350de22.mount: Deactivated successfully.
Feb 13 15:28:09.254224 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount457098270.mount: Deactivated successfully.
Feb 13 15:28:09.723732 kubelet[2608]: I0213 15:28:09.723592 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58c5ff5777c7ef344cc3b143e1c2324011c9ca3754e7407b48350424c32bea2d"
Feb 13 15:28:09.729087 containerd[1457]: time="2025-02-13T15:28:09.725938180Z" level=info msg="StopPodSandbox for \"58c5ff5777c7ef344cc3b143e1c2324011c9ca3754e7407b48350424c32bea2d\""
Feb 13 15:28:09.729087 containerd[1457]: time="2025-02-13T15:28:09.726112398Z" level=info msg="Ensure that sandbox 58c5ff5777c7ef344cc3b143e1c2324011c9ca3754e7407b48350424c32bea2d in task-service has been cleanup successfully"
Feb 13 15:28:09.729087 containerd[1457]: time="2025-02-13T15:28:09.726395067Z" level=info msg="TearDown network for sandbox \"58c5ff5777c7ef344cc3b143e1c2324011c9ca3754e7407b48350424c32bea2d\" successfully"
Feb 13 15:28:09.729087 containerd[1457]: time="2025-02-13T15:28:09.726411149Z" level=info msg="StopPodSandbox for \"58c5ff5777c7ef344cc3b143e1c2324011c9ca3754e7407b48350424c32bea2d\" returns successfully"
Feb 13 15:28:09.728263 systemd[1]: run-netns-cni\x2d87071439\x2df6ae\x2dbab1\x2da0f1\x2dcd472508f1b5.mount: Deactivated successfully.
Feb 13 15:28:09.731008 containerd[1457]: time="2025-02-13T15:28:09.730801848Z" level=info msg="StopPodSandbox for \"b203324d387be4f7cabb41c7345bf05412ccb11d6f2993f88f1edc0c931d90ab\""
Feb 13 15:28:09.731008 containerd[1457]: time="2025-02-13T15:28:09.730954144Z" level=info msg="TearDown network for sandbox \"b203324d387be4f7cabb41c7345bf05412ccb11d6f2993f88f1edc0c931d90ab\" successfully"
Feb 13 15:28:09.731008 containerd[1457]: time="2025-02-13T15:28:09.730968186Z" level=info msg="StopPodSandbox for \"b203324d387be4f7cabb41c7345bf05412ccb11d6f2993f88f1edc0c931d90ab\" returns successfully"
Feb 13 15:28:09.731733 containerd[1457]: time="2025-02-13T15:28:09.731702303Z" level=info msg="StopPodSandbox for \"ea7024231cf1f96c1a2263eb96f69e734b18be76b300f9a0314c4df3b994a6d9\""
Feb 13 15:28:09.731938 containerd[1457]: time="2025-02-13T15:28:09.731799193Z" level=info msg="TearDown network for sandbox \"ea7024231cf1f96c1a2263eb96f69e734b18be76b300f9a0314c4df3b994a6d9\" successfully"
Feb 13 15:28:09.731938 containerd[1457]: time="2025-02-13T15:28:09.731812714Z" level=info msg="StopPodSandbox for \"ea7024231cf1f96c1a2263eb96f69e734b18be76b300f9a0314c4df3b994a6d9\" returns successfully"
Feb 13 15:28:09.732780 containerd[1457]: time="2025-02-13T15:28:09.732754253Z" level=info msg="StopPodSandbox for \"9f518b19d3683f9344fecab0db2f16d46f20150c992e387844d1811fc72fa170\""
Feb 13 15:28:09.732852 containerd[1457]: time="2025-02-13T15:28:09.732841502Z" level=info msg="TearDown network for sandbox \"9f518b19d3683f9344fecab0db2f16d46f20150c992e387844d1811fc72fa170\" successfully"
Feb 13 15:28:09.732878 containerd[1457]: time="2025-02-13T15:28:09.732851783Z" level=info msg="StopPodSandbox for \"9f518b19d3683f9344fecab0db2f16d46f20150c992e387844d1811fc72fa170\" returns successfully"
Feb 13 15:28:09.734033 containerd[1457]: time="2025-02-13T15:28:09.734004584Z" level=info msg="StopPodSandbox for \"262a4443b576ec39fe8e513ab34a3dc1609fc6bb9ac7e9f491ab2ed971993fd6\""
Feb 13 15:28:09.734098 containerd[1457]: time="2025-02-13T15:28:09.734076991Z" level=info msg="TearDown network for sandbox \"262a4443b576ec39fe8e513ab34a3dc1609fc6bb9ac7e9f491ab2ed971993fd6\" successfully"
Feb 13 15:28:09.734121 containerd[1457]: time="2025-02-13T15:28:09.734095673Z" level=info msg="StopPodSandbox for \"262a4443b576ec39fe8e513ab34a3dc1609fc6bb9ac7e9f491ab2ed971993fd6\" returns successfully"
Feb 13 15:28:09.734895 containerd[1457]: time="2025-02-13T15:28:09.734561242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b5dbfc55b-xsmtx,Uid:2ce64576-1ac8-4271-89c9-a8de4b77d706,Namespace:calico-apiserver,Attempt:5,}"
Feb 13 15:28:09.735208 kubelet[2608]: E0213 15:28:09.735169 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:28:09.745261 kubelet[2608]: I0213 15:28:09.745207 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de5622ef7c1e14e2d930089742198e784ff4a7b8aabb4704edcd78f305d94cf3"
Feb 13 15:28:09.745785 containerd[1457]: time="2025-02-13T15:28:09.745747052Z" level=info msg="StopPodSandbox for \"de5622ef7c1e14e2d930089742198e784ff4a7b8aabb4704edcd78f305d94cf3\""
Feb 13 15:28:09.745965 containerd[1457]: time="2025-02-13T15:28:09.745946713Z" level=info msg="Ensure that sandbox de5622ef7c1e14e2d930089742198e784ff4a7b8aabb4704edcd78f305d94cf3 in task-service has been cleanup successfully"
Feb 13 15:28:09.748953 systemd[1]: run-netns-cni\x2dea22edc9\x2d241d\x2d96e5\x2d97d6\x2d8b91fa3719bb.mount: Deactivated successfully.
Feb 13 15:28:09.752675 containerd[1457]: time="2025-02-13T15:28:09.752612771Z" level=info msg="TearDown network for sandbox \"de5622ef7c1e14e2d930089742198e784ff4a7b8aabb4704edcd78f305d94cf3\" successfully"
Feb 13 15:28:09.752675 containerd[1457]: time="2025-02-13T15:28:09.752663496Z" level=info msg="StopPodSandbox for \"de5622ef7c1e14e2d930089742198e784ff4a7b8aabb4704edcd78f305d94cf3\" returns successfully"
Feb 13 15:28:09.755581 containerd[1457]: time="2025-02-13T15:28:09.755408623Z" level=info msg="StopPodSandbox for \"2ae1783dc181bf0a101c7cecd003149a3d0bc99235467eacc57dcc150794979a\""
Feb 13 15:28:09.755581 containerd[1457]: time="2025-02-13T15:28:09.755510914Z" level=info msg="TearDown network for sandbox \"2ae1783dc181bf0a101c7cecd003149a3d0bc99235467eacc57dcc150794979a\" successfully"
Feb 13 15:28:09.755581 containerd[1457]: time="2025-02-13T15:28:09.755521595Z" level=info msg="StopPodSandbox for \"2ae1783dc181bf0a101c7cecd003149a3d0bc99235467eacc57dcc150794979a\" returns successfully"
Feb 13 15:28:09.756954 containerd[1457]: time="2025-02-13T15:28:09.756923382Z" level=info msg="StopPodSandbox for \"daae8dd6eeb39b0c2ba55118dc887b99adeb526889c9a3d6f932e9903afa2435\""
Feb 13 15:28:09.757023 containerd[1457]: time="2025-02-13T15:28:09.757011831Z" level=info msg="TearDown network for sandbox \"daae8dd6eeb39b0c2ba55118dc887b99adeb526889c9a3d6f932e9903afa2435\" successfully"
Feb 13 15:28:09.757070 containerd[1457]: time="2025-02-13T15:28:09.757022752Z" level=info msg="StopPodSandbox for \"daae8dd6eeb39b0c2ba55118dc887b99adeb526889c9a3d6f932e9903afa2435\" returns successfully"
Feb 13 15:28:09.757523 containerd[1457]: time="2025-02-13T15:28:09.757498602Z" level=info msg="StopPodSandbox for \"2c09c28a03fd4472b318aa06a1ea4e03909edc65be9ace9f58b2d77cd52e637a\""
Feb 13 15:28:09.757598 containerd[1457]: time="2025-02-13T15:28:09.757580131Z" level=info msg="TearDown network for sandbox \"2c09c28a03fd4472b318aa06a1ea4e03909edc65be9ace9f58b2d77cd52e637a\" successfully" Feb
13 15:28:09.757634 containerd[1457]: time="2025-02-13T15:28:09.757595572Z" level=info msg="StopPodSandbox for \"2c09c28a03fd4472b318aa06a1ea4e03909edc65be9ace9f58b2d77cd52e637a\" returns successfully" Feb 13 15:28:09.757983 containerd[1457]: time="2025-02-13T15:28:09.757945409Z" level=info msg="StopPodSandbox for \"63565dfc82761cf07ac785a68455c18a81508c45b3108b2a758b41876e99d6f2\"" Feb 13 15:28:09.758042 containerd[1457]: time="2025-02-13T15:28:09.758012776Z" level=info msg="TearDown network for sandbox \"63565dfc82761cf07ac785a68455c18a81508c45b3108b2a758b41876e99d6f2\" successfully" Feb 13 15:28:09.758042 containerd[1457]: time="2025-02-13T15:28:09.758022097Z" level=info msg="StopPodSandbox for \"63565dfc82761cf07ac785a68455c18a81508c45b3108b2a758b41876e99d6f2\" returns successfully" Feb 13 15:28:09.759971 kubelet[2608]: I0213 15:28:09.758641 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6761151fa2a0f4f698229e01a2b8571c3184e35cf086ea773e1f473cb3b94906" Feb 13 15:28:09.760047 containerd[1457]: time="2025-02-13T15:28:09.759141894Z" level=info msg="StopPodSandbox for \"6761151fa2a0f4f698229e01a2b8571c3184e35cf086ea773e1f473cb3b94906\"" Feb 13 15:28:09.760047 containerd[1457]: time="2025-02-13T15:28:09.759303951Z" level=info msg="Ensure that sandbox 6761151fa2a0f4f698229e01a2b8571c3184e35cf086ea773e1f473cb3b94906 in task-service has been cleanup successfully" Feb 13 15:28:09.766775 systemd[1]: run-netns-cni\x2dc11d7591\x2d6fa1\x2d141b\x2d7154\x2dbc3f35f7be96.mount: Deactivated successfully. 
Feb 13 15:28:09.767454 containerd[1457]: time="2025-02-13T15:28:09.767232460Z" level=info msg="TearDown network for sandbox \"6761151fa2a0f4f698229e01a2b8571c3184e35cf086ea773e1f473cb3b94906\" successfully" Feb 13 15:28:09.767454 containerd[1457]: time="2025-02-13T15:28:09.767266824Z" level=info msg="StopPodSandbox for \"6761151fa2a0f4f698229e01a2b8571c3184e35cf086ea773e1f473cb3b94906\" returns successfully" Feb 13 15:28:09.767454 containerd[1457]: time="2025-02-13T15:28:09.767445003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-59ns4,Uid:8c67a7be-7144-4f11-b45e-f04dfd3de75c,Namespace:calico-system,Attempt:5,}" Feb 13 15:28:09.768374 containerd[1457]: time="2025-02-13T15:28:09.768304733Z" level=info msg="StopPodSandbox for \"0477f5248a84e6b1609e010745ea4791329b3811f61cb4ca28c5f7d4c62c942f\"" Feb 13 15:28:09.768467 containerd[1457]: time="2025-02-13T15:28:09.768414264Z" level=info msg="TearDown network for sandbox \"0477f5248a84e6b1609e010745ea4791329b3811f61cb4ca28c5f7d4c62c942f\" successfully" Feb 13 15:28:09.768467 containerd[1457]: time="2025-02-13T15:28:09.768425705Z" level=info msg="StopPodSandbox for \"0477f5248a84e6b1609e010745ea4791329b3811f61cb4ca28c5f7d4c62c942f\" returns successfully" Feb 13 15:28:09.770893 containerd[1457]: time="2025-02-13T15:28:09.769928423Z" level=info msg="StopPodSandbox for \"38a3f3da6737f9ca3a00e18f45dbf1f049602a27f165bd52b905645c672743cc\"" Feb 13 15:28:09.770893 containerd[1457]: time="2025-02-13T15:28:09.770039914Z" level=info msg="TearDown network for sandbox \"38a3f3da6737f9ca3a00e18f45dbf1f049602a27f165bd52b905645c672743cc\" successfully" Feb 13 15:28:09.770893 containerd[1457]: time="2025-02-13T15:28:09.770052916Z" level=info msg="StopPodSandbox for \"38a3f3da6737f9ca3a00e18f45dbf1f049602a27f165bd52b905645c672743cc\" returns successfully" Feb 13 15:28:09.772147 containerd[1457]: time="2025-02-13T15:28:09.772083288Z" level=info msg="StopPodSandbox for 
\"776f35edebf581700803135a5d9ab65368a38fb844db753a8fd392cbe7c754d3\"" Feb 13 15:28:09.772223 containerd[1457]: time="2025-02-13T15:28:09.772175658Z" level=info msg="TearDown network for sandbox \"776f35edebf581700803135a5d9ab65368a38fb844db753a8fd392cbe7c754d3\" successfully" Feb 13 15:28:09.772223 containerd[1457]: time="2025-02-13T15:28:09.772185459Z" level=info msg="StopPodSandbox for \"776f35edebf581700803135a5d9ab65368a38fb844db753a8fd392cbe7c754d3\" returns successfully" Feb 13 15:28:09.772737 containerd[1457]: time="2025-02-13T15:28:09.772585861Z" level=info msg="StopPodSandbox for \"77a75dac9c9f8165f19ab22aa953024a5e1c10959673b5e0528dc465b8c7c35e\"" Feb 13 15:28:09.772737 containerd[1457]: time="2025-02-13T15:28:09.772731556Z" level=info msg="TearDown network for sandbox \"77a75dac9c9f8165f19ab22aa953024a5e1c10959673b5e0528dc465b8c7c35e\" successfully" Feb 13 15:28:09.772834 containerd[1457]: time="2025-02-13T15:28:09.772743477Z" level=info msg="StopPodSandbox for \"77a75dac9c9f8165f19ab22aa953024a5e1c10959673b5e0528dc465b8c7c35e\" returns successfully" Feb 13 15:28:09.773650 containerd[1457]: time="2025-02-13T15:28:09.773381344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-ccbb9dcd9-2n9js,Uid:7201f0b7-6fae-4b37-8849-0e2e56956168,Namespace:calico-system,Attempt:5,}" Feb 13 15:28:09.775671 kubelet[2608]: I0213 15:28:09.775639 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89faecaa39a3df7b311017148dcd5f19fc35fa57d662d192cc38be381f1b5304" Feb 13 15:28:09.778309 containerd[1457]: time="2025-02-13T15:28:09.778264295Z" level=info msg="StopPodSandbox for \"89faecaa39a3df7b311017148dcd5f19fc35fa57d662d192cc38be381f1b5304\"" Feb 13 15:28:09.778474 containerd[1457]: time="2025-02-13T15:28:09.778446754Z" level=info msg="Ensure that sandbox 89faecaa39a3df7b311017148dcd5f19fc35fa57d662d192cc38be381f1b5304 in task-service has been cleanup successfully" Feb 13 15:28:09.778848 
containerd[1457]: time="2025-02-13T15:28:09.778818433Z" level=info msg="TearDown network for sandbox \"89faecaa39a3df7b311017148dcd5f19fc35fa57d662d192cc38be381f1b5304\" successfully" Feb 13 15:28:09.778848 containerd[1457]: time="2025-02-13T15:28:09.778844476Z" level=info msg="StopPodSandbox for \"89faecaa39a3df7b311017148dcd5f19fc35fa57d662d192cc38be381f1b5304\" returns successfully" Feb 13 15:28:09.779284 containerd[1457]: time="2025-02-13T15:28:09.779122465Z" level=info msg="StopPodSandbox for \"1fdc9959f235af76b82180f39dc0b3634b4d66128bffd3617d8de10ff163e7fb\"" Feb 13 15:28:09.779284 containerd[1457]: time="2025-02-13T15:28:09.779213274Z" level=info msg="TearDown network for sandbox \"1fdc9959f235af76b82180f39dc0b3634b4d66128bffd3617d8de10ff163e7fb\" successfully" Feb 13 15:28:09.779284 containerd[1457]: time="2025-02-13T15:28:09.779223475Z" level=info msg="StopPodSandbox for \"1fdc9959f235af76b82180f39dc0b3634b4d66128bffd3617d8de10ff163e7fb\" returns successfully" Feb 13 15:28:09.779825 containerd[1457]: time="2025-02-13T15:28:09.779791655Z" level=info msg="StopPodSandbox for \"b2452a586ae430ec9ed425f170afb0d9071b24ad7295b6f6a4477e012c9d521e\"" Feb 13 15:28:09.779920 containerd[1457]: time="2025-02-13T15:28:09.779880624Z" level=info msg="TearDown network for sandbox \"b2452a586ae430ec9ed425f170afb0d9071b24ad7295b6f6a4477e012c9d521e\" successfully" Feb 13 15:28:09.779920 containerd[1457]: time="2025-02-13T15:28:09.779890785Z" level=info msg="StopPodSandbox for \"b2452a586ae430ec9ed425f170afb0d9071b24ad7295b6f6a4477e012c9d521e\" returns successfully" Feb 13 15:28:09.783382 containerd[1457]: time="2025-02-13T15:28:09.782185505Z" level=info msg="StopPodSandbox for \"67348c9f7030a07e08918ec471c01d8d492fb7b37edf0b70d665014a3b6d6d75\"" Feb 13 15:28:09.783382 containerd[1457]: time="2025-02-13T15:28:09.782292276Z" level=info msg="TearDown network for sandbox \"67348c9f7030a07e08918ec471c01d8d492fb7b37edf0b70d665014a3b6d6d75\" successfully" Feb 13 15:28:09.783382 
containerd[1457]: time="2025-02-13T15:28:09.782303037Z" level=info msg="StopPodSandbox for \"67348c9f7030a07e08918ec471c01d8d492fb7b37edf0b70d665014a3b6d6d75\" returns successfully" Feb 13 15:28:09.783382 containerd[1457]: time="2025-02-13T15:28:09.783222214Z" level=info msg="StopPodSandbox for \"0e912db41237de96aedc11e4bdfeaf221d0a4eb32d2c79b569dc586de06677c4\"" Feb 13 15:28:09.783382 containerd[1457]: time="2025-02-13T15:28:09.783323744Z" level=info msg="TearDown network for sandbox \"0e912db41237de96aedc11e4bdfeaf221d0a4eb32d2c79b569dc586de06677c4\" successfully" Feb 13 15:28:09.783382 containerd[1457]: time="2025-02-13T15:28:09.783334585Z" level=info msg="StopPodSandbox for \"0e912db41237de96aedc11e4bdfeaf221d0a4eb32d2c79b569dc586de06677c4\" returns successfully" Feb 13 15:28:09.783664 kubelet[2608]: I0213 15:28:09.782607 2608 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-m55t4" podStartSLOduration=1.5610958369999999 podStartE2EDuration="13.782581747s" podCreationTimestamp="2025-02-13 15:27:56 +0000 UTC" firstStartedPulling="2025-02-13 15:27:56.658414292 +0000 UTC m=+23.305415643" lastFinishedPulling="2025-02-13 15:28:08.879900202 +0000 UTC m=+35.526901553" observedRunningTime="2025-02-13 15:28:09.768073308 +0000 UTC m=+36.415074659" watchObservedRunningTime="2025-02-13 15:28:09.782581747 +0000 UTC m=+36.429583098" Feb 13 15:28:09.783664 kubelet[2608]: E0213 15:28:09.783512 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:09.784285 containerd[1457]: time="2025-02-13T15:28:09.784011696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vhgmw,Uid:859d37ac-44c9-4b92-854b-e6ca0540dbd1,Namespace:kube-system,Attempt:5,}" Feb 13 15:28:09.787222 kubelet[2608]: I0213 15:28:09.787189 2608 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="c5e16b24742bbc2f37dfe06c90ffdbec5343248ca53db0407b2759c26a1559b7" Feb 13 15:28:09.788175 containerd[1457]: time="2025-02-13T15:28:09.788132807Z" level=info msg="StopPodSandbox for \"c5e16b24742bbc2f37dfe06c90ffdbec5343248ca53db0407b2759c26a1559b7\"" Feb 13 15:28:09.789534 containerd[1457]: time="2025-02-13T15:28:09.788746672Z" level=info msg="Ensure that sandbox c5e16b24742bbc2f37dfe06c90ffdbec5343248ca53db0407b2759c26a1559b7 in task-service has been cleanup successfully" Feb 13 15:28:09.789741 containerd[1457]: time="2025-02-13T15:28:09.789713293Z" level=info msg="TearDown network for sandbox \"c5e16b24742bbc2f37dfe06c90ffdbec5343248ca53db0407b2759c26a1559b7\" successfully" Feb 13 15:28:09.789741 containerd[1457]: time="2025-02-13T15:28:09.789735255Z" level=info msg="StopPodSandbox for \"c5e16b24742bbc2f37dfe06c90ffdbec5343248ca53db0407b2759c26a1559b7\" returns successfully" Feb 13 15:28:09.790983 containerd[1457]: time="2025-02-13T15:28:09.790526938Z" level=info msg="StopPodSandbox for \"f4d56aece65a454a6744a6656975af454ed23eaa118376aad704578449124dce\"" Feb 13 15:28:09.790983 containerd[1457]: time="2025-02-13T15:28:09.790654031Z" level=info msg="TearDown network for sandbox \"f4d56aece65a454a6744a6656975af454ed23eaa118376aad704578449124dce\" successfully" Feb 13 15:28:09.790983 containerd[1457]: time="2025-02-13T15:28:09.790667673Z" level=info msg="StopPodSandbox for \"f4d56aece65a454a6744a6656975af454ed23eaa118376aad704578449124dce\" returns successfully" Feb 13 15:28:09.791281 containerd[1457]: time="2025-02-13T15:28:09.791256494Z" level=info msg="StopPodSandbox for \"31240264793868d5f6753603cc9e8db3c437501e3593f5eaf3c12fbc398703a1\"" Feb 13 15:28:09.791364 containerd[1457]: time="2025-02-13T15:28:09.791342743Z" level=info msg="TearDown network for sandbox \"31240264793868d5f6753603cc9e8db3c437501e3593f5eaf3c12fbc398703a1\" successfully" Feb 13 15:28:09.791364 containerd[1457]: time="2025-02-13T15:28:09.791356825Z" level=info 
msg="StopPodSandbox for \"31240264793868d5f6753603cc9e8db3c437501e3593f5eaf3c12fbc398703a1\" returns successfully" Feb 13 15:28:09.793121 kubelet[2608]: I0213 15:28:09.792667 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09ae978d1263afc7237cf637a3169b1af4b9e7342e716b5c0326028e663a24a4" Feb 13 15:28:09.793553 containerd[1457]: time="2025-02-13T15:28:09.793492928Z" level=info msg="StopPodSandbox for \"09ae978d1263afc7237cf637a3169b1af4b9e7342e716b5c0326028e663a24a4\"" Feb 13 15:28:09.794925 containerd[1457]: time="2025-02-13T15:28:09.794732418Z" level=info msg="StopPodSandbox for \"0cdfe4900b8aded4d3dc434c00139a606306819ed296214d867b55f86ba8ee4a\"" Feb 13 15:28:09.796723 containerd[1457]: time="2025-02-13T15:28:09.795085695Z" level=info msg="TearDown network for sandbox \"0cdfe4900b8aded4d3dc434c00139a606306819ed296214d867b55f86ba8ee4a\" successfully" Feb 13 15:28:09.797082 containerd[1457]: time="2025-02-13T15:28:09.796837998Z" level=info msg="StopPodSandbox for \"0cdfe4900b8aded4d3dc434c00139a606306819ed296214d867b55f86ba8ee4a\" returns successfully" Feb 13 15:28:09.797082 containerd[1457]: time="2025-02-13T15:28:09.795273995Z" level=info msg="Ensure that sandbox 09ae978d1263afc7237cf637a3169b1af4b9e7342e716b5c0326028e663a24a4 in task-service has been cleanup successfully" Feb 13 15:28:09.797731 containerd[1457]: time="2025-02-13T15:28:09.797703609Z" level=info msg="TearDown network for sandbox \"09ae978d1263afc7237cf637a3169b1af4b9e7342e716b5c0326028e663a24a4\" successfully" Feb 13 15:28:09.798745 containerd[1457]: time="2025-02-13T15:28:09.798657789Z" level=info msg="StopPodSandbox for \"09ae978d1263afc7237cf637a3169b1af4b9e7342e716b5c0326028e663a24a4\" returns successfully" Feb 13 15:28:09.798832 containerd[1457]: time="2025-02-13T15:28:09.798666390Z" level=info msg="StopPodSandbox for \"400e05d09f7638f6370144bc7c30317522f5d53fca24796970245605aaf907e8\"" Feb 13 15:28:09.798857 containerd[1457]: 
time="2025-02-13T15:28:09.798833727Z" level=info msg="TearDown network for sandbox \"400e05d09f7638f6370144bc7c30317522f5d53fca24796970245605aaf907e8\" successfully" Feb 13 15:28:09.798857 containerd[1457]: time="2025-02-13T15:28:09.798845888Z" level=info msg="StopPodSandbox for \"400e05d09f7638f6370144bc7c30317522f5d53fca24796970245605aaf907e8\" returns successfully" Feb 13 15:28:09.801204 kubelet[2608]: E0213 15:28:09.799400 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:09.801279 containerd[1457]: time="2025-02-13T15:28:09.800328364Z" level=info msg="StopPodSandbox for \"cc21ee81810fc473d761df47ca40dfc07bf603b78f073619580e97c4954181e5\"" Feb 13 15:28:09.801279 containerd[1457]: time="2025-02-13T15:28:09.800400651Z" level=info msg="TearDown network for sandbox \"cc21ee81810fc473d761df47ca40dfc07bf603b78f073619580e97c4954181e5\" successfully" Feb 13 15:28:09.801279 containerd[1457]: time="2025-02-13T15:28:09.800411812Z" level=info msg="StopPodSandbox for \"cc21ee81810fc473d761df47ca40dfc07bf603b78f073619580e97c4954181e5\" returns successfully" Feb 13 15:28:09.801279 containerd[1457]: time="2025-02-13T15:28:09.800505302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-j2fhf,Uid:d26de4e2-c62e-4d8a-96e0-edbb9492094a,Namespace:kube-system,Attempt:5,}" Feb 13 15:28:09.803935 containerd[1457]: time="2025-02-13T15:28:09.803893177Z" level=info msg="StopPodSandbox for \"3f20dd0cef541fa916ded40902978e512025f116521ec7f105b4cd1d798e2080\"" Feb 13 15:28:09.804021 containerd[1457]: time="2025-02-13T15:28:09.803997067Z" level=info msg="TearDown network for sandbox \"3f20dd0cef541fa916ded40902978e512025f116521ec7f105b4cd1d798e2080\" successfully" Feb 13 15:28:09.804021 containerd[1457]: time="2025-02-13T15:28:09.804009629Z" level=info msg="StopPodSandbox for 
\"3f20dd0cef541fa916ded40902978e512025f116521ec7f105b4cd1d798e2080\" returns successfully" Feb 13 15:28:09.804436 containerd[1457]: time="2025-02-13T15:28:09.804374267Z" level=info msg="StopPodSandbox for \"9bde7141846a528b8406457cadcce6da8913cc6f999d44f2fda3999be82f378f\"" Feb 13 15:28:09.805440 containerd[1457]: time="2025-02-13T15:28:09.805399374Z" level=info msg="TearDown network for sandbox \"9bde7141846a528b8406457cadcce6da8913cc6f999d44f2fda3999be82f378f\" successfully" Feb 13 15:28:09.805440 containerd[1457]: time="2025-02-13T15:28:09.805431938Z" level=info msg="StopPodSandbox for \"9bde7141846a528b8406457cadcce6da8913cc6f999d44f2fda3999be82f378f\" returns successfully" Feb 13 15:28:09.806825 containerd[1457]: time="2025-02-13T15:28:09.806786039Z" level=info msg="StopPodSandbox for \"44d4ae110854cc57cbfa55e39401c987d33916faeb2710cc87a549e4d94657b0\"" Feb 13 15:28:09.806913 containerd[1457]: time="2025-02-13T15:28:09.806892250Z" level=info msg="TearDown network for sandbox \"44d4ae110854cc57cbfa55e39401c987d33916faeb2710cc87a549e4d94657b0\" successfully" Feb 13 15:28:09.806913 containerd[1457]: time="2025-02-13T15:28:09.806903812Z" level=info msg="StopPodSandbox for \"44d4ae110854cc57cbfa55e39401c987d33916faeb2710cc87a549e4d94657b0\" returns successfully" Feb 13 15:28:09.807509 containerd[1457]: time="2025-02-13T15:28:09.807462710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b5dbfc55b-kt4sn,Uid:086f2ebd-d6e8-46e2-831d-0f37b85724a2,Namespace:calico-apiserver,Attempt:5,}" Feb 13 15:28:10.268540 systemd[1]: run-netns-cni\x2d5364a609\x2d14f2\x2d78ae\x2d8a54\x2d7e38279f2843.mount: Deactivated successfully. Feb 13 15:28:10.268663 systemd[1]: run-netns-cni\x2df1112e1f\x2d7477\x2db754\x2d25e1\x2db6f522b3445e.mount: Deactivated successfully. Feb 13 15:28:10.268741 systemd[1]: run-netns-cni\x2d64068eb3\x2d0188\x2dafcd\x2dc102\x2d0b4aaf657ea0.mount: Deactivated successfully. 
Feb 13 15:28:10.375860 systemd-networkd[1363]: calib8aa0c34400: Link UP Feb 13 15:28:10.377755 systemd-networkd[1363]: calib8aa0c34400: Gained carrier Feb 13 15:28:10.395480 containerd[1457]: 2025-02-13 15:28:09.926 [INFO][4632] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:28:10.395480 containerd[1457]: 2025-02-13 15:28:09.976 [INFO][4632] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--j2fhf-eth0 coredns-7db6d8ff4d- kube-system d26de4e2-c62e-4d8a-96e0-edbb9492094a 774 0 2025-02-13 15:27:50 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-j2fhf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib8aa0c34400 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="7c31e21b6fb9ff74e454a1b6e1fb4b8d95bd1246b4aa6e7e128c19e554dede47" Namespace="kube-system" Pod="coredns-7db6d8ff4d-j2fhf" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--j2fhf-" Feb 13 15:28:10.395480 containerd[1457]: 2025-02-13 15:28:09.977 [INFO][4632] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7c31e21b6fb9ff74e454a1b6e1fb4b8d95bd1246b4aa6e7e128c19e554dede47" Namespace="kube-system" Pod="coredns-7db6d8ff4d-j2fhf" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--j2fhf-eth0" Feb 13 15:28:10.395480 containerd[1457]: 2025-02-13 15:28:10.303 [INFO][4706] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7c31e21b6fb9ff74e454a1b6e1fb4b8d95bd1246b4aa6e7e128c19e554dede47" HandleID="k8s-pod-network.7c31e21b6fb9ff74e454a1b6e1fb4b8d95bd1246b4aa6e7e128c19e554dede47" Workload="localhost-k8s-coredns--7db6d8ff4d--j2fhf-eth0" Feb 13 15:28:10.395480 containerd[1457]: 2025-02-13 15:28:10.325 [INFO][4706] ipam/ipam_plugin.go 
265: Auto assigning IP ContainerID="7c31e21b6fb9ff74e454a1b6e1fb4b8d95bd1246b4aa6e7e128c19e554dede47" HandleID="k8s-pod-network.7c31e21b6fb9ff74e454a1b6e1fb4b8d95bd1246b4aa6e7e128c19e554dede47" Workload="localhost-k8s-coredns--7db6d8ff4d--j2fhf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000309500), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-j2fhf", "timestamp":"2025-02-13 15:28:10.303679587 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:28:10.395480 containerd[1457]: 2025-02-13 15:28:10.325 [INFO][4706] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:28:10.395480 containerd[1457]: 2025-02-13 15:28:10.325 [INFO][4706] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 15:28:10.395480 containerd[1457]: 2025-02-13 15:28:10.325 [INFO][4706] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 15:28:10.395480 containerd[1457]: 2025-02-13 15:28:10.329 [INFO][4706] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7c31e21b6fb9ff74e454a1b6e1fb4b8d95bd1246b4aa6e7e128c19e554dede47" host="localhost" Feb 13 15:28:10.395480 containerd[1457]: 2025-02-13 15:28:10.343 [INFO][4706] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 15:28:10.395480 containerd[1457]: 2025-02-13 15:28:10.348 [INFO][4706] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 15:28:10.395480 containerd[1457]: 2025-02-13 15:28:10.350 [INFO][4706] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 15:28:10.395480 containerd[1457]: 2025-02-13 15:28:10.352 [INFO][4706] ipam/ipam.go 232: Affinity is confirmed and block has been loaded 
cidr=192.168.88.128/26 host="localhost" Feb 13 15:28:10.395480 containerd[1457]: 2025-02-13 15:28:10.352 [INFO][4706] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7c31e21b6fb9ff74e454a1b6e1fb4b8d95bd1246b4aa6e7e128c19e554dede47" host="localhost" Feb 13 15:28:10.395480 containerd[1457]: 2025-02-13 15:28:10.354 [INFO][4706] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7c31e21b6fb9ff74e454a1b6e1fb4b8d95bd1246b4aa6e7e128c19e554dede47 Feb 13 15:28:10.395480 containerd[1457]: 2025-02-13 15:28:10.358 [INFO][4706] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7c31e21b6fb9ff74e454a1b6e1fb4b8d95bd1246b4aa6e7e128c19e554dede47" host="localhost" Feb 13 15:28:10.395480 containerd[1457]: 2025-02-13 15:28:10.364 [INFO][4706] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.7c31e21b6fb9ff74e454a1b6e1fb4b8d95bd1246b4aa6e7e128c19e554dede47" host="localhost" Feb 13 15:28:10.395480 containerd[1457]: 2025-02-13 15:28:10.364 [INFO][4706] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.7c31e21b6fb9ff74e454a1b6e1fb4b8d95bd1246b4aa6e7e128c19e554dede47" host="localhost" Feb 13 15:28:10.395480 containerd[1457]: 2025-02-13 15:28:10.364 [INFO][4706] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 15:28:10.395480 containerd[1457]: 2025-02-13 15:28:10.364 [INFO][4706] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="7c31e21b6fb9ff74e454a1b6e1fb4b8d95bd1246b4aa6e7e128c19e554dede47" HandleID="k8s-pod-network.7c31e21b6fb9ff74e454a1b6e1fb4b8d95bd1246b4aa6e7e128c19e554dede47" Workload="localhost-k8s-coredns--7db6d8ff4d--j2fhf-eth0" Feb 13 15:28:10.396263 containerd[1457]: 2025-02-13 15:28:10.368 [INFO][4632] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7c31e21b6fb9ff74e454a1b6e1fb4b8d95bd1246b4aa6e7e128c19e554dede47" Namespace="kube-system" Pod="coredns-7db6d8ff4d-j2fhf" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--j2fhf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--j2fhf-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"d26de4e2-c62e-4d8a-96e0-edbb9492094a", ResourceVersion:"774", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 27, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-j2fhf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib8aa0c34400", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:28:10.396263 containerd[1457]: 2025-02-13 15:28:10.368 [INFO][4632] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="7c31e21b6fb9ff74e454a1b6e1fb4b8d95bd1246b4aa6e7e128c19e554dede47" Namespace="kube-system" Pod="coredns-7db6d8ff4d-j2fhf" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--j2fhf-eth0" Feb 13 15:28:10.396263 containerd[1457]: 2025-02-13 15:28:10.368 [INFO][4632] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib8aa0c34400 ContainerID="7c31e21b6fb9ff74e454a1b6e1fb4b8d95bd1246b4aa6e7e128c19e554dede47" Namespace="kube-system" Pod="coredns-7db6d8ff4d-j2fhf" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--j2fhf-eth0" Feb 13 15:28:10.396263 containerd[1457]: 2025-02-13 15:28:10.377 [INFO][4632] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7c31e21b6fb9ff74e454a1b6e1fb4b8d95bd1246b4aa6e7e128c19e554dede47" Namespace="kube-system" Pod="coredns-7db6d8ff4d-j2fhf" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--j2fhf-eth0" Feb 13 15:28:10.396263 containerd[1457]: 2025-02-13 15:28:10.378 [INFO][4632] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7c31e21b6fb9ff74e454a1b6e1fb4b8d95bd1246b4aa6e7e128c19e554dede47" Namespace="kube-system" Pod="coredns-7db6d8ff4d-j2fhf" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--j2fhf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--j2fhf-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"d26de4e2-c62e-4d8a-96e0-edbb9492094a", ResourceVersion:"774", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 27, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7c31e21b6fb9ff74e454a1b6e1fb4b8d95bd1246b4aa6e7e128c19e554dede47", Pod:"coredns-7db6d8ff4d-j2fhf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib8aa0c34400", MAC:"a6:5f:74:80:9d:ac", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:28:10.396263 containerd[1457]: 2025-02-13 15:28:10.391 [INFO][4632] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7c31e21b6fb9ff74e454a1b6e1fb4b8d95bd1246b4aa6e7e128c19e554dede47" Namespace="kube-system" 
Pod="coredns-7db6d8ff4d-j2fhf" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--j2fhf-eth0" Feb 13 15:28:10.414542 systemd-networkd[1363]: calidcc4ae10e88: Link UP Feb 13 15:28:10.414783 systemd-networkd[1363]: calidcc4ae10e88: Gained carrier Feb 13 15:28:10.430132 containerd[1457]: 2025-02-13 15:28:09.913 [INFO][4621] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:28:10.430132 containerd[1457]: 2025-02-13 15:28:09.963 [INFO][4621] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--59ns4-eth0 csi-node-driver- calico-system 8c67a7be-7144-4f11-b45e-f04dfd3de75c 657 0 2025-02-13 15:27:56 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-59ns4 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calidcc4ae10e88 [] []}} ContainerID="613643cac43be36411647a3692a44855d83f01e2de28c8345db617fc5b0bf87b" Namespace="calico-system" Pod="csi-node-driver-59ns4" WorkloadEndpoint="localhost-k8s-csi--node--driver--59ns4-" Feb 13 15:28:10.430132 containerd[1457]: 2025-02-13 15:28:09.963 [INFO][4621] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="613643cac43be36411647a3692a44855d83f01e2de28c8345db617fc5b0bf87b" Namespace="calico-system" Pod="csi-node-driver-59ns4" WorkloadEndpoint="localhost-k8s-csi--node--driver--59ns4-eth0" Feb 13 15:28:10.430132 containerd[1457]: 2025-02-13 15:28:10.302 [INFO][4704] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="613643cac43be36411647a3692a44855d83f01e2de28c8345db617fc5b0bf87b" HandleID="k8s-pod-network.613643cac43be36411647a3692a44855d83f01e2de28c8345db617fc5b0bf87b" 
Workload="localhost-k8s-csi--node--driver--59ns4-eth0" Feb 13 15:28:10.430132 containerd[1457]: 2025-02-13 15:28:10.326 [INFO][4704] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="613643cac43be36411647a3692a44855d83f01e2de28c8345db617fc5b0bf87b" HandleID="k8s-pod-network.613643cac43be36411647a3692a44855d83f01e2de28c8345db617fc5b0bf87b" Workload="localhost-k8s-csi--node--driver--59ns4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003ac800), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-59ns4", "timestamp":"2025-02-13 15:28:10.302368254 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:28:10.430132 containerd[1457]: 2025-02-13 15:28:10.326 [INFO][4704] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:28:10.430132 containerd[1457]: 2025-02-13 15:28:10.364 [INFO][4704] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 15:28:10.430132 containerd[1457]: 2025-02-13 15:28:10.365 [INFO][4704] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 15:28:10.430132 containerd[1457]: 2025-02-13 15:28:10.367 [INFO][4704] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.613643cac43be36411647a3692a44855d83f01e2de28c8345db617fc5b0bf87b" host="localhost" Feb 13 15:28:10.430132 containerd[1457]: 2025-02-13 15:28:10.371 [INFO][4704] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 15:28:10.430132 containerd[1457]: 2025-02-13 15:28:10.379 [INFO][4704] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 15:28:10.430132 containerd[1457]: 2025-02-13 15:28:10.381 [INFO][4704] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 15:28:10.430132 containerd[1457]: 2025-02-13 15:28:10.385 [INFO][4704] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 15:28:10.430132 containerd[1457]: 2025-02-13 15:28:10.385 [INFO][4704] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.613643cac43be36411647a3692a44855d83f01e2de28c8345db617fc5b0bf87b" host="localhost" Feb 13 15:28:10.430132 containerd[1457]: 2025-02-13 15:28:10.392 [INFO][4704] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.613643cac43be36411647a3692a44855d83f01e2de28c8345db617fc5b0bf87b Feb 13 15:28:10.430132 containerd[1457]: 2025-02-13 15:28:10.399 [INFO][4704] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.613643cac43be36411647a3692a44855d83f01e2de28c8345db617fc5b0bf87b" host="localhost" Feb 13 15:28:10.430132 containerd[1457]: 2025-02-13 15:28:10.407 [INFO][4704] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.613643cac43be36411647a3692a44855d83f01e2de28c8345db617fc5b0bf87b" host="localhost" Feb 13 15:28:10.430132 containerd[1457]: 2025-02-13 15:28:10.407 [INFO][4704] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.613643cac43be36411647a3692a44855d83f01e2de28c8345db617fc5b0bf87b" host="localhost" Feb 13 15:28:10.430132 containerd[1457]: 2025-02-13 15:28:10.407 [INFO][4704] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 15:28:10.430132 containerd[1457]: 2025-02-13 15:28:10.407 [INFO][4704] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="613643cac43be36411647a3692a44855d83f01e2de28c8345db617fc5b0bf87b" HandleID="k8s-pod-network.613643cac43be36411647a3692a44855d83f01e2de28c8345db617fc5b0bf87b" Workload="localhost-k8s-csi--node--driver--59ns4-eth0" Feb 13 15:28:10.430752 containerd[1457]: 2025-02-13 15:28:10.410 [INFO][4621] cni-plugin/k8s.go 386: Populated endpoint ContainerID="613643cac43be36411647a3692a44855d83f01e2de28c8345db617fc5b0bf87b" Namespace="calico-system" Pod="csi-node-driver-59ns4" WorkloadEndpoint="localhost-k8s-csi--node--driver--59ns4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--59ns4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8c67a7be-7144-4f11-b45e-f04dfd3de75c", ResourceVersion:"657", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 27, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-59ns4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidcc4ae10e88", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:28:10.430752 containerd[1457]: 2025-02-13 15:28:10.411 [INFO][4621] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="613643cac43be36411647a3692a44855d83f01e2de28c8345db617fc5b0bf87b" Namespace="calico-system" Pod="csi-node-driver-59ns4" WorkloadEndpoint="localhost-k8s-csi--node--driver--59ns4-eth0" Feb 13 15:28:10.430752 containerd[1457]: 2025-02-13 15:28:10.411 [INFO][4621] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidcc4ae10e88 ContainerID="613643cac43be36411647a3692a44855d83f01e2de28c8345db617fc5b0bf87b" Namespace="calico-system" Pod="csi-node-driver-59ns4" WorkloadEndpoint="localhost-k8s-csi--node--driver--59ns4-eth0" Feb 13 15:28:10.430752 containerd[1457]: 2025-02-13 15:28:10.415 [INFO][4621] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="613643cac43be36411647a3692a44855d83f01e2de28c8345db617fc5b0bf87b" Namespace="calico-system" Pod="csi-node-driver-59ns4" WorkloadEndpoint="localhost-k8s-csi--node--driver--59ns4-eth0" Feb 13 15:28:10.430752 containerd[1457]: 2025-02-13 15:28:10.416 [INFO][4621] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="613643cac43be36411647a3692a44855d83f01e2de28c8345db617fc5b0bf87b" Namespace="calico-system" 
Pod="csi-node-driver-59ns4" WorkloadEndpoint="localhost-k8s-csi--node--driver--59ns4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--59ns4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8c67a7be-7144-4f11-b45e-f04dfd3de75c", ResourceVersion:"657", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 27, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"613643cac43be36411647a3692a44855d83f01e2de28c8345db617fc5b0bf87b", Pod:"csi-node-driver-59ns4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidcc4ae10e88", MAC:"02:3d:a8:a6:16:e8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:28:10.430752 containerd[1457]: 2025-02-13 15:28:10.427 [INFO][4621] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="613643cac43be36411647a3692a44855d83f01e2de28c8345db617fc5b0bf87b" Namespace="calico-system" Pod="csi-node-driver-59ns4" WorkloadEndpoint="localhost-k8s-csi--node--driver--59ns4-eth0" Feb 13 15:28:10.434800 containerd[1457]: 
time="2025-02-13T15:28:10.434588668Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:28:10.434800 containerd[1457]: time="2025-02-13T15:28:10.434675036Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:28:10.434800 containerd[1457]: time="2025-02-13T15:28:10.434686517Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:28:10.435015 containerd[1457]: time="2025-02-13T15:28:10.434820171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:28:10.460636 systemd-networkd[1363]: cali84609cf0854: Link UP Feb 13 15:28:10.461258 systemd-networkd[1363]: cali84609cf0854: Gained carrier Feb 13 15:28:10.461821 systemd[1]: Started cri-containerd-7c31e21b6fb9ff74e454a1b6e1fb4b8d95bd1246b4aa6e7e128c19e554dede47.scope - libcontainer container 7c31e21b6fb9ff74e454a1b6e1fb4b8d95bd1246b4aa6e7e128c19e554dede47. Feb 13 15:28:10.465249 containerd[1457]: time="2025-02-13T15:28:10.463901161Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:28:10.465249 containerd[1457]: time="2025-02-13T15:28:10.463955927Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:28:10.465249 containerd[1457]: time="2025-02-13T15:28:10.463967648Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:28:10.465249 containerd[1457]: time="2025-02-13T15:28:10.464038935Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:28:10.480010 containerd[1457]: 2025-02-13 15:28:09.913 [INFO][4649] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:28:10.480010 containerd[1457]: 2025-02-13 15:28:09.964 [INFO][4649] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--vhgmw-eth0 coredns-7db6d8ff4d- kube-system 859d37ac-44c9-4b92-854b-e6ca0540dbd1 772 0 2025-02-13 15:27:50 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-vhgmw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali84609cf0854 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="b3c7aeae2f1ac1fa35088628910f633d5b535757d66a31038744c186337084ff" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vhgmw" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--vhgmw-" Feb 13 15:28:10.480010 containerd[1457]: 2025-02-13 15:28:09.965 [INFO][4649] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b3c7aeae2f1ac1fa35088628910f633d5b535757d66a31038744c186337084ff" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vhgmw" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--vhgmw-eth0" Feb 13 15:28:10.480010 containerd[1457]: 2025-02-13 15:28:10.303 [INFO][4702] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b3c7aeae2f1ac1fa35088628910f633d5b535757d66a31038744c186337084ff" HandleID="k8s-pod-network.b3c7aeae2f1ac1fa35088628910f633d5b535757d66a31038744c186337084ff" Workload="localhost-k8s-coredns--7db6d8ff4d--vhgmw-eth0" Feb 13 15:28:10.480010 containerd[1457]: 2025-02-13 15:28:10.328 [INFO][4702] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="b3c7aeae2f1ac1fa35088628910f633d5b535757d66a31038744c186337084ff" HandleID="k8s-pod-network.b3c7aeae2f1ac1fa35088628910f633d5b535757d66a31038744c186337084ff" Workload="localhost-k8s-coredns--7db6d8ff4d--vhgmw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002f2940), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-vhgmw", "timestamp":"2025-02-13 15:28:10.302991957 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:28:10.480010 containerd[1457]: 2025-02-13 15:28:10.330 [INFO][4702] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:28:10.480010 containerd[1457]: 2025-02-13 15:28:10.407 [INFO][4702] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 15:28:10.480010 containerd[1457]: 2025-02-13 15:28:10.407 [INFO][4702] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 15:28:10.480010 containerd[1457]: 2025-02-13 15:28:10.410 [INFO][4702] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b3c7aeae2f1ac1fa35088628910f633d5b535757d66a31038744c186337084ff" host="localhost" Feb 13 15:28:10.480010 containerd[1457]: 2025-02-13 15:28:10.418 [INFO][4702] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 15:28:10.480010 containerd[1457]: 2025-02-13 15:28:10.424 [INFO][4702] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 15:28:10.480010 containerd[1457]: 2025-02-13 15:28:10.428 [INFO][4702] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 15:28:10.480010 containerd[1457]: 2025-02-13 15:28:10.431 [INFO][4702] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 
host="localhost" Feb 13 15:28:10.480010 containerd[1457]: 2025-02-13 15:28:10.431 [INFO][4702] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b3c7aeae2f1ac1fa35088628910f633d5b535757d66a31038744c186337084ff" host="localhost" Feb 13 15:28:10.480010 containerd[1457]: 2025-02-13 15:28:10.435 [INFO][4702] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b3c7aeae2f1ac1fa35088628910f633d5b535757d66a31038744c186337084ff Feb 13 15:28:10.480010 containerd[1457]: 2025-02-13 15:28:10.446 [INFO][4702] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b3c7aeae2f1ac1fa35088628910f633d5b535757d66a31038744c186337084ff" host="localhost" Feb 13 15:28:10.480010 containerd[1457]: 2025-02-13 15:28:10.452 [INFO][4702] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.b3c7aeae2f1ac1fa35088628910f633d5b535757d66a31038744c186337084ff" host="localhost" Feb 13 15:28:10.480010 containerd[1457]: 2025-02-13 15:28:10.453 [INFO][4702] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.b3c7aeae2f1ac1fa35088628910f633d5b535757d66a31038744c186337084ff" host="localhost" Feb 13 15:28:10.480010 containerd[1457]: 2025-02-13 15:28:10.453 [INFO][4702] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 15:28:10.480010 containerd[1457]: 2025-02-13 15:28:10.453 [INFO][4702] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="b3c7aeae2f1ac1fa35088628910f633d5b535757d66a31038744c186337084ff" HandleID="k8s-pod-network.b3c7aeae2f1ac1fa35088628910f633d5b535757d66a31038744c186337084ff" Workload="localhost-k8s-coredns--7db6d8ff4d--vhgmw-eth0" Feb 13 15:28:10.481528 containerd[1457]: 2025-02-13 15:28:10.457 [INFO][4649] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b3c7aeae2f1ac1fa35088628910f633d5b535757d66a31038744c186337084ff" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vhgmw" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--vhgmw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--vhgmw-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"859d37ac-44c9-4b92-854b-e6ca0540dbd1", ResourceVersion:"772", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 27, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-vhgmw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali84609cf0854", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:28:10.481528 containerd[1457]: 2025-02-13 15:28:10.457 [INFO][4649] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="b3c7aeae2f1ac1fa35088628910f633d5b535757d66a31038744c186337084ff" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vhgmw" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--vhgmw-eth0" Feb 13 15:28:10.481528 containerd[1457]: 2025-02-13 15:28:10.457 [INFO][4649] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali84609cf0854 ContainerID="b3c7aeae2f1ac1fa35088628910f633d5b535757d66a31038744c186337084ff" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vhgmw" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--vhgmw-eth0" Feb 13 15:28:10.481528 containerd[1457]: 2025-02-13 15:28:10.460 [INFO][4649] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b3c7aeae2f1ac1fa35088628910f633d5b535757d66a31038744c186337084ff" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vhgmw" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--vhgmw-eth0" Feb 13 15:28:10.481528 containerd[1457]: 2025-02-13 15:28:10.462 [INFO][4649] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b3c7aeae2f1ac1fa35088628910f633d5b535757d66a31038744c186337084ff" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vhgmw" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--vhgmw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--vhgmw-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"859d37ac-44c9-4b92-854b-e6ca0540dbd1", ResourceVersion:"772", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 27, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b3c7aeae2f1ac1fa35088628910f633d5b535757d66a31038744c186337084ff", Pod:"coredns-7db6d8ff4d-vhgmw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali84609cf0854", MAC:"76:87:5b:62:aa:85", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:28:10.481528 containerd[1457]: 2025-02-13 15:28:10.477 [INFO][4649] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b3c7aeae2f1ac1fa35088628910f633d5b535757d66a31038744c186337084ff" Namespace="kube-system" 
Pod="coredns-7db6d8ff4d-vhgmw" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--vhgmw-eth0" Feb 13 15:28:10.493095 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:28:10.499838 systemd[1]: Started cri-containerd-613643cac43be36411647a3692a44855d83f01e2de28c8345db617fc5b0bf87b.scope - libcontainer container 613643cac43be36411647a3692a44855d83f01e2de28c8345db617fc5b0bf87b. Feb 13 15:28:10.518246 systemd-networkd[1363]: cali89d407282e2: Link UP Feb 13 15:28:10.518570 systemd-networkd[1363]: cali89d407282e2: Gained carrier Feb 13 15:28:10.527654 containerd[1457]: time="2025-02-13T15:28:10.526732495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-j2fhf,Uid:d26de4e2-c62e-4d8a-96e0-edbb9492094a,Namespace:kube-system,Attempt:5,} returns sandbox id \"7c31e21b6fb9ff74e454a1b6e1fb4b8d95bd1246b4aa6e7e128c19e554dede47\"" Feb 13 15:28:10.529555 kubelet[2608]: E0213 15:28:10.529507 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:10.533357 containerd[1457]: time="2025-02-13T15:28:10.532135564Z" level=info msg="CreateContainer within sandbox \"7c31e21b6fb9ff74e454a1b6e1fb4b8d95bd1246b4aa6e7e128c19e554dede47\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:28:10.545752 containerd[1457]: 2025-02-13 15:28:09.991 [INFO][4656] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:28:10.545752 containerd[1457]: 2025-02-13 15:28:10.017 [INFO][4656] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5b5dbfc55b--kt4sn-eth0 calico-apiserver-5b5dbfc55b- calico-apiserver 086f2ebd-d6e8-46e2-831d-0f37b85724a2 777 0 2025-02-13 15:27:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver 
k8s-app:calico-apiserver pod-template-hash:5b5dbfc55b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5b5dbfc55b-kt4sn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali89d407282e2 [] []}} ContainerID="f44a1d1ecf86d1c7a8df93aa8129276428556add7d7643e26ce6e3b233f87ceb" Namespace="calico-apiserver" Pod="calico-apiserver-5b5dbfc55b-kt4sn" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b5dbfc55b--kt4sn-" Feb 13 15:28:10.545752 containerd[1457]: 2025-02-13 15:28:10.017 [INFO][4656] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f44a1d1ecf86d1c7a8df93aa8129276428556add7d7643e26ce6e3b233f87ceb" Namespace="calico-apiserver" Pod="calico-apiserver-5b5dbfc55b-kt4sn" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b5dbfc55b--kt4sn-eth0" Feb 13 15:28:10.545752 containerd[1457]: 2025-02-13 15:28:10.302 [INFO][4725] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f44a1d1ecf86d1c7a8df93aa8129276428556add7d7643e26ce6e3b233f87ceb" HandleID="k8s-pod-network.f44a1d1ecf86d1c7a8df93aa8129276428556add7d7643e26ce6e3b233f87ceb" Workload="localhost-k8s-calico--apiserver--5b5dbfc55b--kt4sn-eth0" Feb 13 15:28:10.545752 containerd[1457]: 2025-02-13 15:28:10.331 [INFO][4725] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f44a1d1ecf86d1c7a8df93aa8129276428556add7d7643e26ce6e3b233f87ceb" HandleID="k8s-pod-network.f44a1d1ecf86d1c7a8df93aa8129276428556add7d7643e26ce6e3b233f87ceb" Workload="localhost-k8s-calico--apiserver--5b5dbfc55b--kt4sn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000400080), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5b5dbfc55b-kt4sn", "timestamp":"2025-02-13 15:28:10.302384575 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:28:10.545752 containerd[1457]: 2025-02-13 15:28:10.331 [INFO][4725] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:28:10.545752 containerd[1457]: 2025-02-13 15:28:10.453 [INFO][4725] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 15:28:10.545752 containerd[1457]: 2025-02-13 15:28:10.453 [INFO][4725] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 15:28:10.545752 containerd[1457]: 2025-02-13 15:28:10.456 [INFO][4725] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f44a1d1ecf86d1c7a8df93aa8129276428556add7d7643e26ce6e3b233f87ceb" host="localhost" Feb 13 15:28:10.545752 containerd[1457]: 2025-02-13 15:28:10.465 [INFO][4725] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 15:28:10.545752 containerd[1457]: 2025-02-13 15:28:10.481 [INFO][4725] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 15:28:10.545752 containerd[1457]: 2025-02-13 15:28:10.484 [INFO][4725] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 15:28:10.545752 containerd[1457]: 2025-02-13 15:28:10.488 [INFO][4725] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 15:28:10.545752 containerd[1457]: 2025-02-13 15:28:10.488 [INFO][4725] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f44a1d1ecf86d1c7a8df93aa8129276428556add7d7643e26ce6e3b233f87ceb" host="localhost" Feb 13 15:28:10.545752 containerd[1457]: 2025-02-13 15:28:10.491 [INFO][4725] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f44a1d1ecf86d1c7a8df93aa8129276428556add7d7643e26ce6e3b233f87ceb Feb 13 15:28:10.545752 
containerd[1457]: 2025-02-13 15:28:10.501 [INFO][4725] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f44a1d1ecf86d1c7a8df93aa8129276428556add7d7643e26ce6e3b233f87ceb" host="localhost" Feb 13 15:28:10.545752 containerd[1457]: 2025-02-13 15:28:10.510 [INFO][4725] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.f44a1d1ecf86d1c7a8df93aa8129276428556add7d7643e26ce6e3b233f87ceb" host="localhost" Feb 13 15:28:10.545752 containerd[1457]: 2025-02-13 15:28:10.511 [INFO][4725] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.f44a1d1ecf86d1c7a8df93aa8129276428556add7d7643e26ce6e3b233f87ceb" host="localhost" Feb 13 15:28:10.545752 containerd[1457]: 2025-02-13 15:28:10.511 [INFO][4725] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 15:28:10.545752 containerd[1457]: 2025-02-13 15:28:10.511 [INFO][4725] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="f44a1d1ecf86d1c7a8df93aa8129276428556add7d7643e26ce6e3b233f87ceb" HandleID="k8s-pod-network.f44a1d1ecf86d1c7a8df93aa8129276428556add7d7643e26ce6e3b233f87ceb" Workload="localhost-k8s-calico--apiserver--5b5dbfc55b--kt4sn-eth0" Feb 13 15:28:10.546300 containerd[1457]: 2025-02-13 15:28:10.515 [INFO][4656] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f44a1d1ecf86d1c7a8df93aa8129276428556add7d7643e26ce6e3b233f87ceb" Namespace="calico-apiserver" Pod="calico-apiserver-5b5dbfc55b-kt4sn" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b5dbfc55b--kt4sn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5b5dbfc55b--kt4sn-eth0", GenerateName:"calico-apiserver-5b5dbfc55b-", Namespace:"calico-apiserver", SelfLink:"", UID:"086f2ebd-d6e8-46e2-831d-0f37b85724a2", 
ResourceVersion:"777", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 27, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b5dbfc55b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5b5dbfc55b-kt4sn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali89d407282e2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:28:10.546300 containerd[1457]: 2025-02-13 15:28:10.515 [INFO][4656] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="f44a1d1ecf86d1c7a8df93aa8129276428556add7d7643e26ce6e3b233f87ceb" Namespace="calico-apiserver" Pod="calico-apiserver-5b5dbfc55b-kt4sn" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b5dbfc55b--kt4sn-eth0" Feb 13 15:28:10.546300 containerd[1457]: 2025-02-13 15:28:10.515 [INFO][4656] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali89d407282e2 ContainerID="f44a1d1ecf86d1c7a8df93aa8129276428556add7d7643e26ce6e3b233f87ceb" Namespace="calico-apiserver" Pod="calico-apiserver-5b5dbfc55b-kt4sn" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b5dbfc55b--kt4sn-eth0" Feb 13 15:28:10.546300 containerd[1457]: 2025-02-13 15:28:10.518 [INFO][4656] cni-plugin/dataplane_linux.go 508: 
Disabling IPv4 forwarding ContainerID="f44a1d1ecf86d1c7a8df93aa8129276428556add7d7643e26ce6e3b233f87ceb" Namespace="calico-apiserver" Pod="calico-apiserver-5b5dbfc55b-kt4sn" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b5dbfc55b--kt4sn-eth0" Feb 13 15:28:10.546300 containerd[1457]: 2025-02-13 15:28:10.518 [INFO][4656] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f44a1d1ecf86d1c7a8df93aa8129276428556add7d7643e26ce6e3b233f87ceb" Namespace="calico-apiserver" Pod="calico-apiserver-5b5dbfc55b-kt4sn" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b5dbfc55b--kt4sn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5b5dbfc55b--kt4sn-eth0", GenerateName:"calico-apiserver-5b5dbfc55b-", Namespace:"calico-apiserver", SelfLink:"", UID:"086f2ebd-d6e8-46e2-831d-0f37b85724a2", ResourceVersion:"777", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 27, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b5dbfc55b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f44a1d1ecf86d1c7a8df93aa8129276428556add7d7643e26ce6e3b233f87ceb", Pod:"calico-apiserver-5b5dbfc55b-kt4sn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", 
"ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali89d407282e2", MAC:"d2:e0:c6:c5:f5:51", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:28:10.546300 containerd[1457]: 2025-02-13 15:28:10.538 [INFO][4656] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f44a1d1ecf86d1c7a8df93aa8129276428556add7d7643e26ce6e3b233f87ceb" Namespace="calico-apiserver" Pod="calico-apiserver-5b5dbfc55b-kt4sn" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b5dbfc55b--kt4sn-eth0" Feb 13 15:28:10.555841 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:28:10.556773 containerd[1457]: time="2025-02-13T15:28:10.532213652Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:28:10.556773 containerd[1457]: time="2025-02-13T15:28:10.555655350Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:28:10.556773 containerd[1457]: time="2025-02-13T15:28:10.555682272Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:28:10.569970 containerd[1457]: time="2025-02-13T15:28:10.566274747Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:28:10.591235 containerd[1457]: time="2025-02-13T15:28:10.591146270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-59ns4,Uid:8c67a7be-7144-4f11-b45e-f04dfd3de75c,Namespace:calico-system,Attempt:5,} returns sandbox id \"613643cac43be36411647a3692a44855d83f01e2de28c8345db617fc5b0bf87b\"" Feb 13 15:28:10.595610 containerd[1457]: time="2025-02-13T15:28:10.595570439Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 15:28:10.613980 systemd[1]: Started cri-containerd-b3c7aeae2f1ac1fa35088628910f633d5b535757d66a31038744c186337084ff.scope - libcontainer container b3c7aeae2f1ac1fa35088628910f633d5b535757d66a31038744c186337084ff. Feb 13 15:28:10.628757 systemd-networkd[1363]: calidef72dec309: Link UP Feb 13 15:28:10.628999 systemd-networkd[1363]: calidef72dec309: Gained carrier Feb 13 15:28:10.637442 containerd[1457]: time="2025-02-13T15:28:10.636809743Z" level=info msg="CreateContainer within sandbox \"7c31e21b6fb9ff74e454a1b6e1fb4b8d95bd1246b4aa6e7e128c19e554dede47\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fbc73b750d49da0fa12812722c7d63ac01e2d4f1e9412416f48e989f83f49618\"" Feb 13 15:28:10.638455 containerd[1457]: time="2025-02-13T15:28:10.638364340Z" level=info msg="StartContainer for \"fbc73b750d49da0fa12812722c7d63ac01e2d4f1e9412416f48e989f83f49618\"" Feb 13 15:28:10.639524 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:28:10.641449 containerd[1457]: time="2025-02-13T15:28:10.640724740Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:28:10.641449 containerd[1457]: time="2025-02-13T15:28:10.640864514Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:28:10.641449 containerd[1457]: time="2025-02-13T15:28:10.640882156Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:28:10.641449 containerd[1457]: time="2025-02-13T15:28:10.641268355Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:28:10.664878 containerd[1457]: 2025-02-13 15:28:09.905 [INFO][4602] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:28:10.664878 containerd[1457]: 2025-02-13 15:28:09.967 [INFO][4602] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--ccbb9dcd9--2n9js-eth0 calico-kube-controllers-ccbb9dcd9- calico-system 7201f0b7-6fae-4b37-8849-0e2e56956168 775 0 2025-02-13 15:27:56 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:ccbb9dcd9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-ccbb9dcd9-2n9js eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calidef72dec309 [] []}} ContainerID="91e6a1fdf76ae26f3fa426c9a5ba6dd5cad034b7b1553aabe9b5bfc81aa561bb" Namespace="calico-system" Pod="calico-kube-controllers-ccbb9dcd9-2n9js" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--ccbb9dcd9--2n9js-" Feb 13 15:28:10.664878 containerd[1457]: 2025-02-13 15:28:09.967 [INFO][4602] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="91e6a1fdf76ae26f3fa426c9a5ba6dd5cad034b7b1553aabe9b5bfc81aa561bb" Namespace="calico-system" Pod="calico-kube-controllers-ccbb9dcd9-2n9js" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--ccbb9dcd9--2n9js-eth0" Feb 13 15:28:10.664878 containerd[1457]: 2025-02-13 15:28:10.303 [INFO][4705] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="91e6a1fdf76ae26f3fa426c9a5ba6dd5cad034b7b1553aabe9b5bfc81aa561bb" HandleID="k8s-pod-network.91e6a1fdf76ae26f3fa426c9a5ba6dd5cad034b7b1553aabe9b5bfc81aa561bb" Workload="localhost-k8s-calico--kube--controllers--ccbb9dcd9--2n9js-eth0" Feb 13 15:28:10.664878 containerd[1457]: 2025-02-13 15:28:10.335 [INFO][4705] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="91e6a1fdf76ae26f3fa426c9a5ba6dd5cad034b7b1553aabe9b5bfc81aa561bb" HandleID="k8s-pod-network.91e6a1fdf76ae26f3fa426c9a5ba6dd5cad034b7b1553aabe9b5bfc81aa561bb" Workload="localhost-k8s-calico--kube--controllers--ccbb9dcd9--2n9js-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000312260), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-ccbb9dcd9-2n9js", "timestamp":"2025-02-13 15:28:10.303930652 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:28:10.664878 containerd[1457]: 2025-02-13 15:28:10.335 [INFO][4705] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:28:10.664878 containerd[1457]: 2025-02-13 15:28:10.511 [INFO][4705] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 15:28:10.664878 containerd[1457]: 2025-02-13 15:28:10.511 [INFO][4705] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 15:28:10.664878 containerd[1457]: 2025-02-13 15:28:10.538 [INFO][4705] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.91e6a1fdf76ae26f3fa426c9a5ba6dd5cad034b7b1553aabe9b5bfc81aa561bb" host="localhost" Feb 13 15:28:10.664878 containerd[1457]: 2025-02-13 15:28:10.571 [INFO][4705] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 15:28:10.664878 containerd[1457]: 2025-02-13 15:28:10.586 [INFO][4705] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 15:28:10.664878 containerd[1457]: 2025-02-13 15:28:10.589 [INFO][4705] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 15:28:10.664878 containerd[1457]: 2025-02-13 15:28:10.594 [INFO][4705] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 15:28:10.664878 containerd[1457]: 2025-02-13 15:28:10.594 [INFO][4705] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.91e6a1fdf76ae26f3fa426c9a5ba6dd5cad034b7b1553aabe9b5bfc81aa561bb" host="localhost" Feb 13 15:28:10.664878 containerd[1457]: 2025-02-13 15:28:10.597 [INFO][4705] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.91e6a1fdf76ae26f3fa426c9a5ba6dd5cad034b7b1553aabe9b5bfc81aa561bb Feb 13 15:28:10.664878 containerd[1457]: 2025-02-13 15:28:10.603 [INFO][4705] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.91e6a1fdf76ae26f3fa426c9a5ba6dd5cad034b7b1553aabe9b5bfc81aa561bb" host="localhost" Feb 13 15:28:10.664878 containerd[1457]: 2025-02-13 15:28:10.615 [INFO][4705] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.91e6a1fdf76ae26f3fa426c9a5ba6dd5cad034b7b1553aabe9b5bfc81aa561bb" host="localhost" Feb 13 15:28:10.664878 containerd[1457]: 2025-02-13 15:28:10.615 [INFO][4705] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.91e6a1fdf76ae26f3fa426c9a5ba6dd5cad034b7b1553aabe9b5bfc81aa561bb" host="localhost" Feb 13 15:28:10.664878 containerd[1457]: 2025-02-13 15:28:10.615 [INFO][4705] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 15:28:10.664878 containerd[1457]: 2025-02-13 15:28:10.615 [INFO][4705] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="91e6a1fdf76ae26f3fa426c9a5ba6dd5cad034b7b1553aabe9b5bfc81aa561bb" HandleID="k8s-pod-network.91e6a1fdf76ae26f3fa426c9a5ba6dd5cad034b7b1553aabe9b5bfc81aa561bb" Workload="localhost-k8s-calico--kube--controllers--ccbb9dcd9--2n9js-eth0" Feb 13 15:28:10.665465 containerd[1457]: 2025-02-13 15:28:10.622 [INFO][4602] cni-plugin/k8s.go 386: Populated endpoint ContainerID="91e6a1fdf76ae26f3fa426c9a5ba6dd5cad034b7b1553aabe9b5bfc81aa561bb" Namespace="calico-system" Pod="calico-kube-controllers-ccbb9dcd9-2n9js" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--ccbb9dcd9--2n9js-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--ccbb9dcd9--2n9js-eth0", GenerateName:"calico-kube-controllers-ccbb9dcd9-", Namespace:"calico-system", SelfLink:"", UID:"7201f0b7-6fae-4b37-8849-0e2e56956168", ResourceVersion:"775", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 27, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"ccbb9dcd9", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-ccbb9dcd9-2n9js", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidef72dec309", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:28:10.665465 containerd[1457]: 2025-02-13 15:28:10.622 [INFO][4602] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="91e6a1fdf76ae26f3fa426c9a5ba6dd5cad034b7b1553aabe9b5bfc81aa561bb" Namespace="calico-system" Pod="calico-kube-controllers-ccbb9dcd9-2n9js" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--ccbb9dcd9--2n9js-eth0" Feb 13 15:28:10.665465 containerd[1457]: 2025-02-13 15:28:10.622 [INFO][4602] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidef72dec309 ContainerID="91e6a1fdf76ae26f3fa426c9a5ba6dd5cad034b7b1553aabe9b5bfc81aa561bb" Namespace="calico-system" Pod="calico-kube-controllers-ccbb9dcd9-2n9js" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--ccbb9dcd9--2n9js-eth0" Feb 13 15:28:10.665465 containerd[1457]: 2025-02-13 15:28:10.631 [INFO][4602] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="91e6a1fdf76ae26f3fa426c9a5ba6dd5cad034b7b1553aabe9b5bfc81aa561bb" Namespace="calico-system" Pod="calico-kube-controllers-ccbb9dcd9-2n9js" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--ccbb9dcd9--2n9js-eth0" Feb 13 15:28:10.665465 containerd[1457]: 2025-02-13 15:28:10.632 [INFO][4602] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="91e6a1fdf76ae26f3fa426c9a5ba6dd5cad034b7b1553aabe9b5bfc81aa561bb" Namespace="calico-system" Pod="calico-kube-controllers-ccbb9dcd9-2n9js" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--ccbb9dcd9--2n9js-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--ccbb9dcd9--2n9js-eth0", GenerateName:"calico-kube-controllers-ccbb9dcd9-", Namespace:"calico-system", SelfLink:"", UID:"7201f0b7-6fae-4b37-8849-0e2e56956168", ResourceVersion:"775", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 27, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"ccbb9dcd9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"91e6a1fdf76ae26f3fa426c9a5ba6dd5cad034b7b1553aabe9b5bfc81aa561bb", Pod:"calico-kube-controllers-ccbb9dcd9-2n9js", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidef72dec309", MAC:"ba:59:0f:ed:df:7f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:28:10.665465 containerd[1457]: 2025-02-13 15:28:10.646 [INFO][4602] cni-plugin/k8s.go 500: Wrote updated 
endpoint to datastore ContainerID="91e6a1fdf76ae26f3fa426c9a5ba6dd5cad034b7b1553aabe9b5bfc81aa561bb" Namespace="calico-system" Pod="calico-kube-controllers-ccbb9dcd9-2n9js" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--ccbb9dcd9--2n9js-eth0" Feb 13 15:28:10.686868 systemd[1]: Started cri-containerd-f44a1d1ecf86d1c7a8df93aa8129276428556add7d7643e26ce6e3b233f87ceb.scope - libcontainer container f44a1d1ecf86d1c7a8df93aa8129276428556add7d7643e26ce6e3b233f87ceb. Feb 13 15:28:10.704904 systemd-networkd[1363]: cali9dc4c847929: Link UP Feb 13 15:28:10.705238 systemd-networkd[1363]: cali9dc4c847929: Gained carrier Feb 13 15:28:10.722076 containerd[1457]: time="2025-02-13T15:28:10.720777301Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:28:10.722076 containerd[1457]: time="2025-02-13T15:28:10.720859950Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:28:10.722076 containerd[1457]: time="2025-02-13T15:28:10.720877031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:28:10.722076 containerd[1457]: time="2025-02-13T15:28:10.720979282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:28:10.726531 containerd[1457]: time="2025-02-13T15:28:10.726478200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vhgmw,Uid:859d37ac-44c9-4b92-854b-e6ca0540dbd1,Namespace:kube-system,Attempt:5,} returns sandbox id \"b3c7aeae2f1ac1fa35088628910f633d5b535757d66a31038744c186337084ff\"" Feb 13 15:28:10.727436 kubelet[2608]: E0213 15:28:10.727402 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:10.730852 containerd[1457]: 2025-02-13 15:28:09.811 [INFO][4588] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:28:10.730852 containerd[1457]: 2025-02-13 15:28:09.962 [INFO][4588] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5b5dbfc55b--xsmtx-eth0 calico-apiserver-5b5dbfc55b- calico-apiserver 2ce64576-1ac8-4271-89c9-a8de4b77d706 773 0 2025-02-13 15:27:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5b5dbfc55b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5b5dbfc55b-xsmtx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9dc4c847929 [] []}} ContainerID="33fce3bf3a4540710c9d09f976d616d2015ff9f26ce23cc119ccf8ba55627501" Namespace="calico-apiserver" Pod="calico-apiserver-5b5dbfc55b-xsmtx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b5dbfc55b--xsmtx-" Feb 13 15:28:10.730852 containerd[1457]: 2025-02-13 15:28:09.963 [INFO][4588] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="33fce3bf3a4540710c9d09f976d616d2015ff9f26ce23cc119ccf8ba55627501" 
Namespace="calico-apiserver" Pod="calico-apiserver-5b5dbfc55b-xsmtx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b5dbfc55b--xsmtx-eth0" Feb 13 15:28:10.730852 containerd[1457]: 2025-02-13 15:28:10.306 [INFO][4703] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="33fce3bf3a4540710c9d09f976d616d2015ff9f26ce23cc119ccf8ba55627501" HandleID="k8s-pod-network.33fce3bf3a4540710c9d09f976d616d2015ff9f26ce23cc119ccf8ba55627501" Workload="localhost-k8s-calico--apiserver--5b5dbfc55b--xsmtx-eth0" Feb 13 15:28:10.730852 containerd[1457]: 2025-02-13 15:28:10.335 [INFO][4703] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="33fce3bf3a4540710c9d09f976d616d2015ff9f26ce23cc119ccf8ba55627501" HandleID="k8s-pod-network.33fce3bf3a4540710c9d09f976d616d2015ff9f26ce23cc119ccf8ba55627501" Workload="localhost-k8s-calico--apiserver--5b5dbfc55b--xsmtx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004dca0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5b5dbfc55b-xsmtx", "timestamp":"2025-02-13 15:28:10.306740097 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:28:10.730852 containerd[1457]: 2025-02-13 15:28:10.335 [INFO][4703] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:28:10.730852 containerd[1457]: 2025-02-13 15:28:10.615 [INFO][4703] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 15:28:10.730852 containerd[1457]: 2025-02-13 15:28:10.616 [INFO][4703] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 15:28:10.730852 containerd[1457]: 2025-02-13 15:28:10.619 [INFO][4703] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.33fce3bf3a4540710c9d09f976d616d2015ff9f26ce23cc119ccf8ba55627501" host="localhost" Feb 13 15:28:10.730852 containerd[1457]: 2025-02-13 15:28:10.644 [INFO][4703] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 15:28:10.730852 containerd[1457]: 2025-02-13 15:28:10.653 [INFO][4703] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 15:28:10.730852 containerd[1457]: 2025-02-13 15:28:10.656 [INFO][4703] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 15:28:10.730852 containerd[1457]: 2025-02-13 15:28:10.663 [INFO][4703] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 15:28:10.730852 containerd[1457]: 2025-02-13 15:28:10.663 [INFO][4703] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.33fce3bf3a4540710c9d09f976d616d2015ff9f26ce23cc119ccf8ba55627501" host="localhost" Feb 13 15:28:10.730852 containerd[1457]: 2025-02-13 15:28:10.674 [INFO][4703] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.33fce3bf3a4540710c9d09f976d616d2015ff9f26ce23cc119ccf8ba55627501 Feb 13 15:28:10.730852 containerd[1457]: 2025-02-13 15:28:10.681 [INFO][4703] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.33fce3bf3a4540710c9d09f976d616d2015ff9f26ce23cc119ccf8ba55627501" host="localhost" Feb 13 15:28:10.730852 containerd[1457]: 2025-02-13 15:28:10.693 [INFO][4703] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.33fce3bf3a4540710c9d09f976d616d2015ff9f26ce23cc119ccf8ba55627501" host="localhost" Feb 13 15:28:10.730852 containerd[1457]: 2025-02-13 15:28:10.693 [INFO][4703] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.33fce3bf3a4540710c9d09f976d616d2015ff9f26ce23cc119ccf8ba55627501" host="localhost" Feb 13 15:28:10.730852 containerd[1457]: 2025-02-13 15:28:10.693 [INFO][4703] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 15:28:10.730852 containerd[1457]: 2025-02-13 15:28:10.693 [INFO][4703] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="33fce3bf3a4540710c9d09f976d616d2015ff9f26ce23cc119ccf8ba55627501" HandleID="k8s-pod-network.33fce3bf3a4540710c9d09f976d616d2015ff9f26ce23cc119ccf8ba55627501" Workload="localhost-k8s-calico--apiserver--5b5dbfc55b--xsmtx-eth0" Feb 13 15:28:10.731662 containerd[1457]: 2025-02-13 15:28:10.701 [INFO][4588] cni-plugin/k8s.go 386: Populated endpoint ContainerID="33fce3bf3a4540710c9d09f976d616d2015ff9f26ce23cc119ccf8ba55627501" Namespace="calico-apiserver" Pod="calico-apiserver-5b5dbfc55b-xsmtx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b5dbfc55b--xsmtx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5b5dbfc55b--xsmtx-eth0", GenerateName:"calico-apiserver-5b5dbfc55b-", Namespace:"calico-apiserver", SelfLink:"", UID:"2ce64576-1ac8-4271-89c9-a8de4b77d706", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 27, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b5dbfc55b", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5b5dbfc55b-xsmtx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9dc4c847929", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:28:10.731662 containerd[1457]: 2025-02-13 15:28:10.702 [INFO][4588] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="33fce3bf3a4540710c9d09f976d616d2015ff9f26ce23cc119ccf8ba55627501" Namespace="calico-apiserver" Pod="calico-apiserver-5b5dbfc55b-xsmtx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b5dbfc55b--xsmtx-eth0" Feb 13 15:28:10.731662 containerd[1457]: 2025-02-13 15:28:10.702 [INFO][4588] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9dc4c847929 ContainerID="33fce3bf3a4540710c9d09f976d616d2015ff9f26ce23cc119ccf8ba55627501" Namespace="calico-apiserver" Pod="calico-apiserver-5b5dbfc55b-xsmtx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b5dbfc55b--xsmtx-eth0" Feb 13 15:28:10.731662 containerd[1457]: 2025-02-13 15:28:10.705 [INFO][4588] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="33fce3bf3a4540710c9d09f976d616d2015ff9f26ce23cc119ccf8ba55627501" Namespace="calico-apiserver" Pod="calico-apiserver-5b5dbfc55b-xsmtx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b5dbfc55b--xsmtx-eth0" Feb 13 15:28:10.731662 containerd[1457]: 2025-02-13 15:28:10.705 [INFO][4588] cni-plugin/k8s.go 414: Added Mac, interface name, and active 
container ID to endpoint ContainerID="33fce3bf3a4540710c9d09f976d616d2015ff9f26ce23cc119ccf8ba55627501" Namespace="calico-apiserver" Pod="calico-apiserver-5b5dbfc55b-xsmtx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b5dbfc55b--xsmtx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5b5dbfc55b--xsmtx-eth0", GenerateName:"calico-apiserver-5b5dbfc55b-", Namespace:"calico-apiserver", SelfLink:"", UID:"2ce64576-1ac8-4271-89c9-a8de4b77d706", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 27, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b5dbfc55b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"33fce3bf3a4540710c9d09f976d616d2015ff9f26ce23cc119ccf8ba55627501", Pod:"calico-apiserver-5b5dbfc55b-xsmtx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9dc4c847929", MAC:"5a:9c:29:1e:f9:64", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:28:10.731662 containerd[1457]: 2025-02-13 15:28:10.720 [INFO][4588] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="33fce3bf3a4540710c9d09f976d616d2015ff9f26ce23cc119ccf8ba55627501" Namespace="calico-apiserver" Pod="calico-apiserver-5b5dbfc55b-xsmtx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b5dbfc55b--xsmtx-eth0" Feb 13 15:28:10.731662 containerd[1457]: time="2025-02-13T15:28:10.731093428Z" level=info msg="CreateContainer within sandbox \"b3c7aeae2f1ac1fa35088628910f633d5b535757d66a31038744c186337084ff\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:28:10.751918 systemd[1]: Started cri-containerd-91e6a1fdf76ae26f3fa426c9a5ba6dd5cad034b7b1553aabe9b5bfc81aa561bb.scope - libcontainer container 91e6a1fdf76ae26f3fa426c9a5ba6dd5cad034b7b1553aabe9b5bfc81aa561bb. Feb 13 15:28:10.770152 systemd[1]: Started cri-containerd-fbc73b750d49da0fa12812722c7d63ac01e2d4f1e9412416f48e989f83f49618.scope - libcontainer container fbc73b750d49da0fa12812722c7d63ac01e2d4f1e9412416f48e989f83f49618. Feb 13 15:28:10.777082 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:28:10.785261 containerd[1457]: time="2025-02-13T15:28:10.785031460Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:28:10.785261 containerd[1457]: time="2025-02-13T15:28:10.785097227Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:28:10.785261 containerd[1457]: time="2025-02-13T15:28:10.785112348Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:28:10.788675 containerd[1457]: time="2025-02-13T15:28:10.787302090Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:28:10.802524 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:28:10.830405 kubelet[2608]: E0213 15:28:10.830353 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:10.835016 systemd[1]: Started cri-containerd-33fce3bf3a4540710c9d09f976d616d2015ff9f26ce23cc119ccf8ba55627501.scope - libcontainer container 33fce3bf3a4540710c9d09f976d616d2015ff9f26ce23cc119ccf8ba55627501. Feb 13 15:28:10.859833 containerd[1457]: time="2025-02-13T15:28:10.857707673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-ccbb9dcd9-2n9js,Uid:7201f0b7-6fae-4b37-8849-0e2e56956168,Namespace:calico-system,Attempt:5,} returns sandbox id \"91e6a1fdf76ae26f3fa426c9a5ba6dd5cad034b7b1553aabe9b5bfc81aa561bb\"" Feb 13 15:28:10.865782 containerd[1457]: time="2025-02-13T15:28:10.865742328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b5dbfc55b-kt4sn,Uid:086f2ebd-d6e8-46e2-831d-0f37b85724a2,Namespace:calico-apiserver,Attempt:5,} returns sandbox id \"f44a1d1ecf86d1c7a8df93aa8129276428556add7d7643e26ce6e3b233f87ceb\"" Feb 13 15:28:10.875065 containerd[1457]: time="2025-02-13T15:28:10.875025550Z" level=info msg="CreateContainer within sandbox \"b3c7aeae2f1ac1fa35088628910f633d5b535757d66a31038744c186337084ff\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"205ed02f1d3929d6f544325972026a9d0035451fb0528d3eba3f003bce626edb\"" Feb 13 15:28:10.876659 containerd[1457]: time="2025-02-13T15:28:10.875231491Z" level=info msg="StartContainer for \"fbc73b750d49da0fa12812722c7d63ac01e2d4f1e9412416f48e989f83f49618\" returns successfully" Feb 13 15:28:10.878636 containerd[1457]: time="2025-02-13T15:28:10.878064658Z" level=info msg="StartContainer for 
\"205ed02f1d3929d6f544325972026a9d0035451fb0528d3eba3f003bce626edb\"" Feb 13 15:28:10.884704 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:28:10.916076 containerd[1457]: time="2025-02-13T15:28:10.916035270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b5dbfc55b-xsmtx,Uid:2ce64576-1ac8-4271-89c9-a8de4b77d706,Namespace:calico-apiserver,Attempt:5,} returns sandbox id \"33fce3bf3a4540710c9d09f976d616d2015ff9f26ce23cc119ccf8ba55627501\"" Feb 13 15:28:10.940856 systemd[1]: Started cri-containerd-205ed02f1d3929d6f544325972026a9d0035451fb0528d3eba3f003bce626edb.scope - libcontainer container 205ed02f1d3929d6f544325972026a9d0035451fb0528d3eba3f003bce626edb. Feb 13 15:28:10.988429 containerd[1457]: time="2025-02-13T15:28:10.988376129Z" level=info msg="StartContainer for \"205ed02f1d3929d6f544325972026a9d0035451fb0528d3eba3f003bce626edb\" returns successfully" Feb 13 15:28:11.592789 systemd-networkd[1363]: cali89d407282e2: Gained IPv6LL Feb 13 15:28:11.656768 systemd-networkd[1363]: calidcc4ae10e88: Gained IPv6LL Feb 13 15:28:11.853820 kubelet[2608]: E0213 15:28:11.851683 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:11.865591 kubelet[2608]: E0213 15:28:11.863702 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:11.873526 kubelet[2608]: E0213 15:28:11.873067 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:11.888526 kubelet[2608]: I0213 15:28:11.888172 2608 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/coredns-7db6d8ff4d-vhgmw" podStartSLOduration=21.88815372 podStartE2EDuration="21.88815372s" podCreationTimestamp="2025-02-13 15:27:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:28:11.866800698 +0000 UTC m=+38.513802049" watchObservedRunningTime="2025-02-13 15:28:11.88815372 +0000 UTC m=+38.535155071" Feb 13 15:28:11.888526 kubelet[2608]: I0213 15:28:11.888339 2608 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-j2fhf" podStartSLOduration=21.888331298 podStartE2EDuration="21.888331298s" podCreationTimestamp="2025-02-13 15:27:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:28:11.888322217 +0000 UTC m=+38.535323688" watchObservedRunningTime="2025-02-13 15:28:11.888331298 +0000 UTC m=+38.535332649" Feb 13 15:28:11.922740 containerd[1457]: time="2025-02-13T15:28:11.922684040Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:11.924724 containerd[1457]: time="2025-02-13T15:28:11.924668396Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Feb 13 15:28:11.928046 containerd[1457]: time="2025-02-13T15:28:11.928002804Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:11.933340 containerd[1457]: time="2025-02-13T15:28:11.933282004Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest 
\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 1.337480862s" Feb 13 15:28:11.933340 containerd[1457]: time="2025-02-13T15:28:11.933331329Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Feb 13 15:28:11.935590 containerd[1457]: time="2025-02-13T15:28:11.935544747Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:11.937212 containerd[1457]: time="2025-02-13T15:28:11.937090539Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Feb 13 15:28:11.940937 containerd[1457]: time="2025-02-13T15:28:11.940894913Z" level=info msg="CreateContainer within sandbox \"613643cac43be36411647a3692a44855d83f01e2de28c8345db617fc5b0bf87b\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 15:28:11.969238 containerd[1457]: time="2025-02-13T15:28:11.969190459Z" level=info msg="CreateContainer within sandbox \"613643cac43be36411647a3692a44855d83f01e2de28c8345db617fc5b0bf87b\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"846115a85c210da23ae65dcf3e921cfab3a9f397c953bd4d0b64062527015375\"" Feb 13 15:28:11.969946 containerd[1457]: time="2025-02-13T15:28:11.969914211Z" level=info msg="StartContainer for \"846115a85c210da23ae65dcf3e921cfab3a9f397c953bd4d0b64062527015375\"" Feb 13 15:28:12.025187 systemd[1]: Started cri-containerd-846115a85c210da23ae65dcf3e921cfab3a9f397c953bd4d0b64062527015375.scope - libcontainer container 846115a85c210da23ae65dcf3e921cfab3a9f397c953bd4d0b64062527015375. 
Feb 13 15:28:12.078077 containerd[1457]: time="2025-02-13T15:28:12.078030319Z" level=info msg="StartContainer for \"846115a85c210da23ae65dcf3e921cfab3a9f397c953bd4d0b64062527015375\" returns successfully" Feb 13 15:28:12.104774 systemd-networkd[1363]: cali84609cf0854: Gained IPv6LL Feb 13 15:28:12.250759 systemd[1]: run-containerd-runc-k8s.io-846115a85c210da23ae65dcf3e921cfab3a9f397c953bd4d0b64062527015375-runc.PpxD55.mount: Deactivated successfully. Feb 13 15:28:12.296754 systemd-networkd[1363]: calidef72dec309: Gained IPv6LL Feb 13 15:28:12.361393 systemd-networkd[1363]: cali9dc4c847929: Gained IPv6LL Feb 13 15:28:12.424811 systemd-networkd[1363]: calib8aa0c34400: Gained IPv6LL Feb 13 15:28:12.877475 kubelet[2608]: E0213 15:28:12.877433 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:12.878074 kubelet[2608]: E0213 15:28:12.877959 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:13.755086 systemd[1]: Started sshd@9-10.0.0.91:22-10.0.0.1:46988.service - OpenSSH per-connection server daemon (10.0.0.1:46988). 
Feb 13 15:28:13.850974 containerd[1457]: time="2025-02-13T15:28:13.850907680Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:13.851956 containerd[1457]: time="2025-02-13T15:28:13.851605905Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828" Feb 13 15:28:13.852435 containerd[1457]: time="2025-02-13T15:28:13.852387977Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:13.855071 containerd[1457]: time="2025-02-13T15:28:13.855030023Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:13.856388 containerd[1457]: time="2025-02-13T15:28:13.855748930Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 1.918603826s" Feb 13 15:28:13.856388 containerd[1457]: time="2025-02-13T15:28:13.855782173Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Feb 13 15:28:13.858121 containerd[1457]: time="2025-02-13T15:28:13.857581341Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 15:28:13.870747 containerd[1457]: time="2025-02-13T15:28:13.870657797Z" level=info msg="CreateContainer within sandbox 
\"91e6a1fdf76ae26f3fa426c9a5ba6dd5cad034b7b1553aabe9b5bfc81aa561bb\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Feb 13 15:28:13.875328 sshd[5391]: Accepted publickey for core from 10.0.0.1 port 46988 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:28:13.877189 sshd-session[5391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:28:13.880157 kubelet[2608]: E0213 15:28:13.880112 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:13.888828 systemd-logind[1422]: New session 10 of user core. Feb 13 15:28:13.891053 containerd[1457]: time="2025-02-13T15:28:13.891014131Z" level=info msg="CreateContainer within sandbox \"91e6a1fdf76ae26f3fa426c9a5ba6dd5cad034b7b1553aabe9b5bfc81aa561bb\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"d2dbccabb01cc974921c43d3973ecbbfca369473c540fbbe077afcf8e8544b02\"" Feb 13 15:28:13.891735 containerd[1457]: time="2025-02-13T15:28:13.891526779Z" level=info msg="StartContainer for \"d2dbccabb01cc974921c43d3973ecbbfca369473c540fbbe077afcf8e8544b02\"" Feb 13 15:28:13.897853 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 15:28:13.935851 systemd[1]: Started cri-containerd-d2dbccabb01cc974921c43d3973ecbbfca369473c540fbbe077afcf8e8544b02.scope - libcontainer container d2dbccabb01cc974921c43d3973ecbbfca369473c540fbbe077afcf8e8544b02. 
Feb 13 15:28:14.003030 containerd[1457]: time="2025-02-13T15:28:14.002977822Z" level=info msg="StartContainer for \"d2dbccabb01cc974921c43d3973ecbbfca369473c540fbbe077afcf8e8544b02\" returns successfully" Feb 13 15:28:14.082049 sshd[5411]: Connection closed by 10.0.0.1 port 46988 Feb 13 15:28:14.084434 sshd-session[5391]: pam_unix(sshd:session): session closed for user core Feb 13 15:28:14.091869 systemd[1]: sshd@9-10.0.0.91:22-10.0.0.1:46988.service: Deactivated successfully. Feb 13 15:28:14.095532 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 15:28:14.096673 systemd-logind[1422]: Session 10 logged out. Waiting for processes to exit. Feb 13 15:28:14.105972 systemd[1]: Started sshd@10-10.0.0.91:22-10.0.0.1:47000.service - OpenSSH per-connection server daemon (10.0.0.1:47000). Feb 13 15:28:14.107374 systemd-logind[1422]: Removed session 10. Feb 13 15:28:14.152573 sshd[5459]: Accepted publickey for core from 10.0.0.1 port 47000 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:28:14.154377 sshd-session[5459]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:28:14.162403 systemd-logind[1422]: New session 11 of user core. Feb 13 15:28:14.173844 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 15:28:14.362325 sshd[5470]: Connection closed by 10.0.0.1 port 47000 Feb 13 15:28:14.362718 sshd-session[5459]: pam_unix(sshd:session): session closed for user core Feb 13 15:28:14.378804 systemd[1]: sshd@10-10.0.0.91:22-10.0.0.1:47000.service: Deactivated successfully. Feb 13 15:28:14.385120 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 15:28:14.390084 systemd-logind[1422]: Session 11 logged out. Waiting for processes to exit. Feb 13 15:28:14.400001 systemd[1]: Started sshd@11-10.0.0.91:22-10.0.0.1:47008.service - OpenSSH per-connection server daemon (10.0.0.1:47008). Feb 13 15:28:14.401001 systemd-logind[1422]: Removed session 11. 
Feb 13 15:28:14.439974 sshd[5480]: Accepted publickey for core from 10.0.0.1 port 47008 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:28:14.441541 sshd-session[5480]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:28:14.445892 systemd-logind[1422]: New session 12 of user core. Feb 13 15:28:14.451805 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 15:28:14.628581 sshd[5482]: Connection closed by 10.0.0.1 port 47008 Feb 13 15:28:14.629150 sshd-session[5480]: pam_unix(sshd:session): session closed for user core Feb 13 15:28:14.633528 systemd[1]: sshd@11-10.0.0.91:22-10.0.0.1:47008.service: Deactivated successfully. Feb 13 15:28:14.635419 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 15:28:14.637164 systemd-logind[1422]: Session 12 logged out. Waiting for processes to exit. Feb 13 15:28:14.638586 systemd-logind[1422]: Removed session 12. Feb 13 15:28:14.899732 kubelet[2608]: I0213 15:28:14.898398 2608 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-ccbb9dcd9-2n9js" podStartSLOduration=15.902645106 podStartE2EDuration="18.89838188s" podCreationTimestamp="2025-02-13 15:27:56 +0000 UTC" firstStartedPulling="2025-02-13 15:28:10.861139621 +0000 UTC m=+37.508140972" lastFinishedPulling="2025-02-13 15:28:13.856876395 +0000 UTC m=+40.503877746" observedRunningTime="2025-02-13 15:28:14.898064691 +0000 UTC m=+41.545066042" watchObservedRunningTime="2025-02-13 15:28:14.89838188 +0000 UTC m=+41.545383231" Feb 13 15:28:15.402514 kubelet[2608]: I0213 15:28:15.402455 2608 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 15:28:15.403323 kubelet[2608]: E0213 15:28:15.403300 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:15.768669 containerd[1457]: 
time="2025-02-13T15:28:15.768532481Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:15.769021 containerd[1457]: time="2025-02-13T15:28:15.768953718Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Feb 13 15:28:15.769913 containerd[1457]: time="2025-02-13T15:28:15.769866679Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:15.771967 containerd[1457]: time="2025-02-13T15:28:15.771920180Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:15.773042 containerd[1457]: time="2025-02-13T15:28:15.772910427Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 1.915292763s" Feb 13 15:28:15.773042 containerd[1457]: time="2025-02-13T15:28:15.772949151Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Feb 13 15:28:15.774601 containerd[1457]: time="2025-02-13T15:28:15.774550732Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 15:28:15.775464 containerd[1457]: time="2025-02-13T15:28:15.775422969Z" level=info msg="CreateContainer within sandbox \"f44a1d1ecf86d1c7a8df93aa8129276428556add7d7643e26ce6e3b233f87ceb\" for container 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 15:28:15.788138 containerd[1457]: time="2025-02-13T15:28:15.787993959Z" level=info msg="CreateContainer within sandbox \"f44a1d1ecf86d1c7a8df93aa8129276428556add7d7643e26ce6e3b233f87ceb\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"03abf711d970a18a7880cb9f88e77440003106b46cc2ea82073d0b3169eb99de\"" Feb 13 15:28:15.788931 containerd[1457]: time="2025-02-13T15:28:15.788884837Z" level=info msg="StartContainer for \"03abf711d970a18a7880cb9f88e77440003106b46cc2ea82073d0b3169eb99de\"" Feb 13 15:28:15.816855 systemd[1]: Started cri-containerd-03abf711d970a18a7880cb9f88e77440003106b46cc2ea82073d0b3169eb99de.scope - libcontainer container 03abf711d970a18a7880cb9f88e77440003106b46cc2ea82073d0b3169eb99de. Feb 13 15:28:15.851061 containerd[1457]: time="2025-02-13T15:28:15.851015361Z" level=info msg="StartContainer for \"03abf711d970a18a7880cb9f88e77440003106b46cc2ea82073d0b3169eb99de\" returns successfully" Feb 13 15:28:15.893096 kubelet[2608]: E0213 15:28:15.893050 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:15.910730 kubelet[2608]: I0213 15:28:15.910409 2608 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5b5dbfc55b-kt4sn" podStartSLOduration=15.003879102 podStartE2EDuration="19.910390842s" podCreationTimestamp="2025-02-13 15:27:56 +0000 UTC" firstStartedPulling="2025-02-13 15:28:10.867371413 +0000 UTC m=+37.514372764" lastFinishedPulling="2025-02-13 15:28:15.773883153 +0000 UTC m=+42.420884504" observedRunningTime="2025-02-13 15:28:15.907573833 +0000 UTC m=+42.554575184" watchObservedRunningTime="2025-02-13 15:28:15.910390842 +0000 UTC m=+42.557392153" Feb 13 15:28:16.261755 kernel: bpftool[5642]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 
15:28:16.456327 systemd-networkd[1363]: vxlan.calico: Link UP Feb 13 15:28:16.456334 systemd-networkd[1363]: vxlan.calico: Gained carrier Feb 13 15:28:16.487844 containerd[1457]: time="2025-02-13T15:28:16.487782472Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:16.489698 containerd[1457]: time="2025-02-13T15:28:16.489266520Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Feb 13 15:28:16.491405 containerd[1457]: time="2025-02-13T15:28:16.491350739Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 716.758204ms" Feb 13 15:28:16.491466 containerd[1457]: time="2025-02-13T15:28:16.491404824Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Feb 13 15:28:16.493239 containerd[1457]: time="2025-02-13T15:28:16.493183657Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 13 15:28:16.498969 containerd[1457]: time="2025-02-13T15:28:16.498920911Z" level=info msg="CreateContainer within sandbox \"33fce3bf3a4540710c9d09f976d616d2015ff9f26ce23cc119ccf8ba55627501\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 15:28:16.527398 containerd[1457]: time="2025-02-13T15:28:16.527265392Z" level=info msg="CreateContainer within sandbox \"33fce3bf3a4540710c9d09f976d616d2015ff9f26ce23cc119ccf8ba55627501\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id 
\"967214b7bbdc4dd40d4ab10033e0c0fa1359812fdae12b9b455bced2a5f3d98c\"" Feb 13 15:28:16.528157 containerd[1457]: time="2025-02-13T15:28:16.528118625Z" level=info msg="StartContainer for \"967214b7bbdc4dd40d4ab10033e0c0fa1359812fdae12b9b455bced2a5f3d98c\"" Feb 13 15:28:16.564872 systemd[1]: Started cri-containerd-967214b7bbdc4dd40d4ab10033e0c0fa1359812fdae12b9b455bced2a5f3d98c.scope - libcontainer container 967214b7bbdc4dd40d4ab10033e0c0fa1359812fdae12b9b455bced2a5f3d98c. Feb 13 15:28:16.605020 containerd[1457]: time="2025-02-13T15:28:16.604902236Z" level=info msg="StartContainer for \"967214b7bbdc4dd40d4ab10033e0c0fa1359812fdae12b9b455bced2a5f3d98c\" returns successfully" Feb 13 15:28:16.899435 kubelet[2608]: I0213 15:28:16.899399 2608 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 15:28:16.911423 kubelet[2608]: I0213 15:28:16.911343 2608 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5b5dbfc55b-xsmtx" podStartSLOduration=15.336292843 podStartE2EDuration="20.911319618s" podCreationTimestamp="2025-02-13 15:27:56 +0000 UTC" firstStartedPulling="2025-02-13 15:28:10.917938423 +0000 UTC m=+37.564939734" lastFinishedPulling="2025-02-13 15:28:16.492965158 +0000 UTC m=+43.139966509" observedRunningTime="2025-02-13 15:28:16.911008072 +0000 UTC m=+43.558009423" watchObservedRunningTime="2025-02-13 15:28:16.911319618 +0000 UTC m=+43.558320969" Feb 13 15:28:17.822036 containerd[1457]: time="2025-02-13T15:28:17.821986762Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:17.825936 containerd[1457]: time="2025-02-13T15:28:17.825682913Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Feb 13 15:28:17.828380 containerd[1457]: time="2025-02-13T15:28:17.828024150Z" level=info msg="ImageCreate 
event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:17.831346 containerd[1457]: time="2025-02-13T15:28:17.831292384Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:17.832500 containerd[1457]: time="2025-02-13T15:28:17.832359154Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.339121812s" Feb 13 15:28:17.832500 containerd[1457]: time="2025-02-13T15:28:17.832406518Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Feb 13 15:28:17.835018 containerd[1457]: time="2025-02-13T15:28:17.834967413Z" level=info msg="CreateContainer within sandbox \"613643cac43be36411647a3692a44855d83f01e2de28c8345db617fc5b0bf87b\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 13 15:28:17.903016 kubelet[2608]: I0213 15:28:17.902969 2608 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 15:28:17.904431 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1958859367.mount: Deactivated successfully. 
Feb 13 15:28:17.913500 containerd[1457]: time="2025-02-13T15:28:17.913331801Z" level=info msg="CreateContainer within sandbox \"613643cac43be36411647a3692a44855d83f01e2de28c8345db617fc5b0bf87b\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"24e8d6fc63178f4e4aa1ead16507443d733bea7d99871f13288342edbac6faa1\"" Feb 13 15:28:17.914008 containerd[1457]: time="2025-02-13T15:28:17.913976656Z" level=info msg="StartContainer for \"24e8d6fc63178f4e4aa1ead16507443d733bea7d99871f13288342edbac6faa1\"" Feb 13 15:28:17.961846 systemd[1]: Started cri-containerd-24e8d6fc63178f4e4aa1ead16507443d733bea7d99871f13288342edbac6faa1.scope - libcontainer container 24e8d6fc63178f4e4aa1ead16507443d733bea7d99871f13288342edbac6faa1. Feb 13 15:28:18.021232 containerd[1457]: time="2025-02-13T15:28:18.021059981Z" level=info msg="StartContainer for \"24e8d6fc63178f4e4aa1ead16507443d733bea7d99871f13288342edbac6faa1\" returns successfully" Feb 13 15:28:18.313533 systemd-networkd[1363]: vxlan.calico: Gained IPv6LL Feb 13 15:28:18.504748 kubelet[2608]: I0213 15:28:18.504440 2608 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 13 15:28:18.513594 kubelet[2608]: I0213 15:28:18.513562 2608 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 13 15:28:18.946099 kubelet[2608]: I0213 15:28:18.945948 2608 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-59ns4" podStartSLOduration=15.707724973 podStartE2EDuration="22.945924736s" podCreationTimestamp="2025-02-13 15:27:56 +0000 UTC" firstStartedPulling="2025-02-13 15:28:10.595071108 +0000 UTC m=+37.242072459" lastFinishedPulling="2025-02-13 15:28:17.833270871 +0000 UTC m=+44.480272222" observedRunningTime="2025-02-13 15:28:18.936471039 +0000 UTC 
m=+45.583472390" watchObservedRunningTime="2025-02-13 15:28:18.945924736 +0000 UTC m=+45.592926087" Feb 13 15:28:19.644432 systemd[1]: Started sshd@12-10.0.0.91:22-10.0.0.1:47018.service - OpenSSH per-connection server daemon (10.0.0.1:47018). Feb 13 15:28:19.740819 sshd[5816]: Accepted publickey for core from 10.0.0.1 port 47018 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:28:19.742747 sshd-session[5816]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:28:19.749477 systemd-logind[1422]: New session 13 of user core. Feb 13 15:28:19.774983 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 15:28:19.969515 sshd[5818]: Connection closed by 10.0.0.1 port 47018 Feb 13 15:28:19.970341 sshd-session[5816]: pam_unix(sshd:session): session closed for user core Feb 13 15:28:19.974772 systemd[1]: sshd@12-10.0.0.91:22-10.0.0.1:47018.service: Deactivated successfully. Feb 13 15:28:19.977145 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 15:28:19.978148 systemd-logind[1422]: Session 13 logged out. Waiting for processes to exit. Feb 13 15:28:19.979215 systemd-logind[1422]: Removed session 13. Feb 13 15:28:24.986101 systemd[1]: Started sshd@13-10.0.0.91:22-10.0.0.1:49236.service - OpenSSH per-connection server daemon (10.0.0.1:49236). Feb 13 15:28:25.047418 sshd[5844]: Accepted publickey for core from 10.0.0.1 port 49236 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:28:25.049042 sshd-session[5844]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:28:25.055748 systemd-logind[1422]: New session 14 of user core. Feb 13 15:28:25.063938 systemd[1]: Started session-14.scope - Session 14 of User core. 
Feb 13 15:28:25.225657 sshd[5846]: Connection closed by 10.0.0.1 port 49236 Feb 13 15:28:25.226247 sshd-session[5844]: pam_unix(sshd:session): session closed for user core Feb 13 15:28:25.235465 systemd[1]: sshd@13-10.0.0.91:22-10.0.0.1:49236.service: Deactivated successfully. Feb 13 15:28:25.238456 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 15:28:25.241148 systemd-logind[1422]: Session 14 logged out. Waiting for processes to exit. Feb 13 15:28:25.260031 systemd[1]: Started sshd@14-10.0.0.91:22-10.0.0.1:49252.service - OpenSSH per-connection server daemon (10.0.0.1:49252). Feb 13 15:28:25.260614 systemd-logind[1422]: Removed session 14. Feb 13 15:28:25.301958 sshd[5859]: Accepted publickey for core from 10.0.0.1 port 49252 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:28:25.303499 sshd-session[5859]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:28:25.307681 systemd-logind[1422]: New session 15 of user core. Feb 13 15:28:25.319867 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 15:28:25.557496 sshd[5861]: Connection closed by 10.0.0.1 port 49252 Feb 13 15:28:25.558490 sshd-session[5859]: pam_unix(sshd:session): session closed for user core Feb 13 15:28:25.566331 systemd[1]: sshd@14-10.0.0.91:22-10.0.0.1:49252.service: Deactivated successfully. Feb 13 15:28:25.569126 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 15:28:25.570809 systemd-logind[1422]: Session 15 logged out. Waiting for processes to exit. Feb 13 15:28:25.579773 systemd[1]: Started sshd@15-10.0.0.91:22-10.0.0.1:49268.service - OpenSSH per-connection server daemon (10.0.0.1:49268). Feb 13 15:28:25.581610 systemd-logind[1422]: Removed session 15. 
Feb 13 15:28:25.633294 sshd[5871]: Accepted publickey for core from 10.0.0.1 port 49268 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:28:25.634826 sshd-session[5871]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:28:25.638877 systemd-logind[1422]: New session 16 of user core. Feb 13 15:28:25.646832 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 15:28:27.252127 sshd[5873]: Connection closed by 10.0.0.1 port 49268 Feb 13 15:28:27.254242 sshd-session[5871]: pam_unix(sshd:session): session closed for user core Feb 13 15:28:27.262302 systemd[1]: sshd@15-10.0.0.91:22-10.0.0.1:49268.service: Deactivated successfully. Feb 13 15:28:27.264147 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 15:28:27.270095 systemd-logind[1422]: Session 16 logged out. Waiting for processes to exit. Feb 13 15:28:27.286085 systemd[1]: Started sshd@16-10.0.0.91:22-10.0.0.1:49276.service - OpenSSH per-connection server daemon (10.0.0.1:49276). Feb 13 15:28:27.289660 systemd-logind[1422]: Removed session 16. Feb 13 15:28:27.335287 sshd[5903]: Accepted publickey for core from 10.0.0.1 port 49276 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:28:27.336938 sshd-session[5903]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:28:27.341797 systemd-logind[1422]: New session 17 of user core. Feb 13 15:28:27.353836 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 15:28:27.710060 sshd[5906]: Connection closed by 10.0.0.1 port 49276 Feb 13 15:28:27.710301 sshd-session[5903]: pam_unix(sshd:session): session closed for user core Feb 13 15:28:27.720109 systemd[1]: sshd@16-10.0.0.91:22-10.0.0.1:49276.service: Deactivated successfully. Feb 13 15:28:27.724204 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 15:28:27.725098 systemd-logind[1422]: Session 17 logged out. Waiting for processes to exit. 
Feb 13 15:28:27.737975 systemd[1]: Started sshd@17-10.0.0.91:22-10.0.0.1:49288.service - OpenSSH per-connection server daemon (10.0.0.1:49288).
Feb 13 15:28:27.740394 systemd-logind[1422]: Removed session 17.
Feb 13 15:28:27.776386 sshd[5916]: Accepted publickey for core from 10.0.0.1 port 49288 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8
Feb 13 15:28:27.779213 sshd-session[5916]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:28:27.784700 systemd-logind[1422]: New session 18 of user core.
Feb 13 15:28:27.790841 systemd[1]: Started session-18.scope - Session 18 of User core.
Feb 13 15:28:27.936545 sshd[5918]: Connection closed by 10.0.0.1 port 49288
Feb 13 15:28:27.937149 sshd-session[5916]: pam_unix(sshd:session): session closed for user core
Feb 13 15:28:27.940976 systemd[1]: sshd@17-10.0.0.91:22-10.0.0.1:49288.service: Deactivated successfully.
Feb 13 15:28:27.942819 systemd[1]: session-18.scope: Deactivated successfully.
Feb 13 15:28:27.943640 systemd-logind[1422]: Session 18 logged out. Waiting for processes to exit.
Feb 13 15:28:27.944842 systemd-logind[1422]: Removed session 18.
Feb 13 15:28:31.043495 kubelet[2608]: E0213 15:28:31.043452 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:28:32.949271 systemd[1]: Started sshd@18-10.0.0.91:22-10.0.0.1:52434.service - OpenSSH per-connection server daemon (10.0.0.1:52434).
Feb 13 15:28:33.011016 sshd[5954]: Accepted publickey for core from 10.0.0.1 port 52434 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8
Feb 13 15:28:33.012475 sshd-session[5954]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:28:33.016281 systemd-logind[1422]: New session 19 of user core.
Feb 13 15:28:33.025486 systemd[1]: Started session-19.scope - Session 19 of User core.
Feb 13 15:28:33.161343 sshd[5956]: Connection closed by 10.0.0.1 port 52434
Feb 13 15:28:33.160421 sshd-session[5954]: pam_unix(sshd:session): session closed for user core
Feb 13 15:28:33.163808 systemd[1]: sshd@18-10.0.0.91:22-10.0.0.1:52434.service: Deactivated successfully.
Feb 13 15:28:33.165788 systemd[1]: session-19.scope: Deactivated successfully.
Feb 13 15:28:33.166813 systemd-logind[1422]: Session 19 logged out. Waiting for processes to exit.
Feb 13 15:28:33.167548 systemd-logind[1422]: Removed session 19.
Feb 13 15:28:33.423728 containerd[1457]: time="2025-02-13T15:28:33.423660343Z" level=info msg="StopPodSandbox for \"63565dfc82761cf07ac785a68455c18a81508c45b3108b2a758b41876e99d6f2\""
Feb 13 15:28:33.424310 containerd[1457]: time="2025-02-13T15:28:33.423826234Z" level=info msg="TearDown network for sandbox \"63565dfc82761cf07ac785a68455c18a81508c45b3108b2a758b41876e99d6f2\" successfully"
Feb 13 15:28:33.424310 containerd[1457]: time="2025-02-13T15:28:33.423838195Z" level=info msg="StopPodSandbox for \"63565dfc82761cf07ac785a68455c18a81508c45b3108b2a758b41876e99d6f2\" returns successfully"
Feb 13 15:28:33.427396 containerd[1457]: time="2025-02-13T15:28:33.426469045Z" level=info msg="RemovePodSandbox for \"63565dfc82761cf07ac785a68455c18a81508c45b3108b2a758b41876e99d6f2\""
Feb 13 15:28:33.427396 containerd[1457]: time="2025-02-13T15:28:33.426506367Z" level=info msg="Forcibly stopping sandbox \"63565dfc82761cf07ac785a68455c18a81508c45b3108b2a758b41876e99d6f2\""
Feb 13 15:28:33.427396 containerd[1457]: time="2025-02-13T15:28:33.426580492Z" level=info msg="TearDown network for sandbox \"63565dfc82761cf07ac785a68455c18a81508c45b3108b2a758b41876e99d6f2\" successfully"
Feb 13 15:28:33.432756 containerd[1457]: time="2025-02-13T15:28:33.431370000Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"63565dfc82761cf07ac785a68455c18a81508c45b3108b2a758b41876e99d6f2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:28:33.432756 containerd[1457]: time="2025-02-13T15:28:33.432694806Z" level=info msg="RemovePodSandbox \"63565dfc82761cf07ac785a68455c18a81508c45b3108b2a758b41876e99d6f2\" returns successfully"
Feb 13 15:28:33.438133 containerd[1457]: time="2025-02-13T15:28:33.435654237Z" level=info msg="StopPodSandbox for \"2c09c28a03fd4472b318aa06a1ea4e03909edc65be9ace9f58b2d77cd52e637a\""
Feb 13 15:28:33.438133 containerd[1457]: time="2025-02-13T15:28:33.435834008Z" level=info msg="TearDown network for sandbox \"2c09c28a03fd4472b318aa06a1ea4e03909edc65be9ace9f58b2d77cd52e637a\" successfully"
Feb 13 15:28:33.438133 containerd[1457]: time="2025-02-13T15:28:33.435849209Z" level=info msg="StopPodSandbox for \"2c09c28a03fd4472b318aa06a1ea4e03909edc65be9ace9f58b2d77cd52e637a\" returns successfully"
Feb 13 15:28:33.438133 containerd[1457]: time="2025-02-13T15:28:33.436213753Z" level=info msg="RemovePodSandbox for \"2c09c28a03fd4472b318aa06a1ea4e03909edc65be9ace9f58b2d77cd52e637a\""
Feb 13 15:28:33.438133 containerd[1457]: time="2025-02-13T15:28:33.436239394Z" level=info msg="Forcibly stopping sandbox \"2c09c28a03fd4472b318aa06a1ea4e03909edc65be9ace9f58b2d77cd52e637a\""
Feb 13 15:28:33.438133 containerd[1457]: time="2025-02-13T15:28:33.436311959Z" level=info msg="TearDown network for sandbox \"2c09c28a03fd4472b318aa06a1ea4e03909edc65be9ace9f58b2d77cd52e637a\" successfully"
Feb 13 15:28:33.440030 containerd[1457]: time="2025-02-13T15:28:33.439838426Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2c09c28a03fd4472b318aa06a1ea4e03909edc65be9ace9f58b2d77cd52e637a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:28:33.440030 containerd[1457]: time="2025-02-13T15:28:33.439903071Z" level=info msg="RemovePodSandbox \"2c09c28a03fd4472b318aa06a1ea4e03909edc65be9ace9f58b2d77cd52e637a\" returns successfully"
Feb 13 15:28:33.440375 containerd[1457]: time="2025-02-13T15:28:33.440294216Z" level=info msg="StopPodSandbox for \"daae8dd6eeb39b0c2ba55118dc887b99adeb526889c9a3d6f932e9903afa2435\""
Feb 13 15:28:33.440435 containerd[1457]: time="2025-02-13T15:28:33.440387142Z" level=info msg="TearDown network for sandbox \"daae8dd6eeb39b0c2ba55118dc887b99adeb526889c9a3d6f932e9903afa2435\" successfully"
Feb 13 15:28:33.440435 containerd[1457]: time="2025-02-13T15:28:33.440397383Z" level=info msg="StopPodSandbox for \"daae8dd6eeb39b0c2ba55118dc887b99adeb526889c9a3d6f932e9903afa2435\" returns successfully"
Feb 13 15:28:33.444542 containerd[1457]: time="2025-02-13T15:28:33.440743485Z" level=info msg="RemovePodSandbox for \"daae8dd6eeb39b0c2ba55118dc887b99adeb526889c9a3d6f932e9903afa2435\""
Feb 13 15:28:33.444542 containerd[1457]: time="2025-02-13T15:28:33.440767646Z" level=info msg="Forcibly stopping sandbox \"daae8dd6eeb39b0c2ba55118dc887b99adeb526889c9a3d6f932e9903afa2435\""
Feb 13 15:28:33.444542 containerd[1457]: time="2025-02-13T15:28:33.440827490Z" level=info msg="TearDown network for sandbox \"daae8dd6eeb39b0c2ba55118dc887b99adeb526889c9a3d6f932e9903afa2435\" successfully"
Feb 13 15:28:33.454008 containerd[1457]: time="2025-02-13T15:28:33.451083552Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"daae8dd6eeb39b0c2ba55118dc887b99adeb526889c9a3d6f932e9903afa2435\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:28:33.454008 containerd[1457]: time="2025-02-13T15:28:33.453933815Z" level=info msg="RemovePodSandbox \"daae8dd6eeb39b0c2ba55118dc887b99adeb526889c9a3d6f932e9903afa2435\" returns successfully"
Feb 13 15:28:33.454926 containerd[1457]: time="2025-02-13T15:28:33.454769069Z" level=info msg="StopPodSandbox for \"2ae1783dc181bf0a101c7cecd003149a3d0bc99235467eacc57dcc150794979a\""
Feb 13 15:28:33.454926 containerd[1457]: time="2025-02-13T15:28:33.454865155Z" level=info msg="TearDown network for sandbox \"2ae1783dc181bf0a101c7cecd003149a3d0bc99235467eacc57dcc150794979a\" successfully"
Feb 13 15:28:33.454926 containerd[1457]: time="2025-02-13T15:28:33.454874556Z" level=info msg="StopPodSandbox for \"2ae1783dc181bf0a101c7cecd003149a3d0bc99235467eacc57dcc150794979a\" returns successfully"
Feb 13 15:28:33.455980 containerd[1457]: time="2025-02-13T15:28:33.455345666Z" level=info msg="RemovePodSandbox for \"2ae1783dc181bf0a101c7cecd003149a3d0bc99235467eacc57dcc150794979a\""
Feb 13 15:28:33.455980 containerd[1457]: time="2025-02-13T15:28:33.455373068Z" level=info msg="Forcibly stopping sandbox \"2ae1783dc181bf0a101c7cecd003149a3d0bc99235467eacc57dcc150794979a\""
Feb 13 15:28:33.455980 containerd[1457]: time="2025-02-13T15:28:33.455440072Z" level=info msg="TearDown network for sandbox \"2ae1783dc181bf0a101c7cecd003149a3d0bc99235467eacc57dcc150794979a\" successfully"
Feb 13 15:28:33.461365 containerd[1457]: time="2025-02-13T15:28:33.461149441Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2ae1783dc181bf0a101c7cecd003149a3d0bc99235467eacc57dcc150794979a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:28:33.461365 containerd[1457]: time="2025-02-13T15:28:33.461218405Z" level=info msg="RemovePodSandbox \"2ae1783dc181bf0a101c7cecd003149a3d0bc99235467eacc57dcc150794979a\" returns successfully"
Feb 13 15:28:33.461911 containerd[1457]: time="2025-02-13T15:28:33.461684635Z" level=info msg="StopPodSandbox for \"de5622ef7c1e14e2d930089742198e784ff4a7b8aabb4704edcd78f305d94cf3\""
Feb 13 15:28:33.461911 containerd[1457]: time="2025-02-13T15:28:33.461789602Z" level=info msg="TearDown network for sandbox \"de5622ef7c1e14e2d930089742198e784ff4a7b8aabb4704edcd78f305d94cf3\" successfully"
Feb 13 15:28:33.461911 containerd[1457]: time="2025-02-13T15:28:33.461799482Z" level=info msg="StopPodSandbox for \"de5622ef7c1e14e2d930089742198e784ff4a7b8aabb4704edcd78f305d94cf3\" returns successfully"
Feb 13 15:28:33.462689 containerd[1457]: time="2025-02-13T15:28:33.462487927Z" level=info msg="RemovePodSandbox for \"de5622ef7c1e14e2d930089742198e784ff4a7b8aabb4704edcd78f305d94cf3\""
Feb 13 15:28:33.462689 containerd[1457]: time="2025-02-13T15:28:33.462514128Z" level=info msg="Forcibly stopping sandbox \"de5622ef7c1e14e2d930089742198e784ff4a7b8aabb4704edcd78f305d94cf3\""
Feb 13 15:28:33.462689 containerd[1457]: time="2025-02-13T15:28:33.462577293Z" level=info msg="TearDown network for sandbox \"de5622ef7c1e14e2d930089742198e784ff4a7b8aabb4704edcd78f305d94cf3\" successfully"
Feb 13 15:28:33.465192 containerd[1457]: time="2025-02-13T15:28:33.465131577Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"de5622ef7c1e14e2d930089742198e784ff4a7b8aabb4704edcd78f305d94cf3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:28:33.465192 containerd[1457]: time="2025-02-13T15:28:33.465188181Z" level=info msg="RemovePodSandbox \"de5622ef7c1e14e2d930089742198e784ff4a7b8aabb4704edcd78f305d94cf3\" returns successfully"
Feb 13 15:28:33.465888 containerd[1457]: time="2025-02-13T15:28:33.465819222Z" level=info msg="StopPodSandbox for \"400e05d09f7638f6370144bc7c30317522f5d53fca24796970245605aaf907e8\""
Feb 13 15:28:33.465925 containerd[1457]: time="2025-02-13T15:28:33.465909507Z" level=info msg="TearDown network for sandbox \"400e05d09f7638f6370144bc7c30317522f5d53fca24796970245605aaf907e8\" successfully"
Feb 13 15:28:33.465925 containerd[1457]: time="2025-02-13T15:28:33.465919468Z" level=info msg="StopPodSandbox for \"400e05d09f7638f6370144bc7c30317522f5d53fca24796970245605aaf907e8\" returns successfully"
Feb 13 15:28:33.466241 containerd[1457]: time="2025-02-13T15:28:33.466202166Z" level=info msg="RemovePodSandbox for \"400e05d09f7638f6370144bc7c30317522f5d53fca24796970245605aaf907e8\""
Feb 13 15:28:33.466379 containerd[1457]: time="2025-02-13T15:28:33.466245809Z" level=info msg="Forcibly stopping sandbox \"400e05d09f7638f6370144bc7c30317522f5d53fca24796970245605aaf907e8\""
Feb 13 15:28:33.466379 containerd[1457]: time="2025-02-13T15:28:33.466332055Z" level=info msg="TearDown network for sandbox \"400e05d09f7638f6370144bc7c30317522f5d53fca24796970245605aaf907e8\" successfully"
Feb 13 15:28:33.469033 containerd[1457]: time="2025-02-13T15:28:33.468985946Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"400e05d09f7638f6370144bc7c30317522f5d53fca24796970245605aaf907e8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:28:33.469096 containerd[1457]: time="2025-02-13T15:28:33.469056390Z" level=info msg="RemovePodSandbox \"400e05d09f7638f6370144bc7c30317522f5d53fca24796970245605aaf907e8\" returns successfully"
Feb 13 15:28:33.469479 containerd[1457]: time="2025-02-13T15:28:33.469450256Z" level=info msg="StopPodSandbox for \"0cdfe4900b8aded4d3dc434c00139a606306819ed296214d867b55f86ba8ee4a\""
Feb 13 15:28:33.469592 containerd[1457]: time="2025-02-13T15:28:33.469556303Z" level=info msg="TearDown network for sandbox \"0cdfe4900b8aded4d3dc434c00139a606306819ed296214d867b55f86ba8ee4a\" successfully"
Feb 13 15:28:33.469592 containerd[1457]: time="2025-02-13T15:28:33.469569783Z" level=info msg="StopPodSandbox for \"0cdfe4900b8aded4d3dc434c00139a606306819ed296214d867b55f86ba8ee4a\" returns successfully"
Feb 13 15:28:33.470261 containerd[1457]: time="2025-02-13T15:28:33.470228146Z" level=info msg="RemovePodSandbox for \"0cdfe4900b8aded4d3dc434c00139a606306819ed296214d867b55f86ba8ee4a\""
Feb 13 15:28:33.470465 containerd[1457]: time="2025-02-13T15:28:33.470265148Z" level=info msg="Forcibly stopping sandbox \"0cdfe4900b8aded4d3dc434c00139a606306819ed296214d867b55f86ba8ee4a\""
Feb 13 15:28:33.470465 containerd[1457]: time="2025-02-13T15:28:33.470341833Z" level=info msg="TearDown network for sandbox \"0cdfe4900b8aded4d3dc434c00139a606306819ed296214d867b55f86ba8ee4a\" successfully"
Feb 13 15:28:33.474083 containerd[1457]: time="2025-02-13T15:28:33.473818697Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0cdfe4900b8aded4d3dc434c00139a606306819ed296214d867b55f86ba8ee4a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:28:33.474083 containerd[1457]: time="2025-02-13T15:28:33.473896502Z" level=info msg="RemovePodSandbox \"0cdfe4900b8aded4d3dc434c00139a606306819ed296214d867b55f86ba8ee4a\" returns successfully"
Feb 13 15:28:33.475101 containerd[1457]: time="2025-02-13T15:28:33.474512742Z" level=info msg="StopPodSandbox for \"31240264793868d5f6753603cc9e8db3c437501e3593f5eaf3c12fbc398703a1\""
Feb 13 15:28:33.475101 containerd[1457]: time="2025-02-13T15:28:33.474608228Z" level=info msg="TearDown network for sandbox \"31240264793868d5f6753603cc9e8db3c437501e3593f5eaf3c12fbc398703a1\" successfully"
Feb 13 15:28:33.475101 containerd[1457]: time="2025-02-13T15:28:33.474628390Z" level=info msg="StopPodSandbox for \"31240264793868d5f6753603cc9e8db3c437501e3593f5eaf3c12fbc398703a1\" returns successfully"
Feb 13 15:28:33.475512 containerd[1457]: time="2025-02-13T15:28:33.475361077Z" level=info msg="RemovePodSandbox for \"31240264793868d5f6753603cc9e8db3c437501e3593f5eaf3c12fbc398703a1\""
Feb 13 15:28:33.475512 containerd[1457]: time="2025-02-13T15:28:33.475394759Z" level=info msg="Forcibly stopping sandbox \"31240264793868d5f6753603cc9e8db3c437501e3593f5eaf3c12fbc398703a1\""
Feb 13 15:28:33.475512 containerd[1457]: time="2025-02-13T15:28:33.475467964Z" level=info msg="TearDown network for sandbox \"31240264793868d5f6753603cc9e8db3c437501e3593f5eaf3c12fbc398703a1\" successfully"
Feb 13 15:28:33.478347 containerd[1457]: time="2025-02-13T15:28:33.478222381Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"31240264793868d5f6753603cc9e8db3c437501e3593f5eaf3c12fbc398703a1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:28:33.478347 containerd[1457]: time="2025-02-13T15:28:33.478315067Z" level=info msg="RemovePodSandbox \"31240264793868d5f6753603cc9e8db3c437501e3593f5eaf3c12fbc398703a1\" returns successfully"
Feb 13 15:28:33.478809 containerd[1457]: time="2025-02-13T15:28:33.478772537Z" level=info msg="StopPodSandbox for \"f4d56aece65a454a6744a6656975af454ed23eaa118376aad704578449124dce\""
Feb 13 15:28:33.478894 containerd[1457]: time="2025-02-13T15:28:33.478869183Z" level=info msg="TearDown network for sandbox \"f4d56aece65a454a6744a6656975af454ed23eaa118376aad704578449124dce\" successfully"
Feb 13 15:28:33.478894 containerd[1457]: time="2025-02-13T15:28:33.478881584Z" level=info msg="StopPodSandbox for \"f4d56aece65a454a6744a6656975af454ed23eaa118376aad704578449124dce\" returns successfully"
Feb 13 15:28:33.479185 containerd[1457]: time="2025-02-13T15:28:33.479138320Z" level=info msg="RemovePodSandbox for \"f4d56aece65a454a6744a6656975af454ed23eaa118376aad704578449124dce\""
Feb 13 15:28:33.479185 containerd[1457]: time="2025-02-13T15:28:33.479171602Z" level=info msg="Forcibly stopping sandbox \"f4d56aece65a454a6744a6656975af454ed23eaa118376aad704578449124dce\""
Feb 13 15:28:33.479384 containerd[1457]: time="2025-02-13T15:28:33.479239967Z" level=info msg="TearDown network for sandbox \"f4d56aece65a454a6744a6656975af454ed23eaa118376aad704578449124dce\" successfully"
Feb 13 15:28:33.482138 containerd[1457]: time="2025-02-13T15:28:33.482054668Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f4d56aece65a454a6744a6656975af454ed23eaa118376aad704578449124dce\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:28:33.482138 containerd[1457]: time="2025-02-13T15:28:33.482118352Z" level=info msg="RemovePodSandbox \"f4d56aece65a454a6744a6656975af454ed23eaa118376aad704578449124dce\" returns successfully"
Feb 13 15:28:33.482528 containerd[1457]: time="2025-02-13T15:28:33.482418572Z" level=info msg="StopPodSandbox for \"c5e16b24742bbc2f37dfe06c90ffdbec5343248ca53db0407b2759c26a1559b7\""
Feb 13 15:28:33.482528 containerd[1457]: time="2025-02-13T15:28:33.482512298Z" level=info msg="TearDown network for sandbox \"c5e16b24742bbc2f37dfe06c90ffdbec5343248ca53db0407b2759c26a1559b7\" successfully"
Feb 13 15:28:33.482528 containerd[1457]: time="2025-02-13T15:28:33.482524979Z" level=info msg="StopPodSandbox for \"c5e16b24742bbc2f37dfe06c90ffdbec5343248ca53db0407b2759c26a1559b7\" returns successfully"
Feb 13 15:28:33.483803 containerd[1457]: time="2025-02-13T15:28:33.482858800Z" level=info msg="RemovePodSandbox for \"c5e16b24742bbc2f37dfe06c90ffdbec5343248ca53db0407b2759c26a1559b7\""
Feb 13 15:28:33.483803 containerd[1457]: time="2025-02-13T15:28:33.482901803Z" level=info msg="Forcibly stopping sandbox \"c5e16b24742bbc2f37dfe06c90ffdbec5343248ca53db0407b2759c26a1559b7\""
Feb 13 15:28:33.483803 containerd[1457]: time="2025-02-13T15:28:33.482961927Z" level=info msg="TearDown network for sandbox \"c5e16b24742bbc2f37dfe06c90ffdbec5343248ca53db0407b2759c26a1559b7\" successfully"
Feb 13 15:28:33.487015 containerd[1457]: time="2025-02-13T15:28:33.486834617Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c5e16b24742bbc2f37dfe06c90ffdbec5343248ca53db0407b2759c26a1559b7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:28:33.487015 containerd[1457]: time="2025-02-13T15:28:33.486923782Z" level=info msg="RemovePodSandbox \"c5e16b24742bbc2f37dfe06c90ffdbec5343248ca53db0407b2759c26a1559b7\" returns successfully"
Feb 13 15:28:33.487601 containerd[1457]: time="2025-02-13T15:28:33.487482818Z" level=info msg="StopPodSandbox for \"77a75dac9c9f8165f19ab22aa953024a5e1c10959673b5e0528dc465b8c7c35e\""
Feb 13 15:28:33.487601 containerd[1457]: time="2025-02-13T15:28:33.487584345Z" level=info msg="TearDown network for sandbox \"77a75dac9c9f8165f19ab22aa953024a5e1c10959673b5e0528dc465b8c7c35e\" successfully"
Feb 13 15:28:33.487601 containerd[1457]: time="2025-02-13T15:28:33.487594746Z" level=info msg="StopPodSandbox for \"77a75dac9c9f8165f19ab22aa953024a5e1c10959673b5e0528dc465b8c7c35e\" returns successfully"
Feb 13 15:28:33.488795 containerd[1457]: time="2025-02-13T15:28:33.488755780Z" level=info msg="RemovePodSandbox for \"77a75dac9c9f8165f19ab22aa953024a5e1c10959673b5e0528dc465b8c7c35e\""
Feb 13 15:28:33.488867 containerd[1457]: time="2025-02-13T15:28:33.488793543Z" level=info msg="Forcibly stopping sandbox \"77a75dac9c9f8165f19ab22aa953024a5e1c10959673b5e0528dc465b8c7c35e\""
Feb 13 15:28:33.488913 containerd[1457]: time="2025-02-13T15:28:33.488867748Z" level=info msg="TearDown network for sandbox \"77a75dac9c9f8165f19ab22aa953024a5e1c10959673b5e0528dc465b8c7c35e\" successfully"
Feb 13 15:28:33.495216 containerd[1457]: time="2025-02-13T15:28:33.495162553Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"77a75dac9c9f8165f19ab22aa953024a5e1c10959673b5e0528dc465b8c7c35e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:28:33.495313 containerd[1457]: time="2025-02-13T15:28:33.495242199Z" level=info msg="RemovePodSandbox \"77a75dac9c9f8165f19ab22aa953024a5e1c10959673b5e0528dc465b8c7c35e\" returns successfully"
Feb 13 15:28:33.495687 containerd[1457]: time="2025-02-13T15:28:33.495663226Z" level=info msg="StopPodSandbox for \"776f35edebf581700803135a5d9ab65368a38fb844db753a8fd392cbe7c754d3\""
Feb 13 15:28:33.495784 containerd[1457]: time="2025-02-13T15:28:33.495767832Z" level=info msg="TearDown network for sandbox \"776f35edebf581700803135a5d9ab65368a38fb844db753a8fd392cbe7c754d3\" successfully"
Feb 13 15:28:33.495809 containerd[1457]: time="2025-02-13T15:28:33.495782553Z" level=info msg="StopPodSandbox for \"776f35edebf581700803135a5d9ab65368a38fb844db753a8fd392cbe7c754d3\" returns successfully"
Feb 13 15:28:33.501150 containerd[1457]: time="2025-02-13T15:28:33.501095976Z" level=info msg="RemovePodSandbox for \"776f35edebf581700803135a5d9ab65368a38fb844db753a8fd392cbe7c754d3\""
Feb 13 15:28:33.501150 containerd[1457]: time="2025-02-13T15:28:33.501149859Z" level=info msg="Forcibly stopping sandbox \"776f35edebf581700803135a5d9ab65368a38fb844db753a8fd392cbe7c754d3\""
Feb 13 15:28:33.501257 containerd[1457]: time="2025-02-13T15:28:33.501242465Z" level=info msg="TearDown network for sandbox \"776f35edebf581700803135a5d9ab65368a38fb844db753a8fd392cbe7c754d3\" successfully"
Feb 13 15:28:33.509205 containerd[1457]: time="2025-02-13T15:28:33.509131654Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"776f35edebf581700803135a5d9ab65368a38fb844db753a8fd392cbe7c754d3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:28:33.509312 containerd[1457]: time="2025-02-13T15:28:33.509246982Z" level=info msg="RemovePodSandbox \"776f35edebf581700803135a5d9ab65368a38fb844db753a8fd392cbe7c754d3\" returns successfully"
Feb 13 15:28:33.509736 containerd[1457]: time="2025-02-13T15:28:33.509712132Z" level=info msg="StopPodSandbox for \"38a3f3da6737f9ca3a00e18f45dbf1f049602a27f165bd52b905645c672743cc\""
Feb 13 15:28:33.509836 containerd[1457]: time="2025-02-13T15:28:33.509820379Z" level=info msg="TearDown network for sandbox \"38a3f3da6737f9ca3a00e18f45dbf1f049602a27f165bd52b905645c672743cc\" successfully"
Feb 13 15:28:33.509836 containerd[1457]: time="2025-02-13T15:28:33.509833499Z" level=info msg="StopPodSandbox for \"38a3f3da6737f9ca3a00e18f45dbf1f049602a27f165bd52b905645c672743cc\" returns successfully"
Feb 13 15:28:33.510168 containerd[1457]: time="2025-02-13T15:28:33.510104917Z" level=info msg="RemovePodSandbox for \"38a3f3da6737f9ca3a00e18f45dbf1f049602a27f165bd52b905645c672743cc\""
Feb 13 15:28:33.510168 containerd[1457]: time="2025-02-13T15:28:33.510131959Z" level=info msg="Forcibly stopping sandbox \"38a3f3da6737f9ca3a00e18f45dbf1f049602a27f165bd52b905645c672743cc\""
Feb 13 15:28:33.510261 containerd[1457]: time="2025-02-13T15:28:33.510224645Z" level=info msg="TearDown network for sandbox \"38a3f3da6737f9ca3a00e18f45dbf1f049602a27f165bd52b905645c672743cc\" successfully"
Feb 13 15:28:33.513758 containerd[1457]: time="2025-02-13T15:28:33.513711189Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"38a3f3da6737f9ca3a00e18f45dbf1f049602a27f165bd52b905645c672743cc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:28:33.513835 containerd[1457]: time="2025-02-13T15:28:33.513816036Z" level=info msg="RemovePodSandbox \"38a3f3da6737f9ca3a00e18f45dbf1f049602a27f165bd52b905645c672743cc\" returns successfully"
Feb 13 15:28:33.514205 containerd[1457]: time="2025-02-13T15:28:33.514181140Z" level=info msg="StopPodSandbox for \"0477f5248a84e6b1609e010745ea4791329b3811f61cb4ca28c5f7d4c62c942f\""
Feb 13 15:28:33.514320 containerd[1457]: time="2025-02-13T15:28:33.514299307Z" level=info msg="TearDown network for sandbox \"0477f5248a84e6b1609e010745ea4791329b3811f61cb4ca28c5f7d4c62c942f\" successfully"
Feb 13 15:28:33.514320 containerd[1457]: time="2025-02-13T15:28:33.514316228Z" level=info msg="StopPodSandbox for \"0477f5248a84e6b1609e010745ea4791329b3811f61cb4ca28c5f7d4c62c942f\" returns successfully"
Feb 13 15:28:33.514669 containerd[1457]: time="2025-02-13T15:28:33.514643649Z" level=info msg="RemovePodSandbox for \"0477f5248a84e6b1609e010745ea4791329b3811f61cb4ca28c5f7d4c62c942f\""
Feb 13 15:28:33.514748 containerd[1457]: time="2025-02-13T15:28:33.514675292Z" level=info msg="Forcibly stopping sandbox \"0477f5248a84e6b1609e010745ea4791329b3811f61cb4ca28c5f7d4c62c942f\""
Feb 13 15:28:33.514748 containerd[1457]: time="2025-02-13T15:28:33.514745656Z" level=info msg="TearDown network for sandbox \"0477f5248a84e6b1609e010745ea4791329b3811f61cb4ca28c5f7d4c62c942f\" successfully"
Feb 13 15:28:33.517208 containerd[1457]: time="2025-02-13T15:28:33.517163852Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0477f5248a84e6b1609e010745ea4791329b3811f61cb4ca28c5f7d4c62c942f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:28:33.517263 containerd[1457]: time="2025-02-13T15:28:33.517231296Z" level=info msg="RemovePodSandbox \"0477f5248a84e6b1609e010745ea4791329b3811f61cb4ca28c5f7d4c62c942f\" returns successfully"
Feb 13 15:28:33.517704 containerd[1457]: time="2025-02-13T15:28:33.517661884Z" level=info msg="StopPodSandbox for \"6761151fa2a0f4f698229e01a2b8571c3184e35cf086ea773e1f473cb3b94906\""
Feb 13 15:28:33.517778 containerd[1457]: time="2025-02-13T15:28:33.517761291Z" level=info msg="TearDown network for sandbox \"6761151fa2a0f4f698229e01a2b8571c3184e35cf086ea773e1f473cb3b94906\" successfully"
Feb 13 15:28:33.517816 containerd[1457]: time="2025-02-13T15:28:33.517776611Z" level=info msg="StopPodSandbox for \"6761151fa2a0f4f698229e01a2b8571c3184e35cf086ea773e1f473cb3b94906\" returns successfully"
Feb 13 15:28:33.518246 containerd[1457]: time="2025-02-13T15:28:33.518216440Z" level=info msg="RemovePodSandbox for \"6761151fa2a0f4f698229e01a2b8571c3184e35cf086ea773e1f473cb3b94906\""
Feb 13 15:28:33.518309 containerd[1457]: time="2025-02-13T15:28:33.518250322Z" level=info msg="Forcibly stopping sandbox \"6761151fa2a0f4f698229e01a2b8571c3184e35cf086ea773e1f473cb3b94906\""
Feb 13 15:28:33.518359 containerd[1457]: time="2025-02-13T15:28:33.518341208Z" level=info msg="TearDown network for sandbox \"6761151fa2a0f4f698229e01a2b8571c3184e35cf086ea773e1f473cb3b94906\" successfully"
Feb 13 15:28:33.521832 containerd[1457]: time="2025-02-13T15:28:33.521770349Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6761151fa2a0f4f698229e01a2b8571c3184e35cf086ea773e1f473cb3b94906\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:28:33.521898 containerd[1457]: time="2025-02-13T15:28:33.521841714Z" level=info msg="RemovePodSandbox \"6761151fa2a0f4f698229e01a2b8571c3184e35cf086ea773e1f473cb3b94906\" returns successfully"
Feb 13 15:28:33.522292 containerd[1457]: time="2025-02-13T15:28:33.522259861Z" level=info msg="StopPodSandbox for \"0e912db41237de96aedc11e4bdfeaf221d0a4eb32d2c79b569dc586de06677c4\""
Feb 13 15:28:33.522398 containerd[1457]: time="2025-02-13T15:28:33.522381348Z" level=info msg="TearDown network for sandbox \"0e912db41237de96aedc11e4bdfeaf221d0a4eb32d2c79b569dc586de06677c4\" successfully"
Feb 13 15:28:33.522434 containerd[1457]: time="2025-02-13T15:28:33.522397389Z" level=info msg="StopPodSandbox for \"0e912db41237de96aedc11e4bdfeaf221d0a4eb32d2c79b569dc586de06677c4\" returns successfully"
Feb 13 15:28:33.522800 containerd[1457]: time="2025-02-13T15:28:33.522778534Z" level=info msg="RemovePodSandbox for \"0e912db41237de96aedc11e4bdfeaf221d0a4eb32d2c79b569dc586de06677c4\""
Feb 13 15:28:33.522852 containerd[1457]: time="2025-02-13T15:28:33.522804736Z" level=info msg="Forcibly stopping sandbox \"0e912db41237de96aedc11e4bdfeaf221d0a4eb32d2c79b569dc586de06677c4\""
Feb 13 15:28:33.522896 containerd[1457]: time="2025-02-13T15:28:33.522880461Z" level=info msg="TearDown network for sandbox \"0e912db41237de96aedc11e4bdfeaf221d0a4eb32d2c79b569dc586de06677c4\" successfully"
Feb 13 15:28:33.525918 containerd[1457]: time="2025-02-13T15:28:33.525868533Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0e912db41237de96aedc11e4bdfeaf221d0a4eb32d2c79b569dc586de06677c4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:28:33.525968 containerd[1457]: time="2025-02-13T15:28:33.525938938Z" level=info msg="RemovePodSandbox \"0e912db41237de96aedc11e4bdfeaf221d0a4eb32d2c79b569dc586de06677c4\" returns successfully"
Feb 13 15:28:33.526471 containerd[1457]: time="2025-02-13T15:28:33.526392007Z" level=info msg="StopPodSandbox for \"67348c9f7030a07e08918ec471c01d8d492fb7b37edf0b70d665014a3b6d6d75\""
Feb 13 15:28:33.526592 containerd[1457]: time="2025-02-13T15:28:33.526575299Z" level=info msg="TearDown network for sandbox \"67348c9f7030a07e08918ec471c01d8d492fb7b37edf0b70d665014a3b6d6d75\" successfully"
Feb 13 15:28:33.526778 containerd[1457]: time="2025-02-13T15:28:33.526644823Z" level=info msg="StopPodSandbox for \"67348c9f7030a07e08918ec471c01d8d492fb7b37edf0b70d665014a3b6d6d75\" returns successfully"
Feb 13 15:28:33.526972 containerd[1457]: time="2025-02-13T15:28:33.526919801Z" level=info msg="RemovePodSandbox for \"67348c9f7030a07e08918ec471c01d8d492fb7b37edf0b70d665014a3b6d6d75\""
Feb 13 15:28:33.526972 containerd[1457]: time="2025-02-13T15:28:33.526953083Z" level=info msg="Forcibly stopping sandbox \"67348c9f7030a07e08918ec471c01d8d492fb7b37edf0b70d665014a3b6d6d75\""
Feb 13 15:28:33.527049 containerd[1457]: time="2025-02-13T15:28:33.527019207Z" level=info msg="TearDown network for sandbox \"67348c9f7030a07e08918ec471c01d8d492fb7b37edf0b70d665014a3b6d6d75\" successfully"
Feb 13 15:28:33.529539 containerd[1457]: time="2025-02-13T15:28:33.529503048Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"67348c9f7030a07e08918ec471c01d8d492fb7b37edf0b70d665014a3b6d6d75\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:28:33.529594 containerd[1457]: time="2025-02-13T15:28:33.529567492Z" level=info msg="RemovePodSandbox \"67348c9f7030a07e08918ec471c01d8d492fb7b37edf0b70d665014a3b6d6d75\" returns successfully"
Feb 13 15:28:33.529958 containerd[1457]: time="2025-02-13T15:28:33.529924755Z" level=info msg="StopPodSandbox for \"b2452a586ae430ec9ed425f170afb0d9071b24ad7295b6f6a4477e012c9d521e\""
Feb 13 15:28:33.530027 containerd[1457]: time="2025-02-13T15:28:33.530012520Z" level=info msg="TearDown network for sandbox \"b2452a586ae430ec9ed425f170afb0d9071b24ad7295b6f6a4477e012c9d521e\" successfully"
Feb 13 15:28:33.530053 containerd[1457]: time="2025-02-13T15:28:33.530025521Z" level=info msg="StopPodSandbox for \"b2452a586ae430ec9ed425f170afb0d9071b24ad7295b6f6a4477e012c9d521e\" returns successfully"
Feb 13 15:28:33.530720 containerd[1457]: time="2025-02-13T15:28:33.530688484Z" level=info msg="RemovePodSandbox for \"b2452a586ae430ec9ed425f170afb0d9071b24ad7295b6f6a4477e012c9d521e\""
Feb 13 15:28:33.531400 containerd[1457]: time="2025-02-13T15:28:33.530867215Z" level=info msg="Forcibly stopping sandbox \"b2452a586ae430ec9ed425f170afb0d9071b24ad7295b6f6a4477e012c9d521e\""
Feb 13 15:28:33.531400 containerd[1457]: time="2025-02-13T15:28:33.530940300Z" level=info msg="TearDown network for sandbox \"b2452a586ae430ec9ed425f170afb0d9071b24ad7295b6f6a4477e012c9d521e\" successfully"
Feb 13 15:28:33.533636 containerd[1457]: time="2025-02-13T15:28:33.533584031Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b2452a586ae430ec9ed425f170afb0d9071b24ad7295b6f6a4477e012c9d521e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:28:33.533752 containerd[1457]: time="2025-02-13T15:28:33.533735680Z" level=info msg="RemovePodSandbox \"b2452a586ae430ec9ed425f170afb0d9071b24ad7295b6f6a4477e012c9d521e\" returns successfully" Feb 13 15:28:33.534172 containerd[1457]: time="2025-02-13T15:28:33.534147227Z" level=info msg="StopPodSandbox for \"1fdc9959f235af76b82180f39dc0b3634b4d66128bffd3617d8de10ff163e7fb\"" Feb 13 15:28:33.534248 containerd[1457]: time="2025-02-13T15:28:33.534231952Z" level=info msg="TearDown network for sandbox \"1fdc9959f235af76b82180f39dc0b3634b4d66128bffd3617d8de10ff163e7fb\" successfully" Feb 13 15:28:33.534248 containerd[1457]: time="2025-02-13T15:28:33.534245593Z" level=info msg="StopPodSandbox for \"1fdc9959f235af76b82180f39dc0b3634b4d66128bffd3617d8de10ff163e7fb\" returns successfully" Feb 13 15:28:33.536660 containerd[1457]: time="2025-02-13T15:28:33.534728864Z" level=info msg="RemovePodSandbox for \"1fdc9959f235af76b82180f39dc0b3634b4d66128bffd3617d8de10ff163e7fb\"" Feb 13 15:28:33.536660 containerd[1457]: time="2025-02-13T15:28:33.534757026Z" level=info msg="Forcibly stopping sandbox \"1fdc9959f235af76b82180f39dc0b3634b4d66128bffd3617d8de10ff163e7fb\"" Feb 13 15:28:33.536660 containerd[1457]: time="2025-02-13T15:28:33.534819310Z" level=info msg="TearDown network for sandbox \"1fdc9959f235af76b82180f39dc0b3634b4d66128bffd3617d8de10ff163e7fb\" successfully" Feb 13 15:28:33.544765 containerd[1457]: time="2025-02-13T15:28:33.544710908Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1fdc9959f235af76b82180f39dc0b3634b4d66128bffd3617d8de10ff163e7fb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:28:33.545106 containerd[1457]: time="2025-02-13T15:28:33.544785873Z" level=info msg="RemovePodSandbox \"1fdc9959f235af76b82180f39dc0b3634b4d66128bffd3617d8de10ff163e7fb\" returns successfully" Feb 13 15:28:33.545272 containerd[1457]: time="2025-02-13T15:28:33.545234022Z" level=info msg="StopPodSandbox for \"89faecaa39a3df7b311017148dcd5f19fc35fa57d662d192cc38be381f1b5304\"" Feb 13 15:28:33.545380 containerd[1457]: time="2025-02-13T15:28:33.545344989Z" level=info msg="TearDown network for sandbox \"89faecaa39a3df7b311017148dcd5f19fc35fa57d662d192cc38be381f1b5304\" successfully" Feb 13 15:28:33.545380 containerd[1457]: time="2025-02-13T15:28:33.545360470Z" level=info msg="StopPodSandbox for \"89faecaa39a3df7b311017148dcd5f19fc35fa57d662d192cc38be381f1b5304\" returns successfully" Feb 13 15:28:33.545745 containerd[1457]: time="2025-02-13T15:28:33.545712933Z" level=info msg="RemovePodSandbox for \"89faecaa39a3df7b311017148dcd5f19fc35fa57d662d192cc38be381f1b5304\"" Feb 13 15:28:33.545745 containerd[1457]: time="2025-02-13T15:28:33.545736934Z" level=info msg="Forcibly stopping sandbox \"89faecaa39a3df7b311017148dcd5f19fc35fa57d662d192cc38be381f1b5304\"" Feb 13 15:28:33.545821 containerd[1457]: time="2025-02-13T15:28:33.545794938Z" level=info msg="TearDown network for sandbox \"89faecaa39a3df7b311017148dcd5f19fc35fa57d662d192cc38be381f1b5304\" successfully" Feb 13 15:28:33.549241 containerd[1457]: time="2025-02-13T15:28:33.549202078Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"89faecaa39a3df7b311017148dcd5f19fc35fa57d662d192cc38be381f1b5304\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:28:33.549302 containerd[1457]: time="2025-02-13T15:28:33.549258281Z" level=info msg="RemovePodSandbox \"89faecaa39a3df7b311017148dcd5f19fc35fa57d662d192cc38be381f1b5304\" returns successfully" Feb 13 15:28:33.549625 containerd[1457]: time="2025-02-13T15:28:33.549579502Z" level=info msg="StopPodSandbox for \"44d4ae110854cc57cbfa55e39401c987d33916faeb2710cc87a549e4d94657b0\"" Feb 13 15:28:33.549737 containerd[1457]: time="2025-02-13T15:28:33.549717671Z" level=info msg="TearDown network for sandbox \"44d4ae110854cc57cbfa55e39401c987d33916faeb2710cc87a549e4d94657b0\" successfully" Feb 13 15:28:33.549737 containerd[1457]: time="2025-02-13T15:28:33.549733072Z" level=info msg="StopPodSandbox for \"44d4ae110854cc57cbfa55e39401c987d33916faeb2710cc87a549e4d94657b0\" returns successfully" Feb 13 15:28:33.550138 containerd[1457]: time="2025-02-13T15:28:33.550056893Z" level=info msg="RemovePodSandbox for \"44d4ae110854cc57cbfa55e39401c987d33916faeb2710cc87a549e4d94657b0\"" Feb 13 15:28:33.550138 containerd[1457]: time="2025-02-13T15:28:33.550077534Z" level=info msg="Forcibly stopping sandbox \"44d4ae110854cc57cbfa55e39401c987d33916faeb2710cc87a549e4d94657b0\"" Feb 13 15:28:33.550138 containerd[1457]: time="2025-02-13T15:28:33.550136498Z" level=info msg="TearDown network for sandbox \"44d4ae110854cc57cbfa55e39401c987d33916faeb2710cc87a549e4d94657b0\" successfully" Feb 13 15:28:33.552965 containerd[1457]: time="2025-02-13T15:28:33.552918437Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"44d4ae110854cc57cbfa55e39401c987d33916faeb2710cc87a549e4d94657b0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:28:33.553023 containerd[1457]: time="2025-02-13T15:28:33.552989202Z" level=info msg="RemovePodSandbox \"44d4ae110854cc57cbfa55e39401c987d33916faeb2710cc87a549e4d94657b0\" returns successfully" Feb 13 15:28:33.553390 containerd[1457]: time="2025-02-13T15:28:33.553364906Z" level=info msg="StopPodSandbox for \"9bde7141846a528b8406457cadcce6da8913cc6f999d44f2fda3999be82f378f\"" Feb 13 15:28:33.553488 containerd[1457]: time="2025-02-13T15:28:33.553463272Z" level=info msg="TearDown network for sandbox \"9bde7141846a528b8406457cadcce6da8913cc6f999d44f2fda3999be82f378f\" successfully" Feb 13 15:28:33.553524 containerd[1457]: time="2025-02-13T15:28:33.553486634Z" level=info msg="StopPodSandbox for \"9bde7141846a528b8406457cadcce6da8913cc6f999d44f2fda3999be82f378f\" returns successfully" Feb 13 15:28:33.554263 containerd[1457]: time="2025-02-13T15:28:33.553836296Z" level=info msg="RemovePodSandbox for \"9bde7141846a528b8406457cadcce6da8913cc6f999d44f2fda3999be82f378f\"" Feb 13 15:28:33.554263 containerd[1457]: time="2025-02-13T15:28:33.553868739Z" level=info msg="Forcibly stopping sandbox \"9bde7141846a528b8406457cadcce6da8913cc6f999d44f2fda3999be82f378f\"" Feb 13 15:28:33.554263 containerd[1457]: time="2025-02-13T15:28:33.553931383Z" level=info msg="TearDown network for sandbox \"9bde7141846a528b8406457cadcce6da8913cc6f999d44f2fda3999be82f378f\" successfully" Feb 13 15:28:33.556788 containerd[1457]: time="2025-02-13T15:28:33.556753404Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9bde7141846a528b8406457cadcce6da8913cc6f999d44f2fda3999be82f378f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:28:33.556903 containerd[1457]: time="2025-02-13T15:28:33.556887853Z" level=info msg="RemovePodSandbox \"9bde7141846a528b8406457cadcce6da8913cc6f999d44f2fda3999be82f378f\" returns successfully" Feb 13 15:28:33.557683 containerd[1457]: time="2025-02-13T15:28:33.557286159Z" level=info msg="StopPodSandbox for \"3f20dd0cef541fa916ded40902978e512025f116521ec7f105b4cd1d798e2080\"" Feb 13 15:28:33.557683 containerd[1457]: time="2025-02-13T15:28:33.557395606Z" level=info msg="TearDown network for sandbox \"3f20dd0cef541fa916ded40902978e512025f116521ec7f105b4cd1d798e2080\" successfully" Feb 13 15:28:33.557683 containerd[1457]: time="2025-02-13T15:28:33.557405607Z" level=info msg="StopPodSandbox for \"3f20dd0cef541fa916ded40902978e512025f116521ec7f105b4cd1d798e2080\" returns successfully" Feb 13 15:28:33.559165 containerd[1457]: time="2025-02-13T15:28:33.559133918Z" level=info msg="RemovePodSandbox for \"3f20dd0cef541fa916ded40902978e512025f116521ec7f105b4cd1d798e2080\"" Feb 13 15:28:33.559221 containerd[1457]: time="2025-02-13T15:28:33.559169240Z" level=info msg="Forcibly stopping sandbox \"3f20dd0cef541fa916ded40902978e512025f116521ec7f105b4cd1d798e2080\"" Feb 13 15:28:33.562873 containerd[1457]: time="2025-02-13T15:28:33.562787554Z" level=info msg="TearDown network for sandbox \"3f20dd0cef541fa916ded40902978e512025f116521ec7f105b4cd1d798e2080\" successfully" Feb 13 15:28:33.565606 containerd[1457]: time="2025-02-13T15:28:33.565546051Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3f20dd0cef541fa916ded40902978e512025f116521ec7f105b4cd1d798e2080\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:28:33.575286 containerd[1457]: time="2025-02-13T15:28:33.565602855Z" level=info msg="RemovePodSandbox \"3f20dd0cef541fa916ded40902978e512025f116521ec7f105b4cd1d798e2080\" returns successfully" Feb 13 15:28:33.575784 containerd[1457]: time="2025-02-13T15:28:33.575742349Z" level=info msg="StopPodSandbox for \"cc21ee81810fc473d761df47ca40dfc07bf603b78f073619580e97c4954181e5\"" Feb 13 15:28:33.575864 containerd[1457]: time="2025-02-13T15:28:33.575854596Z" level=info msg="TearDown network for sandbox \"cc21ee81810fc473d761df47ca40dfc07bf603b78f073619580e97c4954181e5\" successfully" Feb 13 15:28:33.575896 containerd[1457]: time="2025-02-13T15:28:33.575866197Z" level=info msg="StopPodSandbox for \"cc21ee81810fc473d761df47ca40dfc07bf603b78f073619580e97c4954181e5\" returns successfully" Feb 13 15:28:33.576229 containerd[1457]: time="2025-02-13T15:28:33.576183617Z" level=info msg="RemovePodSandbox for \"cc21ee81810fc473d761df47ca40dfc07bf603b78f073619580e97c4954181e5\"" Feb 13 15:28:33.576229 containerd[1457]: time="2025-02-13T15:28:33.576214139Z" level=info msg="Forcibly stopping sandbox \"cc21ee81810fc473d761df47ca40dfc07bf603b78f073619580e97c4954181e5\"" Feb 13 15:28:33.576379 containerd[1457]: time="2025-02-13T15:28:33.576276663Z" level=info msg="TearDown network for sandbox \"cc21ee81810fc473d761df47ca40dfc07bf603b78f073619580e97c4954181e5\" successfully" Feb 13 15:28:33.578830 containerd[1457]: time="2025-02-13T15:28:33.578772264Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cc21ee81810fc473d761df47ca40dfc07bf603b78f073619580e97c4954181e5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:28:33.578830 containerd[1457]: time="2025-02-13T15:28:33.578827828Z" level=info msg="RemovePodSandbox \"cc21ee81810fc473d761df47ca40dfc07bf603b78f073619580e97c4954181e5\" returns successfully" Feb 13 15:28:33.579367 containerd[1457]: time="2025-02-13T15:28:33.579182251Z" level=info msg="StopPodSandbox for \"09ae978d1263afc7237cf637a3169b1af4b9e7342e716b5c0326028e663a24a4\"" Feb 13 15:28:33.579367 containerd[1457]: time="2025-02-13T15:28:33.579278657Z" level=info msg="TearDown network for sandbox \"09ae978d1263afc7237cf637a3169b1af4b9e7342e716b5c0326028e663a24a4\" successfully" Feb 13 15:28:33.579367 containerd[1457]: time="2025-02-13T15:28:33.579295098Z" level=info msg="StopPodSandbox for \"09ae978d1263afc7237cf637a3169b1af4b9e7342e716b5c0326028e663a24a4\" returns successfully" Feb 13 15:28:33.579533 containerd[1457]: time="2025-02-13T15:28:33.579512472Z" level=info msg="RemovePodSandbox for \"09ae978d1263afc7237cf637a3169b1af4b9e7342e716b5c0326028e663a24a4\"" Feb 13 15:28:33.579587 containerd[1457]: time="2025-02-13T15:28:33.579535753Z" level=info msg="Forcibly stopping sandbox \"09ae978d1263afc7237cf637a3169b1af4b9e7342e716b5c0326028e663a24a4\"" Feb 13 15:28:33.579664 containerd[1457]: time="2025-02-13T15:28:33.579597157Z" level=info msg="TearDown network for sandbox \"09ae978d1263afc7237cf637a3169b1af4b9e7342e716b5c0326028e663a24a4\" successfully" Feb 13 15:28:33.582328 containerd[1457]: time="2025-02-13T15:28:33.582283891Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"09ae978d1263afc7237cf637a3169b1af4b9e7342e716b5c0326028e663a24a4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:28:33.582394 containerd[1457]: time="2025-02-13T15:28:33.582344414Z" level=info msg="RemovePodSandbox \"09ae978d1263afc7237cf637a3169b1af4b9e7342e716b5c0326028e663a24a4\" returns successfully" Feb 13 15:28:33.582675 containerd[1457]: time="2025-02-13T15:28:33.582650194Z" level=info msg="StopPodSandbox for \"262a4443b576ec39fe8e513ab34a3dc1609fc6bb9ac7e9f491ab2ed971993fd6\"" Feb 13 15:28:33.582756 containerd[1457]: time="2025-02-13T15:28:33.582741080Z" level=info msg="TearDown network for sandbox \"262a4443b576ec39fe8e513ab34a3dc1609fc6bb9ac7e9f491ab2ed971993fd6\" successfully" Feb 13 15:28:33.582862 containerd[1457]: time="2025-02-13T15:28:33.582775442Z" level=info msg="StopPodSandbox for \"262a4443b576ec39fe8e513ab34a3dc1609fc6bb9ac7e9f491ab2ed971993fd6\" returns successfully" Feb 13 15:28:33.583143 containerd[1457]: time="2025-02-13T15:28:33.583108464Z" level=info msg="RemovePodSandbox for \"262a4443b576ec39fe8e513ab34a3dc1609fc6bb9ac7e9f491ab2ed971993fd6\"" Feb 13 15:28:33.583658 containerd[1457]: time="2025-02-13T15:28:33.583212750Z" level=info msg="Forcibly stopping sandbox \"262a4443b576ec39fe8e513ab34a3dc1609fc6bb9ac7e9f491ab2ed971993fd6\"" Feb 13 15:28:33.583658 containerd[1457]: time="2025-02-13T15:28:33.583283755Z" level=info msg="TearDown network for sandbox \"262a4443b576ec39fe8e513ab34a3dc1609fc6bb9ac7e9f491ab2ed971993fd6\" successfully" Feb 13 15:28:33.585737 containerd[1457]: time="2025-02-13T15:28:33.585694070Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"262a4443b576ec39fe8e513ab34a3dc1609fc6bb9ac7e9f491ab2ed971993fd6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:28:33.585789 containerd[1457]: time="2025-02-13T15:28:33.585751554Z" level=info msg="RemovePodSandbox \"262a4443b576ec39fe8e513ab34a3dc1609fc6bb9ac7e9f491ab2ed971993fd6\" returns successfully" Feb 13 15:28:33.586077 containerd[1457]: time="2025-02-13T15:28:33.586045533Z" level=info msg="StopPodSandbox for \"9f518b19d3683f9344fecab0db2f16d46f20150c992e387844d1811fc72fa170\"" Feb 13 15:28:33.586168 containerd[1457]: time="2025-02-13T15:28:33.586142339Z" level=info msg="TearDown network for sandbox \"9f518b19d3683f9344fecab0db2f16d46f20150c992e387844d1811fc72fa170\" successfully" Feb 13 15:28:33.586168 containerd[1457]: time="2025-02-13T15:28:33.586159700Z" level=info msg="StopPodSandbox for \"9f518b19d3683f9344fecab0db2f16d46f20150c992e387844d1811fc72fa170\" returns successfully" Feb 13 15:28:33.586405 containerd[1457]: time="2025-02-13T15:28:33.586378435Z" level=info msg="RemovePodSandbox for \"9f518b19d3683f9344fecab0db2f16d46f20150c992e387844d1811fc72fa170\"" Feb 13 15:28:33.586441 containerd[1457]: time="2025-02-13T15:28:33.586409517Z" level=info msg="Forcibly stopping sandbox \"9f518b19d3683f9344fecab0db2f16d46f20150c992e387844d1811fc72fa170\"" Feb 13 15:28:33.586489 containerd[1457]: time="2025-02-13T15:28:33.586475561Z" level=info msg="TearDown network for sandbox \"9f518b19d3683f9344fecab0db2f16d46f20150c992e387844d1811fc72fa170\" successfully" Feb 13 15:28:33.588816 containerd[1457]: time="2025-02-13T15:28:33.588779709Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9f518b19d3683f9344fecab0db2f16d46f20150c992e387844d1811fc72fa170\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:28:33.588885 containerd[1457]: time="2025-02-13T15:28:33.588837553Z" level=info msg="RemovePodSandbox \"9f518b19d3683f9344fecab0db2f16d46f20150c992e387844d1811fc72fa170\" returns successfully" Feb 13 15:28:33.589321 containerd[1457]: time="2025-02-13T15:28:33.589141813Z" level=info msg="StopPodSandbox for \"ea7024231cf1f96c1a2263eb96f69e734b18be76b300f9a0314c4df3b994a6d9\"" Feb 13 15:28:33.589321 containerd[1457]: time="2025-02-13T15:28:33.589237299Z" level=info msg="TearDown network for sandbox \"ea7024231cf1f96c1a2263eb96f69e734b18be76b300f9a0314c4df3b994a6d9\" successfully" Feb 13 15:28:33.589321 containerd[1457]: time="2025-02-13T15:28:33.589246379Z" level=info msg="StopPodSandbox for \"ea7024231cf1f96c1a2263eb96f69e734b18be76b300f9a0314c4df3b994a6d9\" returns successfully" Feb 13 15:28:33.589511 containerd[1457]: time="2025-02-13T15:28:33.589487195Z" level=info msg="RemovePodSandbox for \"ea7024231cf1f96c1a2263eb96f69e734b18be76b300f9a0314c4df3b994a6d9\"" Feb 13 15:28:33.589599 containerd[1457]: time="2025-02-13T15:28:33.589516957Z" level=info msg="Forcibly stopping sandbox \"ea7024231cf1f96c1a2263eb96f69e734b18be76b300f9a0314c4df3b994a6d9\"" Feb 13 15:28:33.589599 containerd[1457]: time="2025-02-13T15:28:33.589586521Z" level=info msg="TearDown network for sandbox \"ea7024231cf1f96c1a2263eb96f69e734b18be76b300f9a0314c4df3b994a6d9\" successfully" Feb 13 15:28:33.592000 containerd[1457]: time="2025-02-13T15:28:33.591958354Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ea7024231cf1f96c1a2263eb96f69e734b18be76b300f9a0314c4df3b994a6d9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:28:33.592071 containerd[1457]: time="2025-02-13T15:28:33.592010558Z" level=info msg="RemovePodSandbox \"ea7024231cf1f96c1a2263eb96f69e734b18be76b300f9a0314c4df3b994a6d9\" returns successfully" Feb 13 15:28:33.592485 containerd[1457]: time="2025-02-13T15:28:33.592325698Z" level=info msg="StopPodSandbox for \"b203324d387be4f7cabb41c7345bf05412ccb11d6f2993f88f1edc0c931d90ab\"" Feb 13 15:28:33.592485 containerd[1457]: time="2025-02-13T15:28:33.592418664Z" level=info msg="TearDown network for sandbox \"b203324d387be4f7cabb41c7345bf05412ccb11d6f2993f88f1edc0c931d90ab\" successfully" Feb 13 15:28:33.592485 containerd[1457]: time="2025-02-13T15:28:33.592427985Z" level=info msg="StopPodSandbox for \"b203324d387be4f7cabb41c7345bf05412ccb11d6f2993f88f1edc0c931d90ab\" returns successfully" Feb 13 15:28:33.592753 containerd[1457]: time="2025-02-13T15:28:33.592729924Z" level=info msg="RemovePodSandbox for \"b203324d387be4f7cabb41c7345bf05412ccb11d6f2993f88f1edc0c931d90ab\"" Feb 13 15:28:33.592796 containerd[1457]: time="2025-02-13T15:28:33.592770367Z" level=info msg="Forcibly stopping sandbox \"b203324d387be4f7cabb41c7345bf05412ccb11d6f2993f88f1edc0c931d90ab\"" Feb 13 15:28:33.592868 containerd[1457]: time="2025-02-13T15:28:33.592853332Z" level=info msg="TearDown network for sandbox \"b203324d387be4f7cabb41c7345bf05412ccb11d6f2993f88f1edc0c931d90ab\" successfully" Feb 13 15:28:33.595499 containerd[1457]: time="2025-02-13T15:28:33.595461980Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b203324d387be4f7cabb41c7345bf05412ccb11d6f2993f88f1edc0c931d90ab\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:28:33.595559 containerd[1457]: time="2025-02-13T15:28:33.595519104Z" level=info msg="RemovePodSandbox \"b203324d387be4f7cabb41c7345bf05412ccb11d6f2993f88f1edc0c931d90ab\" returns successfully" Feb 13 15:28:33.595858 containerd[1457]: time="2025-02-13T15:28:33.595834444Z" level=info msg="StopPodSandbox for \"58c5ff5777c7ef344cc3b143e1c2324011c9ca3754e7407b48350424c32bea2d\"" Feb 13 15:28:33.595951 containerd[1457]: time="2025-02-13T15:28:33.595934811Z" level=info msg="TearDown network for sandbox \"58c5ff5777c7ef344cc3b143e1c2324011c9ca3754e7407b48350424c32bea2d\" successfully" Feb 13 15:28:33.595981 containerd[1457]: time="2025-02-13T15:28:33.595949572Z" level=info msg="StopPodSandbox for \"58c5ff5777c7ef344cc3b143e1c2324011c9ca3754e7407b48350424c32bea2d\" returns successfully" Feb 13 15:28:33.596662 containerd[1457]: time="2025-02-13T15:28:33.596269192Z" level=info msg="RemovePodSandbox for \"58c5ff5777c7ef344cc3b143e1c2324011c9ca3754e7407b48350424c32bea2d\"" Feb 13 15:28:33.596662 containerd[1457]: time="2025-02-13T15:28:33.596305555Z" level=info msg="Forcibly stopping sandbox \"58c5ff5777c7ef344cc3b143e1c2324011c9ca3754e7407b48350424c32bea2d\"" Feb 13 15:28:33.596662 containerd[1457]: time="2025-02-13T15:28:33.596369279Z" level=info msg="TearDown network for sandbox \"58c5ff5777c7ef344cc3b143e1c2324011c9ca3754e7407b48350424c32bea2d\" successfully" Feb 13 15:28:33.599114 containerd[1457]: time="2025-02-13T15:28:33.599077573Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"58c5ff5777c7ef344cc3b143e1c2324011c9ca3754e7407b48350424c32bea2d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:28:33.599176 containerd[1457]: time="2025-02-13T15:28:33.599138177Z" level=info msg="RemovePodSandbox \"58c5ff5777c7ef344cc3b143e1c2324011c9ca3754e7407b48350424c32bea2d\" returns successfully" Feb 13 15:28:38.171610 systemd[1]: Started sshd@19-10.0.0.91:22-10.0.0.1:52438.service - OpenSSH per-connection server daemon (10.0.0.1:52438). Feb 13 15:28:38.228402 sshd[5999]: Accepted publickey for core from 10.0.0.1 port 52438 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:28:38.228926 sshd-session[5999]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:28:38.233016 systemd-logind[1422]: New session 20 of user core. Feb 13 15:28:38.238830 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 15:28:38.382735 sshd[6001]: Connection closed by 10.0.0.1 port 52438 Feb 13 15:28:38.383257 sshd-session[5999]: pam_unix(sshd:session): session closed for user core Feb 13 15:28:38.387109 systemd[1]: sshd@19-10.0.0.91:22-10.0.0.1:52438.service: Deactivated successfully. Feb 13 15:28:38.390160 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 15:28:38.390939 systemd-logind[1422]: Session 20 logged out. Waiting for processes to exit. Feb 13 15:28:38.391848 systemd-logind[1422]: Removed session 20. Feb 13 15:28:43.394674 systemd[1]: Started sshd@20-10.0.0.91:22-10.0.0.1:60912.service - OpenSSH per-connection server daemon (10.0.0.1:60912). Feb 13 15:28:43.449907 sshd[6016]: Accepted publickey for core from 10.0.0.1 port 60912 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8 Feb 13 15:28:43.451387 sshd-session[6016]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:28:43.455765 systemd-logind[1422]: New session 21 of user core. Feb 13 15:28:43.460793 systemd[1]: Started session-21.scope - Session 21 of User core. 
Feb 13 15:28:43.605346 sshd[6018]: Connection closed by 10.0.0.1 port 60912 Feb 13 15:28:43.606815 sshd-session[6016]: pam_unix(sshd:session): session closed for user core Feb 13 15:28:43.610061 systemd[1]: sshd@20-10.0.0.91:22-10.0.0.1:60912.service: Deactivated successfully. Feb 13 15:28:43.611919 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 15:28:43.612521 systemd-logind[1422]: Session 21 logged out. Waiting for processes to exit. Feb 13 15:28:43.613268 systemd-logind[1422]: Removed session 21.