Feb 13 19:22:19.904518 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 13 19:22:19.904539 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Thu Feb 13 17:46:24 -00 2025
Feb 13 19:22:19.904548 kernel: KASLR enabled
Feb 13 19:22:19.904554 kernel: efi: EFI v2.7 by EDK II
Feb 13 19:22:19.904560 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbbf018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40d98
Feb 13 19:22:19.904565 kernel: random: crng init done
Feb 13 19:22:19.904572 kernel: secureboot: Secure boot disabled
Feb 13 19:22:19.904578 kernel: ACPI: Early table checksum verification disabled
Feb 13 19:22:19.904584 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Feb 13 19:22:19.904591 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Feb 13 19:22:19.904597 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:22:19.904603 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:22:19.904608 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:22:19.904614 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:22:19.904621 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:22:19.904629 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:22:19.904635 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:22:19.904641 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:22:19.904647 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:22:19.904653 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Feb 13 19:22:19.904659 kernel: NUMA: Failed to initialise from firmware
Feb 13 19:22:19.904665 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 19:22:19.904671 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Feb 13 19:22:19.904677 kernel: Zone ranges:
Feb 13 19:22:19.904684 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 19:22:19.904691 kernel: DMA32 empty
Feb 13 19:22:19.904697 kernel: Normal empty
Feb 13 19:22:19.904703 kernel: Movable zone start for each node
Feb 13 19:22:19.904709 kernel: Early memory node ranges
Feb 13 19:22:19.904724 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Feb 13 19:22:19.904730 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Feb 13 19:22:19.904759 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Feb 13 19:22:19.904767 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Feb 13 19:22:19.904773 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Feb 13 19:22:19.904779 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Feb 13 19:22:19.904786 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Feb 13 19:22:19.904792 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 19:22:19.904801 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Feb 13 19:22:19.904807 kernel: psci: probing for conduit method from ACPI.
Feb 13 19:22:19.904813 kernel: psci: PSCIv1.1 detected in firmware.
Feb 13 19:22:19.904822 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 19:22:19.904829 kernel: psci: Trusted OS migration not required
Feb 13 19:22:19.904835 kernel: psci: SMC Calling Convention v1.1
Feb 13 19:22:19.904843 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Feb 13 19:22:19.904850 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 19:22:19.904856 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 19:22:19.904863 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Feb 13 19:22:19.904869 kernel: Detected PIPT I-cache on CPU0
Feb 13 19:22:19.904876 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 19:22:19.904883 kernel: CPU features: detected: Hardware dirty bit management
Feb 13 19:22:19.904889 kernel: CPU features: detected: Spectre-v4
Feb 13 19:22:19.904895 kernel: CPU features: detected: Spectre-BHB
Feb 13 19:22:19.904902 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 13 19:22:19.904910 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 13 19:22:19.904916 kernel: CPU features: detected: ARM erratum 1418040
Feb 13 19:22:19.904923 kernel: CPU features: detected: SSBS not fully self-synchronizing
Feb 13 19:22:19.904929 kernel: alternatives: applying boot alternatives
Feb 13 19:22:19.904937 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=5785d28b783f64f8b8d29b6ea80baf9f88b0129b21e0dd81447612b348e04e7a
Feb 13 19:22:19.904944 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 19:22:19.904950 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 19:22:19.904957 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 19:22:19.904963 kernel: Fallback order for Node 0: 0
Feb 13 19:22:19.904970 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Feb 13 19:22:19.904976 kernel: Policy zone: DMA
Feb 13 19:22:19.904984 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 19:22:19.904991 kernel: software IO TLB: area num 4.
Feb 13 19:22:19.904997 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Feb 13 19:22:19.905004 kernel: Memory: 2386324K/2572288K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39680K init, 897K bss, 185964K reserved, 0K cma-reserved)
Feb 13 19:22:19.905011 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 13 19:22:19.905017 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 19:22:19.905024 kernel: rcu: RCU event tracing is enabled.
Feb 13 19:22:19.905031 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 13 19:22:19.905038 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 19:22:19.905044 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 19:22:19.905051 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 19:22:19.905057 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 13 19:22:19.905065 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 19:22:19.905072 kernel: GICv3: 256 SPIs implemented
Feb 13 19:22:19.905078 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 19:22:19.905085 kernel: Root IRQ handler: gic_handle_irq
Feb 13 19:22:19.905091 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Feb 13 19:22:19.905098 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Feb 13 19:22:19.905104 kernel: ITS [mem 0x08080000-0x0809ffff]
Feb 13 19:22:19.905110 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 19:22:19.905117 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 19:22:19.905124 kernel: GICv3: using LPI property table @0x00000000400f0000
Feb 13 19:22:19.905130 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Feb 13 19:22:19.905138 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 19:22:19.905145 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 19:22:19.905151 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 13 19:22:19.905158 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 13 19:22:19.905164 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 13 19:22:19.905171 kernel: arm-pv: using stolen time PV
Feb 13 19:22:19.905178 kernel: Console: colour dummy device 80x25
Feb 13 19:22:19.905184 kernel: ACPI: Core revision 20230628
Feb 13 19:22:19.905191 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 13 19:22:19.905198 kernel: pid_max: default: 32768 minimum: 301
Feb 13 19:22:19.905206 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 19:22:19.905213 kernel: landlock: Up and running.
Feb 13 19:22:19.905220 kernel: SELinux: Initializing.
Feb 13 19:22:19.905226 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:22:19.905233 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:22:19.905240 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 19:22:19.905247 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 19:22:19.905254 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 19:22:19.905261 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 19:22:19.905268 kernel: Platform MSI: ITS@0x8080000 domain created
Feb 13 19:22:19.905275 kernel: PCI/MSI: ITS@0x8080000 domain created
Feb 13 19:22:19.905282 kernel: Remapping and enabling EFI services.
Feb 13 19:22:19.905288 kernel: smp: Bringing up secondary CPUs ...
Feb 13 19:22:19.905295 kernel: Detected PIPT I-cache on CPU1
Feb 13 19:22:19.905302 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Feb 13 19:22:19.905309 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Feb 13 19:22:19.905315 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 19:22:19.905322 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 13 19:22:19.905329 kernel: Detected PIPT I-cache on CPU2
Feb 13 19:22:19.905337 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Feb 13 19:22:19.905344 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Feb 13 19:22:19.905355 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 19:22:19.905364 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Feb 13 19:22:19.905371 kernel: Detected PIPT I-cache on CPU3
Feb 13 19:22:19.905378 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Feb 13 19:22:19.905385 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Feb 13 19:22:19.905392 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 19:22:19.905399 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Feb 13 19:22:19.905408 kernel: smp: Brought up 1 node, 4 CPUs
Feb 13 19:22:19.905415 kernel: SMP: Total of 4 processors activated.
Feb 13 19:22:19.905422 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 19:22:19.905429 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 13 19:22:19.905436 kernel: CPU features: detected: Common not Private translations
Feb 13 19:22:19.905443 kernel: CPU features: detected: CRC32 instructions
Feb 13 19:22:19.905450 kernel: CPU features: detected: Enhanced Virtualization Traps
Feb 13 19:22:19.905457 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 13 19:22:19.905466 kernel: CPU features: detected: LSE atomic instructions
Feb 13 19:22:19.905473 kernel: CPU features: detected: Privileged Access Never
Feb 13 19:22:19.905480 kernel: CPU features: detected: RAS Extension Support
Feb 13 19:22:19.905487 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Feb 13 19:22:19.905494 kernel: CPU: All CPU(s) started at EL1
Feb 13 19:22:19.905501 kernel: alternatives: applying system-wide alternatives
Feb 13 19:22:19.905508 kernel: devtmpfs: initialized
Feb 13 19:22:19.905515 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 19:22:19.905522 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 13 19:22:19.905530 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 19:22:19.905538 kernel: SMBIOS 3.0.0 present.
Feb 13 19:22:19.905545 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Feb 13 19:22:19.905552 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 19:22:19.905559 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 19:22:19.905567 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 19:22:19.905574 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 19:22:19.905581 kernel: audit: initializing netlink subsys (disabled)
Feb 13 19:22:19.905589 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
Feb 13 19:22:19.905597 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 19:22:19.905604 kernel: cpuidle: using governor menu
Feb 13 19:22:19.905611 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 19:22:19.905618 kernel: ASID allocator initialised with 32768 entries
Feb 13 19:22:19.905625 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 19:22:19.905632 kernel: Serial: AMBA PL011 UART driver
Feb 13 19:22:19.905639 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Feb 13 19:22:19.905646 kernel: Modules: 0 pages in range for non-PLT usage
Feb 13 19:22:19.905653 kernel: Modules: 508960 pages in range for PLT usage
Feb 13 19:22:19.905662 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 19:22:19.905669 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 19:22:19.905676 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 19:22:19.905683 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 19:22:19.905690 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 19:22:19.905698 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 19:22:19.905705 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 19:22:19.905716 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 19:22:19.905723 kernel: ACPI: Added _OSI(Module Device)
Feb 13 19:22:19.905732 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 19:22:19.905745 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 19:22:19.905753 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 19:22:19.905760 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 19:22:19.905767 kernel: ACPI: Interpreter enabled
Feb 13 19:22:19.905774 kernel: ACPI: Using GIC for interrupt routing
Feb 13 19:22:19.905781 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 19:22:19.905788 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Feb 13 19:22:19.905795 kernel: printk: console [ttyAMA0] enabled
Feb 13 19:22:19.905802 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 19:22:19.905933 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 19:22:19.906006 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 19:22:19.906071 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 19:22:19.906135 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Feb 13 19:22:19.906198 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Feb 13 19:22:19.906207 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Feb 13 19:22:19.906217 kernel: PCI host bridge to bus 0000:00
Feb 13 19:22:19.906287 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Feb 13 19:22:19.906346 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 13 19:22:19.906403 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Feb 13 19:22:19.906460 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 19:22:19.906538 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Feb 13 19:22:19.906612 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 19:22:19.906681 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Feb 13 19:22:19.906767 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Feb 13 19:22:19.906853 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 19:22:19.906920 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 19:22:19.906986 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Feb 13 19:22:19.907051 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Feb 13 19:22:19.907110 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Feb 13 19:22:19.907171 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 13 19:22:19.907228 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Feb 13 19:22:19.907237 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 19:22:19.907244 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 19:22:19.907251 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 19:22:19.907258 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 19:22:19.907265 kernel: iommu: Default domain type: Translated
Feb 13 19:22:19.907273 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 19:22:19.907282 kernel: efivars: Registered efivars operations
Feb 13 19:22:19.907289 kernel: vgaarb: loaded
Feb 13 19:22:19.907296 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 19:22:19.907303 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 19:22:19.907310 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 19:22:19.907318 kernel: pnp: PnP ACPI init
Feb 13 19:22:19.907391 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Feb 13 19:22:19.907401 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 19:22:19.907410 kernel: NET: Registered PF_INET protocol family
Feb 13 19:22:19.907417 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 19:22:19.907424 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 19:22:19.907431 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 19:22:19.907439 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 19:22:19.907446 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 19:22:19.907453 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 19:22:19.907460 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:22:19.907467 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:22:19.907475 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 19:22:19.907482 kernel: PCI: CLS 0 bytes, default 64
Feb 13 19:22:19.907489 kernel: kvm [1]: HYP mode not available
Feb 13 19:22:19.907496 kernel: Initialise system trusted keyrings
Feb 13 19:22:19.907504 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 19:22:19.907511 kernel: Key type asymmetric registered
Feb 13 19:22:19.907518 kernel: Asymmetric key parser 'x509' registered
Feb 13 19:22:19.907525 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 19:22:19.907532 kernel: io scheduler mq-deadline registered
Feb 13 19:22:19.907546 kernel: io scheduler kyber registered
Feb 13 19:22:19.907553 kernel: io scheduler bfq registered
Feb 13 19:22:19.907560 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 13 19:22:19.907567 kernel: ACPI: button: Power Button [PWRB]
Feb 13 19:22:19.907574 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 13 19:22:19.907639 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Feb 13 19:22:19.907649 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 19:22:19.907656 kernel: thunder_xcv, ver 1.0
Feb 13 19:22:19.907663 kernel: thunder_bgx, ver 1.0
Feb 13 19:22:19.907672 kernel: nicpf, ver 1.0
Feb 13 19:22:19.907679 kernel: nicvf, ver 1.0
Feb 13 19:22:19.907859 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 19:22:19.907928 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T19:22:19 UTC (1739474539)
Feb 13 19:22:19.907938 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 19:22:19.907945 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Feb 13 19:22:19.907952 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 19:22:19.907959 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 19:22:19.907970 kernel: NET: Registered PF_INET6 protocol family
Feb 13 19:22:19.907977 kernel: Segment Routing with IPv6
Feb 13 19:22:19.907984 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 19:22:19.907991 kernel: NET: Registered PF_PACKET protocol family
Feb 13 19:22:19.907998 kernel: Key type dns_resolver registered
Feb 13 19:22:19.908005 kernel: registered taskstats version 1
Feb 13 19:22:19.908012 kernel: Loading compiled-in X.509 certificates
Feb 13 19:22:19.908019 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 916055ad16f0ba578cce640a9ac58627fd43c936'
Feb 13 19:22:19.908026 kernel: Key type .fscrypt registered
Feb 13 19:22:19.908033 kernel: Key type fscrypt-provisioning registered
Feb 13 19:22:19.908042 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 19:22:19.908049 kernel: ima: Allocated hash algorithm: sha1
Feb 13 19:22:19.908056 kernel: ima: No architecture policies found
Feb 13 19:22:19.908063 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 19:22:19.908070 kernel: clk: Disabling unused clocks
Feb 13 19:22:19.908077 kernel: Freeing unused kernel memory: 39680K
Feb 13 19:22:19.908084 kernel: Run /init as init process
Feb 13 19:22:19.908091 kernel: with arguments:
Feb 13 19:22:19.908099 kernel: /init
Feb 13 19:22:19.908106 kernel: with environment:
Feb 13 19:22:19.908113 kernel: HOME=/
Feb 13 19:22:19.908120 kernel: TERM=linux
Feb 13 19:22:19.908127 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 19:22:19.908136 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 19:22:19.908145 systemd[1]: Detected virtualization kvm.
Feb 13 19:22:19.908153 systemd[1]: Detected architecture arm64.
Feb 13 19:22:19.908162 systemd[1]: Running in initrd.
Feb 13 19:22:19.908169 systemd[1]: No hostname configured, using default hostname.
Feb 13 19:22:19.908176 systemd[1]: Hostname set to .
Feb 13 19:22:19.908184 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:22:19.908192 systemd[1]: Queued start job for default target initrd.target.
Feb 13 19:22:19.908199 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:22:19.908207 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:22:19.908215 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 19:22:19.908225 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:22:19.908232 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 19:22:19.908240 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 19:22:19.908249 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 19:22:19.908257 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 19:22:19.908265 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:22:19.908272 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:22:19.908282 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:22:19.908289 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:22:19.908297 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:22:19.908304 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:22:19.908312 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:22:19.908320 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:22:19.908327 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 19:22:19.908335 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 19:22:19.908344 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:22:19.908352 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:22:19.908359 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:22:19.908367 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:22:19.908374 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 19:22:19.908382 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:22:19.908390 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 19:22:19.908398 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 19:22:19.908405 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:22:19.908414 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:22:19.908422 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:22:19.908430 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 19:22:19.908437 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:22:19.908445 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 19:22:19.908453 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 19:22:19.908478 systemd-journald[239]: Collecting audit messages is disabled.
Feb 13 19:22:19.908497 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:22:19.908507 systemd-journald[239]: Journal started
Feb 13 19:22:19.908525 systemd-journald[239]: Runtime Journal (/run/log/journal/1d3cfc9613f34a5191c973d6701a91a8) is 5.9M, max 47.3M, 41.4M free.
Feb 13 19:22:19.899788 systemd-modules-load[240]: Inserted module 'overlay'
Feb 13 19:22:19.913200 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:22:19.913234 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:22:19.913244 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 19:22:19.912795 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:22:19.916795 kernel: Bridge firewalling registered
Feb 13 19:22:19.916788 systemd-modules-load[240]: Inserted module 'br_netfilter'
Feb 13 19:22:19.917559 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:22:19.919055 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:22:19.920652 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:22:19.923157 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:22:19.931220 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:22:19.933356 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:22:19.936643 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:22:19.938578 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:22:19.945998 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 19:22:19.947918 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:22:19.957289 dracut-cmdline[275]: dracut-dracut-053
Feb 13 19:22:19.959671 dracut-cmdline[275]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=5785d28b783f64f8b8d29b6ea80baf9f88b0129b21e0dd81447612b348e04e7a
Feb 13 19:22:19.973892 systemd-resolved[279]: Positive Trust Anchors:
Feb 13 19:22:19.973963 systemd-resolved[279]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:22:19.973994 systemd-resolved[279]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:22:19.978523 systemd-resolved[279]: Defaulting to hostname 'linux'.
Feb 13 19:22:19.979808 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:22:19.980918 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:22:20.025760 kernel: SCSI subsystem initialized
Feb 13 19:22:20.029750 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 19:22:20.036754 kernel: iscsi: registered transport (tcp)
Feb 13 19:22:20.051775 kernel: iscsi: registered transport (qla4xxx)
Feb 13 19:22:20.051818 kernel: QLogic iSCSI HBA Driver
Feb 13 19:22:20.094672 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:22:20.105975 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 19:22:20.123256 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 19:22:20.123303 kernel: device-mapper: uevent: version 1.0.3
Feb 13 19:22:20.123325 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 19:22:20.168765 kernel: raid6: neonx8 gen() 15783 MB/s
Feb 13 19:22:20.185753 kernel: raid6: neonx4 gen() 15634 MB/s
Feb 13 19:22:20.202756 kernel: raid6: neonx2 gen() 13253 MB/s
Feb 13 19:22:20.219760 kernel: raid6: neonx1 gen() 10486 MB/s
Feb 13 19:22:20.236765 kernel: raid6: int64x8 gen() 6950 MB/s
Feb 13 19:22:20.253754 kernel: raid6: int64x4 gen() 7338 MB/s
Feb 13 19:22:20.270755 kernel: raid6: int64x2 gen() 6123 MB/s
Feb 13 19:22:20.287762 kernel: raid6: int64x1 gen() 5055 MB/s
Feb 13 19:22:20.287790 kernel: raid6: using algorithm neonx8 gen() 15783 MB/s
Feb 13 19:22:20.304763 kernel: raid6: .... xor() 11906 MB/s, rmw enabled
Feb 13 19:22:20.304780 kernel: raid6: using neon recovery algorithm
Feb 13 19:22:20.310945 kernel: xor: measuring software checksum speed
Feb 13 19:22:20.310971 kernel: 8regs : 19816 MB/sec
Feb 13 19:22:20.311959 kernel: 32regs : 19636 MB/sec
Feb 13 19:22:20.311972 kernel: arm64_neon : 27123 MB/sec
Feb 13 19:22:20.311981 kernel: xor: using function: arm64_neon (27123 MB/sec)
Feb 13 19:22:20.365766 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 19:22:20.376797 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:22:20.393994 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:22:20.404938 systemd-udevd[462]: Using default interface naming scheme 'v255'.
Feb 13 19:22:20.408099 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:22:20.421943 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 19:22:20.433790 dracut-pre-trigger[469]: rd.md=0: removing MD RAID activation
Feb 13 19:22:20.463814 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:22:20.474928 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:22:20.515916 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:22:20.525467 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 19:22:20.536685 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:22:20.540324 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:22:20.541635 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:22:20.543364 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:22:20.548872 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 19:22:20.560102 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:22:20.569888 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:22:20.575625 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Feb 13 19:22:20.578893 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 13 19:22:20.578991 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 19:22:20.579011 kernel: GPT:9289727 != 19775487
Feb 13 19:22:20.579020 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 19:22:20.579029 kernel: GPT:9289727 != 19775487
Feb 13 19:22:20.579037 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 19:22:20.579048 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:22:20.570008 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:22:20.580402 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:22:20.581951 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:22:20.582091 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:22:20.583817 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:22:20.590770 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (517)
Feb 13 19:22:20.594760 kernel: BTRFS: device fsid 44fbcf53-fa5f-4fd4-b434-f067731b9a44 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (508)
Feb 13 19:22:20.592099 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:22:20.605308 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Feb 13 19:22:20.607242 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:22:20.612547 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Feb 13 19:22:20.616805 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 19:22:20.622919 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Feb 13 19:22:20.623818 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Feb 13 19:22:20.635896 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 19:22:20.637776 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:22:20.643538 disk-uuid[552]: Primary Header is updated.
Feb 13 19:22:20.643538 disk-uuid[552]: Secondary Entries is updated.
Feb 13 19:22:20.643538 disk-uuid[552]: Secondary Header is updated.
Feb 13 19:22:20.647268 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:22:20.659381 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:22:21.657768 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:22:21.660331 disk-uuid[555]: The operation has completed successfully.
Feb 13 19:22:21.677934 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 19:22:21.678030 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 19:22:21.699962 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 19:22:21.702821 sh[575]: Success
Feb 13 19:22:21.716188 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 19:22:21.749840 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 19:22:21.762022 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 19:22:21.763528 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 19:22:21.773761 kernel: BTRFS info (device dm-0): first mount of filesystem 44fbcf53-fa5f-4fd4-b434-f067731b9a44
Feb 13 19:22:21.773809 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:22:21.773820 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 19:22:21.773831 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 19:22:21.774806 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 19:22:21.778167 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 19:22:21.779264 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 19:22:21.791928 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 19:22:21.793297 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 19:22:21.802155 kernel: BTRFS info (device vda6): first mount of filesystem 76ff7707-a10f-40e5-bc71-1b3a44c2c51f
Feb 13 19:22:21.802207 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:22:21.802218 kernel: BTRFS info (device vda6): using free space tree
Feb 13 19:22:21.803773 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 19:22:21.811406 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 19:22:21.812760 kernel: BTRFS info (device vda6): last unmount of filesystem 76ff7707-a10f-40e5-bc71-1b3a44c2c51f
Feb 13 19:22:21.819785 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 19:22:21.825890 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 19:22:21.891935 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:22:21.901938 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:22:21.926319 ignition[668]: Ignition 2.20.0
Feb 13 19:22:21.926330 ignition[668]: Stage: fetch-offline
Feb 13 19:22:21.926363 ignition[668]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:22:21.926371 ignition[668]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:22:21.926610 ignition[668]: parsed url from cmdline: ""
Feb 13 19:22:21.926614 ignition[668]: no config URL provided
Feb 13 19:22:21.926618 ignition[668]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 19:22:21.926626 ignition[668]: no config at "/usr/lib/ignition/user.ign"
Feb 13 19:22:21.926653 ignition[668]: op(1): [started] loading QEMU firmware config module
Feb 13 19:22:21.926657 ignition[668]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 13 19:22:21.932993 systemd-networkd[768]: lo: Link UP
Feb 13 19:22:21.933004 systemd-networkd[768]: lo: Gained carrier
Feb 13 19:22:21.933927 systemd-networkd[768]: Enumeration completed
Feb 13 19:22:21.934438 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:22:21.934442 systemd-networkd[768]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:22:21.935150 systemd-networkd[768]: eth0: Link UP
Feb 13 19:22:21.937228 ignition[668]: op(1): [finished] loading QEMU firmware config module
Feb 13 19:22:21.935153 systemd-networkd[768]: eth0: Gained carrier
Feb 13 19:22:21.935160 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:22:21.936665 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:22:21.938355 systemd[1]: Reached target network.target - Network.
Feb 13 19:22:21.956780 systemd-networkd[768]: eth0: DHCPv4 address 10.0.0.135/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 19:22:21.980758 ignition[668]: parsing config with SHA512: 0a87a5d7ed8f388b1c72bc245a6ddc7600da9501526f23e7fbede8eec1e4801564735d4becf7d1f8c08aace659a79f5fe39ad333d369570af6ae551341e38420
Feb 13 19:22:21.986714 unknown[668]: fetched base config from "system"
Feb 13 19:22:21.986726 unknown[668]: fetched user config from "qemu"
Feb 13 19:22:21.987149 ignition[668]: fetch-offline: fetch-offline passed
Feb 13 19:22:21.987220 ignition[668]: Ignition finished successfully
Feb 13 19:22:21.989050 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:22:21.990095 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 13 19:22:22.002946 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 19:22:22.013278 ignition[774]: Ignition 2.20.0
Feb 13 19:22:22.013290 ignition[774]: Stage: kargs
Feb 13 19:22:22.013460 ignition[774]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:22:22.013469 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:22:22.014477 ignition[774]: kargs: kargs passed
Feb 13 19:22:22.014524 ignition[774]: Ignition finished successfully
Feb 13 19:22:22.017319 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 19:22:22.024919 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 19:22:22.034444 ignition[782]: Ignition 2.20.0
Feb 13 19:22:22.034454 ignition[782]: Stage: disks
Feb 13 19:22:22.034607 ignition[782]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:22:22.034616 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:22:22.035436 ignition[782]: disks: disks passed
Feb 13 19:22:22.037790 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 19:22:22.035480 ignition[782]: Ignition finished successfully
Feb 13 19:22:22.038718 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 19:22:22.039663 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 19:22:22.041128 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:22:22.042326 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:22:22.043659 systemd[1]: Reached target basic.target - Basic System.
Feb 13 19:22:22.061897 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 19:22:22.071203 systemd-fsck[793]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 19:22:22.075181 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 19:22:22.076973 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 19:22:22.123565 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 19:22:22.124765 kernel: EXT4-fs (vda9): mounted filesystem e24df12d-6575-4a90-bef9-33573b9d63e7 r/w with ordered data mode. Quota mode: none.
Feb 13 19:22:22.124651 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:22:22.134832 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:22:22.136345 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 19:22:22.137340 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 19:22:22.137413 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 19:22:22.137458 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:22:22.143560 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (801)
Feb 13 19:22:22.143407 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 19:22:22.145035 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 19:22:22.148761 kernel: BTRFS info (device vda6): first mount of filesystem 76ff7707-a10f-40e5-bc71-1b3a44c2c51f
Feb 13 19:22:22.148786 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:22:22.148797 kernel: BTRFS info (device vda6): using free space tree
Feb 13 19:22:22.148806 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 19:22:22.150727 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:22:22.193981 initrd-setup-root[825]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 19:22:22.197999 initrd-setup-root[832]: cut: /sysroot/etc/group: No such file or directory
Feb 13 19:22:22.201248 initrd-setup-root[839]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 19:22:22.204150 initrd-setup-root[846]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 19:22:22.273559 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 19:22:22.282863 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 19:22:22.284199 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 19:22:22.288758 kernel: BTRFS info (device vda6): last unmount of filesystem 76ff7707-a10f-40e5-bc71-1b3a44c2c51f
Feb 13 19:22:22.302509 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 19:22:22.306807 ignition[914]: INFO : Ignition 2.20.0
Feb 13 19:22:22.306807 ignition[914]: INFO : Stage: mount
Feb 13 19:22:22.306807 ignition[914]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:22:22.306807 ignition[914]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:22:22.311102 ignition[914]: INFO : mount: mount passed
Feb 13 19:22:22.311102 ignition[914]: INFO : Ignition finished successfully
Feb 13 19:22:22.308196 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 19:22:22.326919 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 19:22:22.772550 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 19:22:22.784935 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:22:22.791136 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (927)
Feb 13 19:22:22.791165 kernel: BTRFS info (device vda6): first mount of filesystem 76ff7707-a10f-40e5-bc71-1b3a44c2c51f
Feb 13 19:22:22.791880 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:22:22.791895 kernel: BTRFS info (device vda6): using free space tree
Feb 13 19:22:22.794772 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 19:22:22.795372 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:22:22.820930 ignition[944]: INFO : Ignition 2.20.0
Feb 13 19:22:22.820930 ignition[944]: INFO : Stage: files
Feb 13 19:22:22.822192 ignition[944]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:22:22.822192 ignition[944]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:22:22.822192 ignition[944]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 19:22:22.824728 ignition[944]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 19:22:22.824728 ignition[944]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 19:22:22.828426 ignition[944]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 19:22:22.829495 ignition[944]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 19:22:22.829495 ignition[944]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 19:22:22.829076 unknown[944]: wrote ssh authorized keys file for user: core
Feb 13 19:22:22.832373 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 19:22:22.832373 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 13 19:22:22.892201 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 19:22:23.131926 systemd-networkd[768]: eth0: Gained IPv6LL
Feb 13 19:22:23.310148 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 19:22:23.311786 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 19:22:23.311786 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 19:22:23.311786 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 19:22:23.311786 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 19:22:23.311786 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 19:22:23.311786 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 19:22:23.311786 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 19:22:23.311786 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 19:22:23.311786 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:22:23.311786 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:22:23.311786 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 19:22:23.311786 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 19:22:23.311786 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 19:22:23.311786 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
Feb 13 19:22:23.597287 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Feb 13 19:22:23.801693 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 19:22:23.801693 ignition[944]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Feb 13 19:22:23.804473 ignition[944]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 19:22:23.804473 ignition[944]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 19:22:23.804473 ignition[944]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Feb 13 19:22:23.804473 ignition[944]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Feb 13 19:22:23.804473 ignition[944]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 19:22:23.804473 ignition[944]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 19:22:23.804473 ignition[944]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Feb 13 19:22:23.804473 ignition[944]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Feb 13 19:22:23.829220 ignition[944]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 19:22:23.832858 ignition[944]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 19:22:23.835213 ignition[944]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 13 19:22:23.835213 ignition[944]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 19:22:23.835213 ignition[944]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 19:22:23.835213 ignition[944]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:22:23.835213 ignition[944]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:22:23.835213 ignition[944]: INFO : files: files passed
Feb 13 19:22:23.835213 ignition[944]: INFO : Ignition finished successfully
Feb 13 19:22:23.835728 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 19:22:23.847937 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 19:22:23.850016 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 19:22:23.851539 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 19:22:23.851618 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 19:22:23.856963 initrd-setup-root-after-ignition[972]: grep: /sysroot/oem/oem-release: No such file or directory
Feb 13 19:22:23.858918 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:22:23.858918 initrd-setup-root-after-ignition[974]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:22:23.861192 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:22:23.861325 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:22:23.863316 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 19:22:23.872943 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 19:22:23.893578 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 19:22:23.893733 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 19:22:23.895390 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 19:22:23.896647 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 19:22:23.898034 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 19:22:23.898965 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 19:22:23.913295 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:22:23.915454 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 19:22:23.927187 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:22:23.928175 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:22:23.929098 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 19:22:23.930422 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 19:22:23.930536 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:22:23.932754 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 19:22:23.934084 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 19:22:23.935443 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 19:22:23.936766 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:22:23.938051 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 19:22:23.939465 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 19:22:23.940852 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:22:23.942365 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 19:22:23.943650 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 19:22:23.945077 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 19:22:23.946286 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 19:22:23.946441 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:22:23.948125 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:22:23.949496 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:22:23.950877 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 19:22:23.954809 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:22:23.955735 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 19:22:23.955871 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:22:23.957926 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 19:22:23.958041 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:22:23.959506 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 19:22:23.960641 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 19:22:23.963883 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:22:23.964875 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 19:22:23.966435 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 19:22:23.967635 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 19:22:23.967734 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:22:23.968855 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 19:22:23.968932 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:22:23.970136 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 19:22:23.970239 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:22:23.971551 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 19:22:23.971650 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 19:22:23.981931 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 19:22:23.982610 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 19:22:23.982762 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:22:23.985352 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 19:22:23.986151 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 19:22:23.986274 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:22:23.987701 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 19:22:23.987844 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:22:23.992774 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 19:22:23.992950 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 19:22:23.999779 ignition[998]: INFO : Ignition 2.20.0
Feb 13 19:22:23.999779 ignition[998]: INFO : Stage: umount
Feb 13 19:22:24.001380 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:22:24.001380 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:22:24.001380 ignition[998]: INFO : umount: umount passed
Feb 13 19:22:24.001380 ignition[998]: INFO : Ignition finished successfully
Feb 13 19:22:24.000392 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 19:22:24.002481 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 19:22:24.002583 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 19:22:24.003974 systemd[1]: Stopped target network.target - Network.
Feb 13 19:22:24.004990 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 19:22:24.005048 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 19:22:24.006409 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 19:22:24.006452 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 19:22:24.007578 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 19:22:24.007616 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 19:22:24.008942 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 19:22:24.008980 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 19:22:24.010343 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 19:22:24.011500 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 19:22:24.020797 systemd-networkd[768]: eth0: DHCPv6 lease lost
Feb 13 19:22:24.022155 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 19:22:24.022278 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 19:22:24.024485 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 19:22:24.024611 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 19:22:24.026664 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 19:22:24.026750 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:22:24.036899 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 19:22:24.037573 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 19:22:24.037635 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:22:24.039081 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 19:22:24.039119 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:22:24.040483 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 19:22:24.040523 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:22:24.042159 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 19:22:24.042200 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:22:24.043618 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:22:24.065192 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 19:22:24.065363 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:22:24.067286 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 19:22:24.067457 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 19:22:24.068806 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 19:22:24.068902 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 19:22:24.070478 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 19:22:24.070540 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:22:24.071900 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 19:22:24.071932 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:22:24.073163 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 19:22:24.073207 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:22:24.075242 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 19:22:24.075284 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:22:24.077321 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:22:24.077362 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:22:24.079409 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 19:22:24.079450 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 19:22:24.088907 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 19:22:24.089677 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 19:22:24.089767 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:22:24.091395 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:22:24.091451 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:22:24.095802 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 19:22:24.095905 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 19:22:24.097545 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 19:22:24.100932 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 19:22:24.111869 systemd[1]: Switching root.
Feb 13 19:22:24.139775 systemd-journald[239]: Journal stopped
Feb 13 19:22:24.846290 systemd-journald[239]: Received SIGTERM from PID 1 (systemd).
Feb 13 19:22:24.846341 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 19:22:24.846356 kernel: SELinux: policy capability open_perms=1
Feb 13 19:22:24.846365 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 19:22:24.846380 kernel: SELinux: policy capability always_check_network=0
Feb 13 19:22:24.846389 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 19:22:24.846399 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 19:22:24.846408 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 19:22:24.846417 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 19:22:24.846426 kernel: audit: type=1403 audit(1739474544.297:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 19:22:24.846437 systemd[1]: Successfully loaded SELinux policy in 31.584ms.
Feb 13 19:22:24.846456 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.426ms.
Feb 13 19:22:24.846468 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 19:22:24.846478 systemd[1]: Detected virtualization kvm.
Feb 13 19:22:24.846489 systemd[1]: Detected architecture arm64.
Feb 13 19:22:24.846500 systemd[1]: Detected first boot.
Feb 13 19:22:24.846510 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:22:24.846520 zram_generator::config[1043]: No configuration found.
Feb 13 19:22:24.846531 systemd[1]: Populated /etc with preset unit settings.
Feb 13 19:22:24.846543 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 19:22:24.846553 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 19:22:24.846563 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 19:22:24.846575 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 19:22:24.846585 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 19:22:24.846596 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 19:22:24.846607 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 19:22:24.846618 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 19:22:24.846629 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 19:22:24.846640 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 19:22:24.846650 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 19:22:24.846661 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:22:24.846671 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:22:24.846693 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 19:22:24.846706 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 19:22:24.846717 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 19:22:24.846727 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:22:24.846753 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Feb 13 19:22:24.846764 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:22:24.846775 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 19:22:24.846785 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 19:22:24.846796 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:22:24.846806 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 19:22:24.846816 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:22:24.846827 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:22:24.846839 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:22:24.846849 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:22:24.846861 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 19:22:24.846872 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 19:22:24.846882 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:22:24.846892 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:22:24.846902 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:22:24.846912 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 19:22:24.846922 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 19:22:24.846934 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 19:22:24.846945 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 19:22:24.846955 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 19:22:24.846966 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 19:22:24.846976 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 19:22:24.846987 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 19:22:24.846998 systemd[1]: Reached target machines.target - Containers.
Feb 13 19:22:24.847008 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 19:22:24.847018 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:22:24.847030 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:22:24.847040 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 19:22:24.847051 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:22:24.847061 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 19:22:24.847071 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:22:24.847082 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 19:22:24.847092 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:22:24.847104 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 19:22:24.847115 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 19:22:24.847126 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 19:22:24.847136 kernel: fuse: init (API version 7.39)
Feb 13 19:22:24.847146 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 19:22:24.847156 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 19:22:24.847166 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:22:24.847176 kernel: ACPI: bus type drm_connector registered
Feb 13 19:22:24.847186 kernel: loop: module loaded
Feb 13 19:22:24.847196 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:22:24.847208 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 19:22:24.847218 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 19:22:24.847228 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:22:24.847238 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 19:22:24.847249 systemd[1]: Stopped verity-setup.service.
Feb 13 19:22:24.847259 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 19:22:24.847269 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 19:22:24.847279 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 19:22:24.847289 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 19:22:24.847303 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 19:22:24.847329 systemd-journald[1110]: Collecting audit messages is disabled.
Feb 13 19:22:24.847351 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 19:22:24.847364 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 19:22:24.847375 systemd-journald[1110]: Journal started
Feb 13 19:22:24.847395 systemd-journald[1110]: Runtime Journal (/run/log/journal/1d3cfc9613f34a5191c973d6701a91a8) is 5.9M, max 47.3M, 41.4M free.
Feb 13 19:22:24.661716 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 19:22:24.679320 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Feb 13 19:22:24.679688 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 19:22:24.849308 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:22:24.850691 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:22:24.851889 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 19:22:24.852028 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 19:22:24.853155 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:22:24.853303 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:22:24.854371 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 19:22:24.854518 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 19:22:24.855575 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:22:24.855719 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:22:24.856840 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 19:22:24.856967 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 19:22:24.858125 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:22:24.858257 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:22:24.859369 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:22:24.860455 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 19:22:24.861619 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 19:22:24.873244 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 19:22:24.886878 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 19:22:24.888775 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 19:22:24.889562 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 19:22:24.889600 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:22:24.891264 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 13 19:22:24.893400 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 19:22:24.895617 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 19:22:24.896512 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:22:24.898126 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 19:22:24.899808 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 19:22:24.900712 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 19:22:24.901850 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 19:22:24.902689 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 19:22:24.903966 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:22:24.905790 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 19:22:24.907647 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 19:22:24.911726 systemd-journald[1110]: Time spent on flushing to /var/log/journal/1d3cfc9613f34a5191c973d6701a91a8 is 18.168ms for 855 entries.
Feb 13 19:22:24.911726 systemd-journald[1110]: System Journal (/var/log/journal/1d3cfc9613f34a5191c973d6701a91a8) is 8.0M, max 195.6M, 187.6M free.
Feb 13 19:22:24.934058 systemd-journald[1110]: Received client request to flush runtime journal.
Feb 13 19:22:24.934093 kernel: loop0: detected capacity change from 0 to 116808
Feb 13 19:22:24.916808 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:22:24.917926 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 19:22:24.918909 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 19:22:24.920168 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 19:22:24.934960 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 19:22:24.938217 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 19:22:24.941497 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 19:22:24.945631 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:22:24.952772 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 19:22:24.953506 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 19:22:24.960057 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Feb 13 19:22:24.961500 udevadm[1161]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Feb 13 19:22:24.974652 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 19:22:24.979935 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:22:24.980757 kernel: loop1: detected capacity change from 0 to 189592
Feb 13 19:22:24.981700 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 19:22:24.982777 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Feb 13 19:22:25.003400 systemd-tmpfiles[1173]: ACLs are not supported, ignoring.
Feb 13 19:22:25.003418 systemd-tmpfiles[1173]: ACLs are not supported, ignoring.
Feb 13 19:22:25.009937 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:22:25.016768 kernel: loop2: detected capacity change from 0 to 113536
Feb 13 19:22:25.043869 kernel: loop3: detected capacity change from 0 to 116808
Feb 13 19:22:25.048771 kernel: loop4: detected capacity change from 0 to 189592
Feb 13 19:22:25.053774 kernel: loop5: detected capacity change from 0 to 113536
Feb 13 19:22:25.056296 (sd-merge)[1180]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Feb 13 19:22:25.056770 (sd-merge)[1180]: Merged extensions into '/usr'.
Feb 13 19:22:25.060870 systemd[1]: Reloading requested from client PID 1154 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 19:22:25.060888 systemd[1]: Reloading...
Feb 13 19:22:25.101764 zram_generator::config[1205]: No configuration found.
Feb 13 19:22:25.180445 ldconfig[1149]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 19:22:25.222911 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:22:25.258735 systemd[1]: Reloading finished in 197 ms.
Feb 13 19:22:25.290777 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 19:22:25.291896 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 19:22:25.309957 systemd[1]: Starting ensure-sysext.service...
Feb 13 19:22:25.311887 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:22:25.318828 systemd[1]: Reloading requested from client PID 1240 ('systemctl') (unit ensure-sysext.service)...
Feb 13 19:22:25.318849 systemd[1]: Reloading...
Feb 13 19:22:25.330996 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 19:22:25.331588 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 19:22:25.332340 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 19:22:25.332650 systemd-tmpfiles[1241]: ACLs are not supported, ignoring.
Feb 13 19:22:25.332819 systemd-tmpfiles[1241]: ACLs are not supported, ignoring.
Feb 13 19:22:25.335584 systemd-tmpfiles[1241]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 19:22:25.335719 systemd-tmpfiles[1241]: Skipping /boot
Feb 13 19:22:25.346505 systemd-tmpfiles[1241]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 19:22:25.346669 systemd-tmpfiles[1241]: Skipping /boot
Feb 13 19:22:25.356812 zram_generator::config[1267]: No configuration found.
Feb 13 19:22:25.453546 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:22:25.488971 systemd[1]: Reloading finished in 169 ms.
Feb 13 19:22:25.505818 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 19:22:25.516183 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:22:25.524118 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 19:22:25.526448 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 19:22:25.530094 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 19:22:25.534097 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:22:25.536550 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:22:25.540468 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 19:22:25.543523 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:22:25.548054 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:22:25.553895 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:22:25.559058 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:22:25.559948 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:22:25.560700 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:22:25.563917 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:22:25.567118 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:22:25.567340 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:22:25.569707 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 19:22:25.571215 systemd-udevd[1312]: Using default interface naming scheme 'v255'.
Feb 13 19:22:25.573150 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:22:25.573407 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:22:25.582272 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:22:25.599036 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:22:25.602028 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:22:25.607049 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:22:25.608908 augenrules[1344]: No rules
Feb 13 19:22:25.608944 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:22:25.612953 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 19:22:25.617273 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 19:22:25.618789 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:22:25.620362 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 19:22:25.620540 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 19:22:25.621860 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 19:22:25.623279 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 19:22:25.624713 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:22:25.624879 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:22:25.626159 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:22:25.626290 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:22:25.627662 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:22:25.627848 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:22:25.631958 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 19:22:25.642950 systemd[1]: Finished ensure-sysext.service.
Feb 13 19:22:25.660960 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 19:22:25.661842 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:22:25.664412 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:22:25.666967 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 19:22:25.669070 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:22:25.671637 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:22:25.672561 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:22:25.678927 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:22:25.684100 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Feb 13 19:22:25.684952 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 19:22:25.685296 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 19:22:25.687229 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:22:25.687430 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:22:25.688580 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 19:22:25.688769 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 19:22:25.691070 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:22:25.691226 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:22:25.692333 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Feb 13 19:22:25.693561 augenrules[1374]: /sbin/augenrules: No change
Feb 13 19:22:25.698539 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:22:25.698704 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:22:25.703181 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 19:22:25.703270 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 19:22:25.705761 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1360)
Feb 13 19:22:25.707257 augenrules[1410]: No rules
Feb 13 19:22:25.708447 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 19:22:25.708642 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 19:22:25.745945 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 19:22:25.755909 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 19:22:25.771926 systemd-networkd[1387]: lo: Link UP
Feb 13 19:22:25.771933 systemd-networkd[1387]: lo: Gained carrier
Feb 13 19:22:25.772717 systemd-networkd[1387]: Enumeration completed
Feb 13 19:22:25.772869 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:22:25.775291 systemd-networkd[1387]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:22:25.775300 systemd-networkd[1387]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:22:25.775983 systemd-networkd[1387]: eth0: Link UP
Feb 13 19:22:25.775993 systemd-networkd[1387]: eth0: Gained carrier
Feb 13 19:22:25.776007 systemd-networkd[1387]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:22:25.782728 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 13 19:22:25.783861 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Feb 13 19:22:25.784754 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 19:22:25.786256 systemd-resolved[1309]: Positive Trust Anchors:
Feb 13 19:22:25.786328 systemd-resolved[1309]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:22:25.786360 systemd-resolved[1309]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:22:25.795878 systemd-networkd[1387]: eth0: DHCPv4 address 10.0.0.135/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 19:22:25.796634 systemd-timesyncd[1392]: Network configuration changed, trying to establish connection.
Feb 13 19:22:25.798508 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 19:22:26.284089 systemd-timesyncd[1392]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Feb 13 19:22:26.284152 systemd-timesyncd[1392]: Initial clock synchronization to Thu 2025-02-13 19:22:26.283985 UTC.
Feb 13 19:22:26.284346 systemd-resolved[1309]: Defaulting to hostname 'linux'.
Feb 13 19:22:26.296646 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:22:26.299161 systemd[1]: Reached target network.target - Network.
Feb 13 19:22:26.299875 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:22:26.309868 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:22:26.311655 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 19:22:26.314810 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 19:22:26.333649 lvm[1429]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 19:22:26.348881 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:22:26.355108 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 19:22:26.356583 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:22:26.359489 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:22:26.360445 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Feb 13 19:22:26.361453 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Feb 13 19:22:26.362682 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Feb 13 19:22:26.363624 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Feb 13 19:22:26.364500 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Feb 13 19:22:26.365545 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 13 19:22:26.365605 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:22:26.366260 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:22:26.367877 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 19:22:26.370195 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 19:22:26.382767 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 19:22:26.384978 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 19:22:26.386416 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 19:22:26.387452 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:22:26.388245 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:22:26.389025 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:22:26.389057 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:22:26.390063 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 19:22:26.391902 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 19:22:26.393177 lvm[1437]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:22:26.395501 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 19:22:26.399824 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 19:22:26.400723 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 19:22:26.406793 jq[1440]: false Feb 13 19:22:26.406826 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 19:22:26.408867 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Feb 13 19:22:26.411834 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 19:22:26.414850 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 19:22:26.420894 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 19:22:26.422727 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 19:22:26.423162 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 19:22:26.424115 extend-filesystems[1441]: Found loop3 Feb 13 19:22:26.424115 extend-filesystems[1441]: Found loop4 Feb 13 19:22:26.428185 extend-filesystems[1441]: Found loop5 Feb 13 19:22:26.428185 extend-filesystems[1441]: Found vda Feb 13 19:22:26.428185 extend-filesystems[1441]: Found vda1 Feb 13 19:22:26.428185 extend-filesystems[1441]: Found vda2 Feb 13 19:22:26.428185 extend-filesystems[1441]: Found vda3 Feb 13 19:22:26.428185 extend-filesystems[1441]: Found usr Feb 13 19:22:26.428185 extend-filesystems[1441]: Found vda4 Feb 13 19:22:26.428185 extend-filesystems[1441]: Found vda6 Feb 13 19:22:26.428185 extend-filesystems[1441]: Found vda7 Feb 13 19:22:26.428185 extend-filesystems[1441]: Found vda9 Feb 13 19:22:26.428185 extend-filesystems[1441]: Checking size of /dev/vda9 Feb 13 19:22:26.424710 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 19:22:26.429086 dbus-daemon[1439]: [system] SELinux support is enabled Feb 13 19:22:26.457094 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 13 19:22:26.457121 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1343) Feb 13 19:22:26.428470 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Feb 13 19:22:26.457292 extend-filesystems[1441]: Resized partition /dev/vda9 Feb 13 19:22:26.432181 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 19:22:26.460208 extend-filesystems[1462]: resize2fs 1.47.1 (20-May-2024) Feb 13 19:22:26.435167 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 19:22:26.457342 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 19:22:26.461501 jq[1455]: true Feb 13 19:22:26.457516 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 19:22:26.457891 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 19:22:26.458030 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 19:22:26.462707 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 19:22:26.463820 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 19:22:26.473338 update_engine[1451]: I20250213 19:22:26.473186 1451 main.cc:92] Flatcar Update Engine starting Feb 13 19:22:26.475683 update_engine[1451]: I20250213 19:22:26.475590 1451 update_check_scheduler.cc:74] Next update check in 2m20s Feb 13 19:22:26.479036 systemd-logind[1449]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 19:22:26.479272 systemd-logind[1449]: New seat seat0. Feb 13 19:22:26.479316 jq[1465]: true Feb 13 19:22:26.482791 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 19:22:26.483484 (ntainerd)[1472]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 19:22:26.491265 systemd[1]: Started update-engine.service - Update Engine. 
Feb 13 19:22:26.496130 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 19:22:26.496297 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 19:22:26.498216 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 19:22:26.498429 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 19:22:26.502281 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 19:22:26.507765 tar[1464]: linux-arm64/helm Feb 13 19:22:26.615622 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 13 19:22:26.720520 locksmithd[1479]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 19:22:26.892458 extend-filesystems[1462]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 19:22:26.892458 extend-filesystems[1462]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 19:22:26.892458 extend-filesystems[1462]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 13 19:22:26.896868 extend-filesystems[1441]: Resized filesystem in /dev/vda9 Feb 13 19:22:26.896502 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 19:22:26.901697 containerd[1472]: time="2025-02-13T19:22:26.892656263Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 19:22:26.901863 sshd_keygen[1463]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 19:22:26.896707 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Feb 13 19:22:26.906677 bash[1493]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:22:26.906511 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 19:22:26.908186 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Feb 13 19:22:26.920379 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 19:22:26.923220 containerd[1472]: time="2025-02-13T19:22:26.923178903Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:22:26.924590 containerd[1472]: time="2025-02-13T19:22:26.924543263Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:22:26.924590 containerd[1472]: time="2025-02-13T19:22:26.924581623Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 19:22:26.924665 containerd[1472]: time="2025-02-13T19:22:26.924641543Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 19:22:26.924838 containerd[1472]: time="2025-02-13T19:22:26.924804743Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 19:22:26.924838 containerd[1472]: time="2025-02-13T19:22:26.924830223Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 19:22:26.924919 containerd[1472]: time="2025-02-13T19:22:26.924899503Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:22:26.924940 containerd[1472]: time="2025-02-13T19:22:26.924919623Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:22:26.925101 containerd[1472]: time="2025-02-13T19:22:26.925072663Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:22:26.925101 containerd[1472]: time="2025-02-13T19:22:26.925094303Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 19:22:26.925141 containerd[1472]: time="2025-02-13T19:22:26.925107023Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:22:26.925141 containerd[1472]: time="2025-02-13T19:22:26.925116743Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 19:22:26.925195 containerd[1472]: time="2025-02-13T19:22:26.925181943Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:22:26.925385 containerd[1472]: time="2025-02-13T19:22:26.925364063Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:22:26.925476 containerd[1472]: time="2025-02-13T19:22:26.925461263Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:22:26.925496 containerd[1472]: time="2025-02-13T19:22:26.925477703Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 19:22:26.925572 containerd[1472]: time="2025-02-13T19:22:26.925558063Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 19:22:26.925713 containerd[1472]: time="2025-02-13T19:22:26.925697903Z" level=info msg="metadata content store policy set" policy=shared Feb 13 19:22:26.928038 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 19:22:26.930455 containerd[1472]: time="2025-02-13T19:22:26.929861983Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 19:22:26.930455 containerd[1472]: time="2025-02-13T19:22:26.929916623Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 19:22:26.930455 containerd[1472]: time="2025-02-13T19:22:26.929931503Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 19:22:26.930455 containerd[1472]: time="2025-02-13T19:22:26.929946823Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 19:22:26.930455 containerd[1472]: time="2025-02-13T19:22:26.929961343Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 19:22:26.930455 containerd[1472]: time="2025-02-13T19:22:26.930105063Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 19:22:26.930455 containerd[1472]: time="2025-02-13T19:22:26.930327303Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Feb 13 19:22:26.930455 containerd[1472]: time="2025-02-13T19:22:26.930421463Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 19:22:26.930455 containerd[1472]: time="2025-02-13T19:22:26.930437303Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 19:22:26.930455 containerd[1472]: time="2025-02-13T19:22:26.930450703Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 19:22:26.930455 containerd[1472]: time="2025-02-13T19:22:26.930464263Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 19:22:26.930733 containerd[1472]: time="2025-02-13T19:22:26.930477663Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 19:22:26.930733 containerd[1472]: time="2025-02-13T19:22:26.930490063Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 19:22:26.930733 containerd[1472]: time="2025-02-13T19:22:26.930503263Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 19:22:26.930733 containerd[1472]: time="2025-02-13T19:22:26.930516983Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 19:22:26.930733 containerd[1472]: time="2025-02-13T19:22:26.930538903Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 19:22:26.930733 containerd[1472]: time="2025-02-13T19:22:26.930552063Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Feb 13 19:22:26.930733 containerd[1472]: time="2025-02-13T19:22:26.930563983Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 19:22:26.930733 containerd[1472]: time="2025-02-13T19:22:26.930584183Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 19:22:26.930866 containerd[1472]: time="2025-02-13T19:22:26.930741343Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 19:22:26.930866 containerd[1472]: time="2025-02-13T19:22:26.930761783Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 19:22:26.930866 containerd[1472]: time="2025-02-13T19:22:26.930776143Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 19:22:26.930866 containerd[1472]: time="2025-02-13T19:22:26.930787783Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 19:22:26.930866 containerd[1472]: time="2025-02-13T19:22:26.930800743Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 19:22:26.930866 containerd[1472]: time="2025-02-13T19:22:26.930812783Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 19:22:26.930866 containerd[1472]: time="2025-02-13T19:22:26.930824823Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 19:22:26.930866 containerd[1472]: time="2025-02-13T19:22:26.930836943Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 19:22:26.930866 containerd[1472]: time="2025-02-13T19:22:26.930850663Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Feb 13 19:22:26.930866 containerd[1472]: time="2025-02-13T19:22:26.930861703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 19:22:26.931035 containerd[1472]: time="2025-02-13T19:22:26.930873783Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 19:22:26.931035 containerd[1472]: time="2025-02-13T19:22:26.930893343Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 19:22:26.931035 containerd[1472]: time="2025-02-13T19:22:26.930908303Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 19:22:26.931035 containerd[1472]: time="2025-02-13T19:22:26.930928543Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 19:22:26.931035 containerd[1472]: time="2025-02-13T19:22:26.930941783Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 19:22:26.931035 containerd[1472]: time="2025-02-13T19:22:26.930953343Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 19:22:26.931210 containerd[1472]: time="2025-02-13T19:22:26.931186783Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 19:22:26.931234 containerd[1472]: time="2025-02-13T19:22:26.931215743Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 19:22:26.931234 containerd[1472]: time="2025-02-13T19:22:26.931227063Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Feb 13 19:22:26.931388 containerd[1472]: time="2025-02-13T19:22:26.931346383Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 19:22:26.931388 containerd[1472]: time="2025-02-13T19:22:26.931369703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 19:22:26.931388 containerd[1472]: time="2025-02-13T19:22:26.931383903Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 19:22:26.931451 containerd[1472]: time="2025-02-13T19:22:26.931395143Z" level=info msg="NRI interface is disabled by configuration." Feb 13 19:22:26.931451 containerd[1472]: time="2025-02-13T19:22:26.931414263Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 19:22:26.931827 containerd[1472]: time="2025-02-13T19:22:26.931770143Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 
Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 19:22:26.931945 containerd[1472]: time="2025-02-13T19:22:26.931884023Z" level=info msg="Connect containerd service" Feb 13 19:22:26.931945 containerd[1472]: time="2025-02-13T19:22:26.931924863Z" level=info msg="using legacy CRI server" Feb 13 19:22:26.931945 containerd[1472]: time="2025-02-13T19:22:26.931932063Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 19:22:26.932266 containerd[1472]: 
time="2025-02-13T19:22:26.932194503Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 19:22:26.933167 containerd[1472]: time="2025-02-13T19:22:26.933142543Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:22:26.933481 containerd[1472]: time="2025-02-13T19:22:26.933445103Z" level=info msg="Start subscribing containerd event" Feb 13 19:22:26.933521 containerd[1472]: time="2025-02-13T19:22:26.933501423Z" level=info msg="Start recovering state" Feb 13 19:22:26.933898 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 19:22:26.934493 containerd[1472]: time="2025-02-13T19:22:26.933965863Z" level=info msg="Start event monitor" Feb 13 19:22:26.934493 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 19:22:26.934572 containerd[1472]: time="2025-02-13T19:22:26.934541863Z" level=info msg="Start snapshots syncer" Feb 13 19:22:26.934572 containerd[1472]: time="2025-02-13T19:22:26.934559623Z" level=info msg="Start cni network conf syncer for default" Feb 13 19:22:26.934572 containerd[1472]: time="2025-02-13T19:22:26.934566463Z" level=info msg="Start streaming server" Feb 13 19:22:26.934750 containerd[1472]: time="2025-02-13T19:22:26.934438383Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 19:22:26.934798 containerd[1472]: time="2025-02-13T19:22:26.934786583Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 19:22:26.935297 containerd[1472]: time="2025-02-13T19:22:26.935279023Z" level=info msg="containerd successfully booted in 0.122830s" Feb 13 19:22:26.935683 systemd[1]: Started containerd.service - containerd container runtime. 
Feb 13 19:22:26.950373 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 19:22:26.960625 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 19:22:26.970957 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 19:22:26.973022 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Feb 13 19:22:26.974023 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 19:22:27.098389 tar[1464]: linux-arm64/LICENSE Feb 13 19:22:27.098389 tar[1464]: linux-arm64/README.md Feb 13 19:22:27.112191 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 19:22:28.284778 systemd-networkd[1387]: eth0: Gained IPv6LL Feb 13 19:22:28.287830 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 19:22:28.289269 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 19:22:28.298829 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 19:22:28.301024 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:22:28.302908 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 19:22:28.317909 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 19:22:28.318794 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 19:22:28.321112 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 19:22:28.329585 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 19:22:28.784553 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:22:28.786056 systemd[1]: Reached target multi-user.target - Multi-User System. 
Feb 13 19:22:28.788562 (kubelet)[1552]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:22:28.788768 systemd[1]: Startup finished in 530ms (kernel) + 4.602s (initrd) + 4.043s (userspace) = 9.176s. Feb 13 19:22:29.204848 kubelet[1552]: E0213 19:22:29.204732 1552 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:22:29.206789 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:22:29.206922 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:22:32.851341 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 19:22:32.852682 systemd[1]: Started sshd@0-10.0.0.135:22-10.0.0.1:54422.service - OpenSSH per-connection server daemon (10.0.0.1:54422). Feb 13 19:22:32.914628 sshd[1565]: Accepted publickey for core from 10.0.0.1 port 54422 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:22:32.918435 sshd-session[1565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:22:32.927082 systemd-logind[1449]: New session 1 of user core. Feb 13 19:22:32.928169 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 19:22:32.938894 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 19:22:32.948265 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 19:22:32.950658 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Feb 13 19:22:32.957119 (systemd)[1569]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 19:22:33.038444 systemd[1569]: Queued start job for default target default.target. Feb 13 19:22:33.049540 systemd[1569]: Created slice app.slice - User Application Slice. Feb 13 19:22:33.049586 systemd[1569]: Reached target paths.target - Paths. Feb 13 19:22:33.049617 systemd[1569]: Reached target timers.target - Timers. Feb 13 19:22:33.050824 systemd[1569]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 19:22:33.061458 systemd[1569]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 19:22:33.061537 systemd[1569]: Reached target sockets.target - Sockets. Feb 13 19:22:33.061550 systemd[1569]: Reached target basic.target - Basic System. Feb 13 19:22:33.061628 systemd[1569]: Reached target default.target - Main User Target. Feb 13 19:22:33.061658 systemd[1569]: Startup finished in 99ms. Feb 13 19:22:33.061918 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 19:22:33.063580 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 19:22:33.128389 systemd[1]: Started sshd@1-10.0.0.135:22-10.0.0.1:54432.service - OpenSSH per-connection server daemon (10.0.0.1:54432). Feb 13 19:22:33.167365 sshd[1580]: Accepted publickey for core from 10.0.0.1 port 54432 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:22:33.168551 sshd-session[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:22:33.172484 systemd-logind[1449]: New session 2 of user core. Feb 13 19:22:33.180786 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 19:22:33.232514 sshd[1582]: Connection closed by 10.0.0.1 port 54432 Feb 13 19:22:33.232837 sshd-session[1580]: pam_unix(sshd:session): session closed for user core Feb 13 19:22:33.247005 systemd[1]: sshd@1-10.0.0.135:22-10.0.0.1:54432.service: Deactivated successfully. 
Feb 13 19:22:33.248411 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 19:22:33.250731 systemd-logind[1449]: Session 2 logged out. Waiting for processes to exit. Feb 13 19:22:33.251980 systemd[1]: Started sshd@2-10.0.0.135:22-10.0.0.1:54440.service - OpenSSH per-connection server daemon (10.0.0.1:54440). Feb 13 19:22:33.253967 systemd-logind[1449]: Removed session 2. Feb 13 19:22:33.291261 sshd[1587]: Accepted publickey for core from 10.0.0.1 port 54440 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:22:33.292569 sshd-session[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:22:33.296344 systemd-logind[1449]: New session 3 of user core. Feb 13 19:22:33.312776 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 19:22:33.361652 sshd[1589]: Connection closed by 10.0.0.1 port 54440 Feb 13 19:22:33.361583 sshd-session[1587]: pam_unix(sshd:session): session closed for user core Feb 13 19:22:33.371128 systemd[1]: sshd@2-10.0.0.135:22-10.0.0.1:54440.service: Deactivated successfully. Feb 13 19:22:33.373008 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 19:22:33.375798 systemd-logind[1449]: Session 3 logged out. Waiting for processes to exit. Feb 13 19:22:33.377293 systemd[1]: Started sshd@3-10.0.0.135:22-10.0.0.1:54446.service - OpenSSH per-connection server daemon (10.0.0.1:54446). Feb 13 19:22:33.378207 systemd-logind[1449]: Removed session 3. Feb 13 19:22:33.419784 sshd[1594]: Accepted publickey for core from 10.0.0.1 port 54446 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:22:33.421041 sshd-session[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:22:33.426112 systemd-logind[1449]: New session 4 of user core. Feb 13 19:22:33.434776 systemd[1]: Started session-4.scope - Session 4 of User core. 
Feb 13 19:22:33.489985 sshd[1596]: Connection closed by 10.0.0.1 port 54446 Feb 13 19:22:33.490016 sshd-session[1594]: pam_unix(sshd:session): session closed for user core Feb 13 19:22:33.497032 systemd[1]: sshd@3-10.0.0.135:22-10.0.0.1:54446.service: Deactivated successfully. Feb 13 19:22:33.498711 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 19:22:33.500202 systemd-logind[1449]: Session 4 logged out. Waiting for processes to exit. Feb 13 19:22:33.501531 systemd[1]: Started sshd@4-10.0.0.135:22-10.0.0.1:54456.service - OpenSSH per-connection server daemon (10.0.0.1:54456). Feb 13 19:22:33.502480 systemd-logind[1449]: Removed session 4. Feb 13 19:22:33.543279 sshd[1601]: Accepted publickey for core from 10.0.0.1 port 54456 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:22:33.544576 sshd-session[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:22:33.548573 systemd-logind[1449]: New session 5 of user core. Feb 13 19:22:33.558768 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 19:22:33.625682 sudo[1604]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 19:22:33.625953 sudo[1604]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:22:33.639518 sudo[1604]: pam_unix(sudo:session): session closed for user root Feb 13 19:22:33.641083 sshd[1603]: Connection closed by 10.0.0.1 port 54456 Feb 13 19:22:33.641766 sshd-session[1601]: pam_unix(sshd:session): session closed for user core Feb 13 19:22:33.654251 systemd[1]: sshd@4-10.0.0.135:22-10.0.0.1:54456.service: Deactivated successfully. Feb 13 19:22:33.656102 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 19:22:33.657673 systemd-logind[1449]: Session 5 logged out. Waiting for processes to exit. Feb 13 19:22:33.659785 systemd[1]: Started sshd@5-10.0.0.135:22-10.0.0.1:54460.service - OpenSSH per-connection server daemon (10.0.0.1:54460). 
Feb 13 19:22:33.660748 systemd-logind[1449]: Removed session 5. Feb 13 19:22:33.700342 sshd[1609]: Accepted publickey for core from 10.0.0.1 port 54460 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:22:33.701639 sshd-session[1609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:22:33.705650 systemd-logind[1449]: New session 6 of user core. Feb 13 19:22:33.712759 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 19:22:33.763087 sudo[1613]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 19:22:33.763343 sudo[1613]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:22:33.766674 sudo[1613]: pam_unix(sudo:session): session closed for user root Feb 13 19:22:33.771293 sudo[1612]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 19:22:33.771569 sudo[1612]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:22:33.790891 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:22:33.814396 augenrules[1635]: No rules Feb 13 19:22:33.815714 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:22:33.816710 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:22:33.817891 sudo[1612]: pam_unix(sudo:session): session closed for user root Feb 13 19:22:33.819283 sshd[1611]: Connection closed by 10.0.0.1 port 54460 Feb 13 19:22:33.819664 sshd-session[1609]: pam_unix(sshd:session): session closed for user core Feb 13 19:22:33.829096 systemd[1]: sshd@5-10.0.0.135:22-10.0.0.1:54460.service: Deactivated successfully. Feb 13 19:22:33.830708 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 19:22:33.832001 systemd-logind[1449]: Session 6 logged out. Waiting for processes to exit. 
Feb 13 19:22:33.849993 systemd[1]: Started sshd@6-10.0.0.135:22-10.0.0.1:54464.service - OpenSSH per-connection server daemon (10.0.0.1:54464). Feb 13 19:22:33.850924 systemd-logind[1449]: Removed session 6. Feb 13 19:22:33.885320 sshd[1643]: Accepted publickey for core from 10.0.0.1 port 54464 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:22:33.886620 sshd-session[1643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:22:33.890662 systemd-logind[1449]: New session 7 of user core. Feb 13 19:22:33.897777 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 19:22:33.950340 sudo[1646]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 19:22:33.951251 sudo[1646]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:22:34.272874 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 19:22:34.272944 (dockerd)[1667]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 19:22:34.518630 dockerd[1667]: time="2025-02-13T19:22:34.518279383Z" level=info msg="Starting up" Feb 13 19:22:34.778189 dockerd[1667]: time="2025-02-13T19:22:34.778123503Z" level=info msg="Loading containers: start." Feb 13 19:22:34.929618 kernel: Initializing XFRM netlink socket Feb 13 19:22:34.996526 systemd-networkd[1387]: docker0: Link UP Feb 13 19:22:35.031091 dockerd[1667]: time="2025-02-13T19:22:35.030965983Z" level=info msg="Loading containers: done." Feb 13 19:22:35.047721 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3355192366-merged.mount: Deactivated successfully. 
Feb 13 19:22:35.049642 dockerd[1667]: time="2025-02-13T19:22:35.049372623Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 19:22:35.049642 dockerd[1667]: time="2025-02-13T19:22:35.049473983Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Feb 13 19:22:35.049642 dockerd[1667]: time="2025-02-13T19:22:35.049587663Z" level=info msg="Daemon has completed initialization" Feb 13 19:22:35.081329 dockerd[1667]: time="2025-02-13T19:22:35.081267903Z" level=info msg="API listen on /run/docker.sock" Feb 13 19:22:35.081449 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 19:22:35.597617 containerd[1472]: time="2025-02-13T19:22:35.597563903Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\"" Feb 13 19:22:36.209856 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount670373227.mount: Deactivated successfully. 
Feb 13 19:22:37.813378 containerd[1472]: time="2025-02-13T19:22:37.813316183Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:22:37.813880 containerd[1472]: time="2025-02-13T19:22:37.813831943Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.6: active requests=0, bytes read=25620377" Feb 13 19:22:37.814615 containerd[1472]: time="2025-02-13T19:22:37.814343383Z" level=info msg="ImageCreate event name:\"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:22:37.817258 containerd[1472]: time="2025-02-13T19:22:37.817228103Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:22:37.818708 containerd[1472]: time="2025-02-13T19:22:37.818655623Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.6\" with image id \"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\", size \"25617175\" in 2.22103108s" Feb 13 19:22:37.818708 containerd[1472]: time="2025-02-13T19:22:37.818691423Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\" returns image reference \"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\"" Feb 13 19:22:37.819356 containerd[1472]: time="2025-02-13T19:22:37.819286023Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\"" Feb 13 19:22:39.100061 containerd[1472]: time="2025-02-13T19:22:39.100008023Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.6\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:22:39.100654 containerd[1472]: time="2025-02-13T19:22:39.100612783Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.6: active requests=0, bytes read=22471775" Feb 13 19:22:39.101326 containerd[1472]: time="2025-02-13T19:22:39.101277023Z" level=info msg="ImageCreate event name:\"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:22:39.104482 containerd[1472]: time="2025-02-13T19:22:39.104436543Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:22:39.105831 containerd[1472]: time="2025-02-13T19:22:39.105695983Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.6\" with image id \"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\", size \"23875502\" in 1.28637488s" Feb 13 19:22:39.105831 containerd[1472]: time="2025-02-13T19:22:39.105727823Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\" returns image reference \"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\"" Feb 13 19:22:39.106203 containerd[1472]: time="2025-02-13T19:22:39.106166063Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\"" Feb 13 19:22:39.457315 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 19:22:39.467834 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:22:39.558059 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 19:22:39.562685 (kubelet)[1932]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:22:39.600480 kubelet[1932]: E0213 19:22:39.600417 1932 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:22:39.603402 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:22:39.603563 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:22:40.819757 containerd[1472]: time="2025-02-13T19:22:40.819697543Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:22:40.821467 containerd[1472]: time="2025-02-13T19:22:40.821408023Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.6: active requests=0, bytes read=17024542" Feb 13 19:22:40.822338 containerd[1472]: time="2025-02-13T19:22:40.822304783Z" level=info msg="ImageCreate event name:\"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:22:40.825921 containerd[1472]: time="2025-02-13T19:22:40.825882943Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:22:40.826861 containerd[1472]: time="2025-02-13T19:22:40.826821703Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.6\" with image id \"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.6\", repo 
digest \"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\", size \"18428287\" in 1.72062364s" Feb 13 19:22:40.826861 containerd[1472]: time="2025-02-13T19:22:40.826859063Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\" returns image reference \"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\"" Feb 13 19:22:40.828015 containerd[1472]: time="2025-02-13T19:22:40.827847463Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\"" Feb 13 19:22:41.762203 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1591631637.mount: Deactivated successfully. Feb 13 19:22:41.972840 containerd[1472]: time="2025-02-13T19:22:41.972672783Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:22:41.973764 containerd[1472]: time="2025-02-13T19:22:41.973503703Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.6: active requests=0, bytes read=26769258" Feb 13 19:22:41.974361 containerd[1472]: time="2025-02-13T19:22:41.974324143Z" level=info msg="ImageCreate event name:\"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:22:41.976404 containerd[1472]: time="2025-02-13T19:22:41.976348743Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:22:41.977061 containerd[1472]: time="2025-02-13T19:22:41.976982863Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.6\" with image id \"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\", repo tag \"registry.k8s.io/kube-proxy:v1.31.6\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\", size \"26768275\" in 1.14910564s" Feb 13 19:22:41.977061 containerd[1472]: time="2025-02-13T19:22:41.977013023Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\" returns image reference \"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\"" Feb 13 19:22:41.977823 containerd[1472]: time="2025-02-13T19:22:41.977782023Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 19:22:42.610764 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2454708486.mount: Deactivated successfully. Feb 13 19:22:43.289794 containerd[1472]: time="2025-02-13T19:22:43.289738183Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:22:43.290285 containerd[1472]: time="2025-02-13T19:22:43.290235743Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Feb 13 19:22:43.291052 containerd[1472]: time="2025-02-13T19:22:43.291022783Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:22:43.294635 containerd[1472]: time="2025-02-13T19:22:43.294467903Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:22:43.295234 containerd[1472]: time="2025-02-13T19:22:43.295205343Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.31739364s" Feb 13 19:22:43.295279 containerd[1472]: time="2025-02-13T19:22:43.295234303Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Feb 13 19:22:43.295906 containerd[1472]: time="2025-02-13T19:22:43.295688303Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Feb 13 19:22:43.755630 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount670033575.mount: Deactivated successfully. Feb 13 19:22:43.759372 containerd[1472]: time="2025-02-13T19:22:43.759322863Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:22:43.760489 containerd[1472]: time="2025-02-13T19:22:43.760289983Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Feb 13 19:22:43.761103 containerd[1472]: time="2025-02-13T19:22:43.761073743Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:22:43.763457 containerd[1472]: time="2025-02-13T19:22:43.763417663Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:22:43.764737 containerd[1472]: time="2025-02-13T19:22:43.764646063Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 468.922ms" Feb 13 
19:22:43.764737 containerd[1472]: time="2025-02-13T19:22:43.764674783Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Feb 13 19:22:43.765372 containerd[1472]: time="2025-02-13T19:22:43.765151423Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Feb 13 19:22:44.331650 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1058573231.mount: Deactivated successfully. Feb 13 19:22:46.887116 containerd[1472]: time="2025-02-13T19:22:46.887055623Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:22:46.893345 containerd[1472]: time="2025-02-13T19:22:46.893271183Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406427" Feb 13 19:22:46.932848 containerd[1472]: time="2025-02-13T19:22:46.932807903Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:22:46.937312 containerd[1472]: time="2025-02-13T19:22:46.937249023Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:22:46.938715 containerd[1472]: time="2025-02-13T19:22:46.938578463Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 3.17339708s" Feb 13 19:22:46.938715 containerd[1472]: time="2025-02-13T19:22:46.938621463Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image 
reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Feb 13 19:22:49.853956 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 19:22:49.868003 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:22:50.004532 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:22:50.007914 (kubelet)[2085]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:22:50.044876 kubelet[2085]: E0213 19:22:50.044820 2085 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:22:50.047263 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:22:50.047421 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:22:52.274176 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:22:52.288095 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:22:52.311576 systemd[1]: Reloading requested from client PID 2102 ('systemctl') (unit session-7.scope)... Feb 13 19:22:52.311612 systemd[1]: Reloading... Feb 13 19:22:52.379676 zram_generator::config[2141]: No configuration found. Feb 13 19:22:52.627201 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:22:52.679186 systemd[1]: Reloading finished in 367 ms. Feb 13 19:22:52.721486 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Feb 13 19:22:52.724402 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:22:52.724604 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:22:52.726947 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:22:52.820803 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:22:52.824934 (kubelet)[2188]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:22:52.858684 kubelet[2188]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:22:52.858684 kubelet[2188]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 19:22:52.858684 kubelet[2188]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 19:22:52.859031 kubelet[2188]: I0213 19:22:52.858849 2188 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:22:53.473179 kubelet[2188]: I0213 19:22:53.473123 2188 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 19:22:53.473179 kubelet[2188]: I0213 19:22:53.473161 2188 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:22:53.473440 kubelet[2188]: I0213 19:22:53.473408 2188 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 19:22:53.522947 kubelet[2188]: E0213 19:22:53.522906 2188 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.135:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.135:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:22:53.524064 kubelet[2188]: I0213 19:22:53.523921 2188 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:22:53.529630 kubelet[2188]: E0213 19:22:53.529571 2188 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 19:22:53.529630 kubelet[2188]: I0213 19:22:53.529630 2188 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 19:22:53.532954 kubelet[2188]: I0213 19:22:53.532911 2188 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 19:22:53.533712 kubelet[2188]: I0213 19:22:53.533690 2188 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 19:22:53.533872 kubelet[2188]: I0213 19:22:53.533839 2188 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:22:53.534031 kubelet[2188]: I0213 19:22:53.533873 2188 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} Feb 13 19:22:53.534172 kubelet[2188]: I0213 19:22:53.534162 2188 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:22:53.534198 kubelet[2188]: I0213 19:22:53.534174 2188 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 19:22:53.534367 kubelet[2188]: I0213 19:22:53.534348 2188 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:22:53.536841 kubelet[2188]: I0213 19:22:53.536313 2188 kubelet.go:408] "Attempting to sync node with API server" Feb 13 19:22:53.536841 kubelet[2188]: I0213 19:22:53.536341 2188 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:22:53.536841 kubelet[2188]: I0213 19:22:53.536431 2188 kubelet.go:314] "Adding apiserver pod source" Feb 13 19:22:53.536841 kubelet[2188]: I0213 19:22:53.536441 2188 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:22:53.539115 kubelet[2188]: W0213 19:22:53.538933 2188 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.135:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused Feb 13 19:22:53.539115 kubelet[2188]: E0213 19:22:53.539001 2188 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.135:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.135:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:22:53.539115 kubelet[2188]: W0213 19:22:53.539007 2188 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.135:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused Feb 13 19:22:53.539115 kubelet[2188]: 
E0213 19:22:53.539058 2188 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.135:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.135:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:22:53.541518 kubelet[2188]: I0213 19:22:53.541464 2188 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:22:53.546228 kubelet[2188]: I0213 19:22:53.546194 2188 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:22:53.546853 kubelet[2188]: W0213 19:22:53.546824 2188 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 19:22:53.547508 kubelet[2188]: I0213 19:22:53.547487 2188 server.go:1269] "Started kubelet" Feb 13 19:22:53.548336 kubelet[2188]: I0213 19:22:53.547946 2188 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:22:53.548336 kubelet[2188]: I0213 19:22:53.548127 2188 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:22:53.548656 kubelet[2188]: I0213 19:22:53.548409 2188 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:22:53.549889 kubelet[2188]: I0213 19:22:53.549843 2188 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:22:53.549889 kubelet[2188]: I0213 19:22:53.549852 2188 server.go:460] "Adding debug handlers to kubelet server" Feb 13 19:22:53.550851 kubelet[2188]: I0213 19:22:53.550437 2188 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 19:22:53.551251 kubelet[2188]: E0213 
19:22:53.551224 2188 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:22:53.551416 kubelet[2188]: I0213 19:22:53.551402 2188 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 19:22:53.551591 kubelet[2188]: I0213 19:22:53.551571 2188 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 19:22:53.551666 kubelet[2188]: I0213 19:22:53.551650 2188 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:22:53.552032 kubelet[2188]: W0213 19:22:53.551930 2188 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.135:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused Feb 13 19:22:53.552032 kubelet[2188]: E0213 19:22:53.551987 2188 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.135:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.135:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:22:53.552093 kubelet[2188]: E0213 19:22:53.552044 2188 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.135:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.135:6443: connect: connection refused" interval="200ms" Feb 13 19:22:53.552216 kubelet[2188]: E0213 19:22:53.550760 2188 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.135:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.135:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823dae8941edc47 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 19:22:53.547461703 +0000 UTC m=+0.719629441,LastTimestamp:2025-02-13 19:22:53.547461703 +0000 UTC m=+0.719629441,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 19:22:53.552524 kubelet[2188]: I0213 19:22:53.552484 2188 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:22:53.552802 kubelet[2188]: I0213 19:22:53.552549 2188 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:22:53.552802 kubelet[2188]: E0213 19:22:53.552623 2188 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:22:53.554561 kubelet[2188]: I0213 19:22:53.553955 2188 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:22:53.563537 kubelet[2188]: I0213 19:22:53.563481 2188 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:22:53.564672 kubelet[2188]: I0213 19:22:53.564436 2188 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 19:22:53.564672 kubelet[2188]: I0213 19:22:53.564458 2188 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:22:53.564672 kubelet[2188]: I0213 19:22:53.564475 2188 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 19:22:53.564672 kubelet[2188]: E0213 19:22:53.564510 2188 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:22:53.567227 kubelet[2188]: I0213 19:22:53.567152 2188 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:22:53.567227 kubelet[2188]: I0213 19:22:53.567169 2188 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:22:53.567227 kubelet[2188]: I0213 19:22:53.567187 2188 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:22:53.568523 kubelet[2188]: W0213 19:22:53.568464 2188 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.135:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused Feb 13 19:22:53.568610 kubelet[2188]: E0213 19:22:53.568529 2188 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.135:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.135:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:22:53.634682 kubelet[2188]: I0213 19:22:53.634643 2188 policy_none.go:49] "None policy: Start" Feb 13 19:22:53.635480 kubelet[2188]: I0213 19:22:53.635462 2188 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:22:53.635528 kubelet[2188]: I0213 19:22:53.635516 2188 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:22:53.641645 systemd[1]: Created slice kubepods.slice - libcontainer 
container kubepods.slice. Feb 13 19:22:53.652231 kubelet[2188]: E0213 19:22:53.652207 2188 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:22:53.660406 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 19:22:53.663211 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 19:22:53.664860 kubelet[2188]: E0213 19:22:53.664832 2188 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 19:22:53.676392 kubelet[2188]: I0213 19:22:53.676356 2188 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:22:53.677013 kubelet[2188]: I0213 19:22:53.676568 2188 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 19:22:53.677013 kubelet[2188]: I0213 19:22:53.676587 2188 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:22:53.677013 kubelet[2188]: I0213 19:22:53.676879 2188 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:22:53.677988 kubelet[2188]: E0213 19:22:53.677967 2188 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 13 19:22:53.752783 kubelet[2188]: E0213 19:22:53.752662 2188 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.135:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.135:6443: connect: connection refused" interval="400ms" Feb 13 19:22:53.778882 kubelet[2188]: I0213 19:22:53.778854 2188 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Feb 13 19:22:53.779444 kubelet[2188]: E0213 19:22:53.779414 2188 kubelet_node_status.go:95] "Unable 
to register node with API server" err="Post \"https://10.0.0.135:6443/api/v1/nodes\": dial tcp 10.0.0.135:6443: connect: connection refused" node="localhost" Feb 13 19:22:53.872660 systemd[1]: Created slice kubepods-burstable-pod98eb2295280bc6da80e83f7636be329c.slice - libcontainer container kubepods-burstable-pod98eb2295280bc6da80e83f7636be329c.slice. Feb 13 19:22:53.891619 systemd[1]: Created slice kubepods-burstable-pod04cca2c455deeb5da380812dcab224d8.slice - libcontainer container kubepods-burstable-pod04cca2c455deeb5da380812dcab224d8.slice. Feb 13 19:22:53.903148 systemd[1]: Created slice kubepods-burstable-pod05b6721ecb9d8eba1688c08664f14a96.slice - libcontainer container kubepods-burstable-pod05b6721ecb9d8eba1688c08664f14a96.slice. Feb 13 19:22:53.980908 kubelet[2188]: I0213 19:22:53.980873 2188 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Feb 13 19:22:53.981240 kubelet[2188]: E0213 19:22:53.981194 2188 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.135:6443/api/v1/nodes\": dial tcp 10.0.0.135:6443: connect: connection refused" node="localhost" Feb 13 19:22:54.053184 kubelet[2188]: I0213 19:22:54.053094 2188 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/04cca2c455deeb5da380812dcab224d8-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"04cca2c455deeb5da380812dcab224d8\") " pod="kube-system/kube-scheduler-localhost" Feb 13 19:22:54.053184 kubelet[2188]: I0213 19:22:54.053129 2188 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/05b6721ecb9d8eba1688c08664f14a96-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"05b6721ecb9d8eba1688c08664f14a96\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:22:54.053184 kubelet[2188]: I0213 19:22:54.053149 2188 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:22:54.053184 kubelet[2188]: I0213 19:22:54.053165 2188 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:22:54.053318 kubelet[2188]: I0213 19:22:54.053193 2188 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:22:54.053318 kubelet[2188]: I0213 19:22:54.053208 2188 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:22:54.053318 kubelet[2188]: I0213 19:22:54.053223 2188 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:22:54.053318 
kubelet[2188]: I0213 19:22:54.053238 2188 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/05b6721ecb9d8eba1688c08664f14a96-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"05b6721ecb9d8eba1688c08664f14a96\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:22:54.053318 kubelet[2188]: I0213 19:22:54.053253 2188 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/05b6721ecb9d8eba1688c08664f14a96-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"05b6721ecb9d8eba1688c08664f14a96\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:22:54.153775 kubelet[2188]: E0213 19:22:54.153717 2188 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.135:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.135:6443: connect: connection refused" interval="800ms" Feb 13 19:22:54.190104 kubelet[2188]: E0213 19:22:54.190071 2188 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:22:54.190857 containerd[1472]: time="2025-02-13T19:22:54.190819943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:98eb2295280bc6da80e83f7636be329c,Namespace:kube-system,Attempt:0,}" Feb 13 19:22:54.194005 kubelet[2188]: E0213 19:22:54.193979 2188 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:22:54.194439 containerd[1472]: time="2025-02-13T19:22:54.194400983Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:04cca2c455deeb5da380812dcab224d8,Namespace:kube-system,Attempt:0,}" Feb 13 19:22:54.205855 kubelet[2188]: E0213 19:22:54.205830 2188 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:22:54.206296 containerd[1472]: time="2025-02-13T19:22:54.206262503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:05b6721ecb9d8eba1688c08664f14a96,Namespace:kube-system,Attempt:0,}" Feb 13 19:22:54.383412 kubelet[2188]: I0213 19:22:54.383291 2188 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Feb 13 19:22:54.383892 kubelet[2188]: E0213 19:22:54.383633 2188 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.135:6443/api/v1/nodes\": dial tcp 10.0.0.135:6443: connect: connection refused" node="localhost" Feb 13 19:22:54.558817 kubelet[2188]: W0213 19:22:54.558752 2188 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.135:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused Feb 13 19:22:54.558952 kubelet[2188]: E0213 19:22:54.558821 2188 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.135:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.135:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:22:54.668031 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4175366404.mount: Deactivated successfully. 
Feb 13 19:22:54.672711 containerd[1472]: time="2025-02-13T19:22:54.672583623Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:22:54.674196 containerd[1472]: time="2025-02-13T19:22:54.674134623Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:22:54.675941 containerd[1472]: time="2025-02-13T19:22:54.675881703Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Feb 13 19:22:54.676648 containerd[1472]: time="2025-02-13T19:22:54.676542223Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:22:54.678052 containerd[1472]: time="2025-02-13T19:22:54.677991383Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:22:54.679214 containerd[1472]: time="2025-02-13T19:22:54.679129423Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:22:54.680802 containerd[1472]: time="2025-02-13T19:22:54.680775423Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:22:54.683041 containerd[1472]: time="2025-02-13T19:22:54.683014943Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 492.11468ms" Feb 13 19:22:54.683609 containerd[1472]: time="2025-02-13T19:22:54.683568143Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:22:54.685950 containerd[1472]: time="2025-02-13T19:22:54.685146463Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 490.68024ms" Feb 13 19:22:54.692052 containerd[1472]: time="2025-02-13T19:22:54.692018143Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 485.68256ms" Feb 13 19:22:54.839656 containerd[1472]: time="2025-02-13T19:22:54.839517823Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:22:54.839656 containerd[1472]: time="2025-02-13T19:22:54.839587823Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:22:54.839656 containerd[1472]: time="2025-02-13T19:22:54.839614103Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:22:54.840020 containerd[1472]: time="2025-02-13T19:22:54.839571503Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:22:54.840020 containerd[1472]: time="2025-02-13T19:22:54.839721863Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:22:54.840182 containerd[1472]: time="2025-02-13T19:22:54.840022903Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:22:54.840557 containerd[1472]: time="2025-02-13T19:22:54.840485823Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:22:54.840557 containerd[1472]: time="2025-02-13T19:22:54.840251903Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:22:54.840557 containerd[1472]: time="2025-02-13T19:22:54.839324463Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:22:54.840557 containerd[1472]: time="2025-02-13T19:22:54.840275183Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:22:54.840557 containerd[1472]: time="2025-02-13T19:22:54.840309743Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:22:54.840557 containerd[1472]: time="2025-02-13T19:22:54.840466303Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:22:54.865781 systemd[1]: Started cri-containerd-004d78cde59c12557bd66c927d359da64c8f2a9e6755d6184e1def9d5f7bb213.scope - libcontainer container 004d78cde59c12557bd66c927d359da64c8f2a9e6755d6184e1def9d5f7bb213. Feb 13 19:22:54.867008 systemd[1]: Started cri-containerd-23966ec267cb33053077efa393d034c235deb29556b6018408b96ac4acf13ccb.scope - libcontainer container 23966ec267cb33053077efa393d034c235deb29556b6018408b96ac4acf13ccb. Feb 13 19:22:54.869041 systemd[1]: Started cri-containerd-d05cb6b765288d08f2172f73a550a64024b2ae2240b30473cc23d313b0ba61da.scope - libcontainer container d05cb6b765288d08f2172f73a550a64024b2ae2240b30473cc23d313b0ba61da. Feb 13 19:22:54.898991 containerd[1472]: time="2025-02-13T19:22:54.898892143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:04cca2c455deeb5da380812dcab224d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"004d78cde59c12557bd66c927d359da64c8f2a9e6755d6184e1def9d5f7bb213\"" Feb 13 19:22:54.899858 containerd[1472]: time="2025-02-13T19:22:54.899832463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:05b6721ecb9d8eba1688c08664f14a96,Namespace:kube-system,Attempt:0,} returns sandbox id \"23966ec267cb33053077efa393d034c235deb29556b6018408b96ac4acf13ccb\"" Feb 13 19:22:54.900878 kubelet[2188]: E0213 19:22:54.900735 2188 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:22:54.900878 kubelet[2188]: E0213 19:22:54.900766 2188 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:22:54.902202 containerd[1472]: time="2025-02-13T19:22:54.902168063Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:98eb2295280bc6da80e83f7636be329c,Namespace:kube-system,Attempt:0,} returns sandbox id \"d05cb6b765288d08f2172f73a550a64024b2ae2240b30473cc23d313b0ba61da\"" Feb 13 19:22:54.902852 kubelet[2188]: E0213 19:22:54.902792 2188 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:22:54.903552 containerd[1472]: time="2025-02-13T19:22:54.903420103Z" level=info msg="CreateContainer within sandbox \"004d78cde59c12557bd66c927d359da64c8f2a9e6755d6184e1def9d5f7bb213\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 19:22:54.903790 containerd[1472]: time="2025-02-13T19:22:54.903502383Z" level=info msg="CreateContainer within sandbox \"23966ec267cb33053077efa393d034c235deb29556b6018408b96ac4acf13ccb\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 19:22:54.904705 containerd[1472]: time="2025-02-13T19:22:54.904674863Z" level=info msg="CreateContainer within sandbox \"d05cb6b765288d08f2172f73a550a64024b2ae2240b30473cc23d313b0ba61da\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 19:22:54.920691 kubelet[2188]: W0213 19:22:54.920562 2188 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.135:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused Feb 13 19:22:54.920691 kubelet[2188]: E0213 19:22:54.920642 2188 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.135:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.135:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:22:54.922132 containerd[1472]: 
time="2025-02-13T19:22:54.922089343Z" level=info msg="CreateContainer within sandbox \"004d78cde59c12557bd66c927d359da64c8f2a9e6755d6184e1def9d5f7bb213\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"306ec800223aaa6ebab49c72ca4b6652019f14f06052cceeb111bdd2436a2519\"" Feb 13 19:22:54.922866 containerd[1472]: time="2025-02-13T19:22:54.922840783Z" level=info msg="StartContainer for \"306ec800223aaa6ebab49c72ca4b6652019f14f06052cceeb111bdd2436a2519\"" Feb 13 19:22:54.923776 containerd[1472]: time="2025-02-13T19:22:54.923644143Z" level=info msg="CreateContainer within sandbox \"d05cb6b765288d08f2172f73a550a64024b2ae2240b30473cc23d313b0ba61da\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a9dbc78cb6b6796562613c6ea720dda10f1ba461e566eb96af160bd0cc55e62d\"" Feb 13 19:22:54.924091 containerd[1472]: time="2025-02-13T19:22:54.924037863Z" level=info msg="StartContainer for \"a9dbc78cb6b6796562613c6ea720dda10f1ba461e566eb96af160bd0cc55e62d\"" Feb 13 19:22:54.924412 containerd[1472]: time="2025-02-13T19:22:54.924385183Z" level=info msg="CreateContainer within sandbox \"23966ec267cb33053077efa393d034c235deb29556b6018408b96ac4acf13ccb\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5bd1acc4f216eef677743b32891a19bd1f396be402f509479ac98baf0120be17\"" Feb 13 19:22:54.924824 containerd[1472]: time="2025-02-13T19:22:54.924801783Z" level=info msg="StartContainer for \"5bd1acc4f216eef677743b32891a19bd1f396be402f509479ac98baf0120be17\"" Feb 13 19:22:54.949769 systemd[1]: Started cri-containerd-a9dbc78cb6b6796562613c6ea720dda10f1ba461e566eb96af160bd0cc55e62d.scope - libcontainer container a9dbc78cb6b6796562613c6ea720dda10f1ba461e566eb96af160bd0cc55e62d. 
Feb 13 19:22:54.954964 kubelet[2188]: E0213 19:22:54.954917 2188 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.135:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.135:6443: connect: connection refused" interval="1.6s" Feb 13 19:22:54.963843 systemd[1]: Started cri-containerd-306ec800223aaa6ebab49c72ca4b6652019f14f06052cceeb111bdd2436a2519.scope - libcontainer container 306ec800223aaa6ebab49c72ca4b6652019f14f06052cceeb111bdd2436a2519. Feb 13 19:22:54.964955 systemd[1]: Started cri-containerd-5bd1acc4f216eef677743b32891a19bd1f396be402f509479ac98baf0120be17.scope - libcontainer container 5bd1acc4f216eef677743b32891a19bd1f396be402f509479ac98baf0120be17. Feb 13 19:22:55.001789 containerd[1472]: time="2025-02-13T19:22:55.001743823Z" level=info msg="StartContainer for \"a9dbc78cb6b6796562613c6ea720dda10f1ba461e566eb96af160bd0cc55e62d\" returns successfully" Feb 13 19:22:55.016527 containerd[1472]: time="2025-02-13T19:22:55.016486423Z" level=info msg="StartContainer for \"5bd1acc4f216eef677743b32891a19bd1f396be402f509479ac98baf0120be17\" returns successfully" Feb 13 19:22:55.016708 containerd[1472]: time="2025-02-13T19:22:55.016583863Z" level=info msg="StartContainer for \"306ec800223aaa6ebab49c72ca4b6652019f14f06052cceeb111bdd2436a2519\" returns successfully" Feb 13 19:22:55.063272 kubelet[2188]: W0213 19:22:55.063176 2188 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.135:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused Feb 13 19:22:55.063272 kubelet[2188]: E0213 19:22:55.063243 2188 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.0.0.135:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.135:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:22:55.091135 kubelet[2188]: W0213 19:22:55.090978 2188 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.135:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.135:6443: connect: connection refused Feb 13 19:22:55.091135 kubelet[2188]: E0213 19:22:55.091099 2188 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.135:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.135:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:22:55.184776 kubelet[2188]: I0213 19:22:55.184674 2188 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Feb 13 19:22:55.573801 kubelet[2188]: E0213 19:22:55.573704 2188 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:22:55.575661 kubelet[2188]: E0213 19:22:55.575640 2188 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:22:55.576745 kubelet[2188]: E0213 19:22:55.576725 2188 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:22:56.581077 kubelet[2188]: E0213 19:22:56.581039 2188 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 
19:22:56.614382 kubelet[2188]: E0213 19:22:56.614336 2188 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Feb 13 19:22:56.707018 kubelet[2188]: I0213 19:22:56.706981 2188 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Feb 13 19:22:56.707018 kubelet[2188]: E0213 19:22:56.707023 2188 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Feb 13 19:22:56.717870 kubelet[2188]: E0213 19:22:56.717836 2188 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:22:56.817985 kubelet[2188]: E0213 19:22:56.817944 2188 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:22:56.901214 kubelet[2188]: E0213 19:22:56.900871 2188 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:22:56.918895 kubelet[2188]: E0213 19:22:56.918870 2188 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:22:57.019498 kubelet[2188]: E0213 19:22:57.019457 2188 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:22:57.120530 kubelet[2188]: E0213 19:22:57.120475 2188 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:22:57.221501 kubelet[2188]: E0213 19:22:57.221460 2188 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:22:57.322490 kubelet[2188]: E0213 19:22:57.322449 2188 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 
19:22:57.422990 kubelet[2188]: E0213 19:22:57.422948 2188 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:22:57.523683 kubelet[2188]: E0213 19:22:57.523567 2188 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:22:57.623966 kubelet[2188]: E0213 19:22:57.623932 2188 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:22:58.539581 kubelet[2188]: I0213 19:22:58.539515 2188 apiserver.go:52] "Watching apiserver" Feb 13 19:22:58.552651 kubelet[2188]: I0213 19:22:58.552616 2188 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 19:22:58.792004 systemd[1]: Reloading requested from client PID 2464 ('systemctl') (unit session-7.scope)... Feb 13 19:22:58.792021 systemd[1]: Reloading... Feb 13 19:22:58.855630 zram_generator::config[2506]: No configuration found. Feb 13 19:22:58.935946 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:22:59.000792 systemd[1]: Reloading finished in 208 ms. Feb 13 19:22:59.036472 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:22:59.046160 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:22:59.046390 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:22:59.046431 systemd[1]: kubelet.service: Consumed 1.062s CPU time, 117.1M memory peak, 0B memory swap peak. Feb 13 19:22:59.059057 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:22:59.155256 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 19:22:59.158217 (kubelet)[2544]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:22:59.197399 kubelet[2544]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:22:59.197399 kubelet[2544]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 19:22:59.197399 kubelet[2544]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:22:59.197813 kubelet[2544]: I0213 19:22:59.197457 2544 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:22:59.210425 kubelet[2544]: I0213 19:22:59.210367 2544 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 19:22:59.210425 kubelet[2544]: I0213 19:22:59.210397 2544 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:22:59.210990 kubelet[2544]: I0213 19:22:59.210807 2544 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 19:22:59.214782 kubelet[2544]: I0213 19:22:59.214755 2544 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Feb 13 19:22:59.218328 kubelet[2544]: I0213 19:22:59.218295 2544 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:22:59.222410 kubelet[2544]: E0213 19:22:59.222385 2544 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 19:22:59.223013 kubelet[2544]: I0213 19:22:59.222488 2544 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 19:22:59.224665 kubelet[2544]: I0213 19:22:59.224647 2544 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 19:22:59.224784 kubelet[2544]: I0213 19:22:59.224772 2544 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 19:22:59.224890 kubelet[2544]: I0213 19:22:59.224867 2544 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:22:59.225047 kubelet[2544]: I0213 19:22:59.224896 2544 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 19:22:59.225119 kubelet[2544]: I0213 19:22:59.225058 2544 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:22:59.225119 kubelet[2544]: I0213 19:22:59.225066 2544 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 19:22:59.225119 kubelet[2544]: I0213 19:22:59.225099 2544 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:22:59.225199 kubelet[2544]: I0213 19:22:59.225184 2544 kubelet.go:408] "Attempting 
to sync node with API server" Feb 13 19:22:59.225231 kubelet[2544]: I0213 19:22:59.225200 2544 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:22:59.225231 kubelet[2544]: I0213 19:22:59.225222 2544 kubelet.go:314] "Adding apiserver pod source" Feb 13 19:22:59.225272 kubelet[2544]: I0213 19:22:59.225235 2544 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:22:59.227030 kubelet[2544]: I0213 19:22:59.226035 2544 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:22:59.227030 kubelet[2544]: I0213 19:22:59.226692 2544 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:22:59.227177 kubelet[2544]: I0213 19:22:59.227155 2544 server.go:1269] "Started kubelet" Feb 13 19:22:59.227480 kubelet[2544]: I0213 19:22:59.227380 2544 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:22:59.233603 kubelet[2544]: I0213 19:22:59.228433 2544 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:22:59.233603 kubelet[2544]: I0213 19:22:59.228729 2544 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:22:59.233603 kubelet[2544]: I0213 19:22:59.229508 2544 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:22:59.233603 kubelet[2544]: I0213 19:22:59.232676 2544 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 19:22:59.235170 kubelet[2544]: E0213 19:22:59.235138 2544 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:22:59.235241 kubelet[2544]: I0213 19:22:59.235178 2544 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 
19:22:59.235379 kubelet[2544]: I0213 19:22:59.235356 2544 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 19:22:59.235518 kubelet[2544]: I0213 19:22:59.235501 2544 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:22:59.239604 kubelet[2544]: E0213 19:22:59.237250 2544 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:22:59.239604 kubelet[2544]: I0213 19:22:59.237635 2544 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:22:59.239604 kubelet[2544]: I0213 19:22:59.237829 2544 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:22:59.243950 kubelet[2544]: I0213 19:22:59.243913 2544 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:22:59.244796 kubelet[2544]: I0213 19:22:59.244774 2544 server.go:460] "Adding debug handlers to kubelet server" Feb 13 19:22:59.244902 kubelet[2544]: I0213 19:22:59.244868 2544 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 19:22:59.244902 kubelet[2544]: I0213 19:22:59.244898 2544 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:22:59.244963 kubelet[2544]: I0213 19:22:59.244916 2544 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 19:22:59.244990 kubelet[2544]: E0213 19:22:59.244956 2544 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:22:59.252994 kubelet[2544]: I0213 19:22:59.252531 2544 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:22:59.282485 kubelet[2544]: I0213 19:22:59.282455 2544 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:22:59.282695 kubelet[2544]: I0213 19:22:59.282680 2544 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:22:59.282788 kubelet[2544]: I0213 19:22:59.282777 2544 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:22:59.282971 kubelet[2544]: I0213 19:22:59.282954 2544 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 19:22:59.283044 kubelet[2544]: I0213 19:22:59.283022 2544 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 19:22:59.283091 kubelet[2544]: I0213 19:22:59.283083 2544 policy_none.go:49] "None policy: Start" Feb 13 19:22:59.283760 kubelet[2544]: I0213 19:22:59.283742 2544 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:22:59.283890 kubelet[2544]: I0213 19:22:59.283881 2544 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:22:59.284085 kubelet[2544]: I0213 19:22:59.284070 2544 state_mem.go:75] "Updated machine memory state" Feb 13 19:22:59.287630 kubelet[2544]: I0213 19:22:59.287609 2544 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:22:59.288577 kubelet[2544]: I0213 19:22:59.287960 2544 eviction_manager.go:189] 
"Eviction manager: starting control loop" Feb 13 19:22:59.288936 kubelet[2544]: I0213 19:22:59.288891 2544 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:22:59.289692 kubelet[2544]: I0213 19:22:59.289667 2544 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:22:59.393518 kubelet[2544]: I0213 19:22:59.393421 2544 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Feb 13 19:22:59.400389 kubelet[2544]: I0213 19:22:59.400359 2544 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Feb 13 19:22:59.400500 kubelet[2544]: I0213 19:22:59.400443 2544 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Feb 13 19:22:59.436971 kubelet[2544]: I0213 19:22:59.436886 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:22:59.436971 kubelet[2544]: I0213 19:22:59.436934 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:22:59.436971 kubelet[2544]: I0213 19:22:59.436955 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 
19:22:59.436971 kubelet[2544]: I0213 19:22:59.436971 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:22:59.436971 kubelet[2544]: I0213 19:22:59.436988 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:22:59.437236 kubelet[2544]: I0213 19:22:59.437003 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/04cca2c455deeb5da380812dcab224d8-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"04cca2c455deeb5da380812dcab224d8\") " pod="kube-system/kube-scheduler-localhost" Feb 13 19:22:59.437236 kubelet[2544]: I0213 19:22:59.437017 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/05b6721ecb9d8eba1688c08664f14a96-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"05b6721ecb9d8eba1688c08664f14a96\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:22:59.437236 kubelet[2544]: I0213 19:22:59.437034 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/05b6721ecb9d8eba1688c08664f14a96-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"05b6721ecb9d8eba1688c08664f14a96\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:22:59.437236 kubelet[2544]: I0213 
19:22:59.437050 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/05b6721ecb9d8eba1688c08664f14a96-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"05b6721ecb9d8eba1688c08664f14a96\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:22:59.653386 kubelet[2544]: E0213 19:22:59.653201 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:22:59.654250 kubelet[2544]: E0213 19:22:59.654216 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:22:59.654250 kubelet[2544]: E0213 19:22:59.654247 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:23:00.225667 kubelet[2544]: I0213 19:23:00.225623 2544 apiserver.go:52] "Watching apiserver" Feb 13 19:23:00.235705 kubelet[2544]: I0213 19:23:00.235660 2544 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 19:23:00.267190 kubelet[2544]: E0213 19:23:00.267101 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:23:00.268325 kubelet[2544]: E0213 19:23:00.268291 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:23:00.271786 kubelet[2544]: E0213 19:23:00.271677 2544 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" 
pod="kube-system/kube-apiserver-localhost" Feb 13 19:23:00.271920 kubelet[2544]: E0213 19:23:00.271905 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:23:00.287515 kubelet[2544]: I0213 19:23:00.287266 2544 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.287248845 podStartE2EDuration="1.287248845s" podCreationTimestamp="2025-02-13 19:22:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:23:00.287151764 +0000 UTC m=+1.126121382" watchObservedRunningTime="2025-02-13 19:23:00.287248845 +0000 UTC m=+1.126218463" Feb 13 19:23:00.294358 kubelet[2544]: I0213 19:23:00.294272 2544 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.294207497 podStartE2EDuration="1.294207497s" podCreationTimestamp="2025-02-13 19:22:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:23:00.293479452 +0000 UTC m=+1.132449070" watchObservedRunningTime="2025-02-13 19:23:00.294207497 +0000 UTC m=+1.133177115" Feb 13 19:23:01.268474 kubelet[2544]: E0213 19:23:01.268435 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:23:01.388202 kubelet[2544]: E0213 19:23:01.388167 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:23:03.815018 kubelet[2544]: I0213 19:23:03.814970 2544 kuberuntime_manager.go:1633] "Updating runtime config through 
cri with podcidr" CIDR="192.168.0.0/24" Feb 13 19:23:03.815578 kubelet[2544]: I0213 19:23:03.815465 2544 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 19:23:03.815628 containerd[1472]: time="2025-02-13T19:23:03.815270348Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 19:23:03.915099 sudo[1646]: pam_unix(sudo:session): session closed for user root Feb 13 19:23:03.916230 sshd[1645]: Connection closed by 10.0.0.1 port 54464 Feb 13 19:23:03.917084 sshd-session[1643]: pam_unix(sshd:session): session closed for user core Feb 13 19:23:03.921645 systemd[1]: sshd@6-10.0.0.135:22-10.0.0.1:54464.service: Deactivated successfully. Feb 13 19:23:03.923554 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 19:23:03.923857 systemd[1]: session-7.scope: Consumed 7.261s CPU time, 155.4M memory peak, 0B memory swap peak. Feb 13 19:23:03.925022 systemd-logind[1449]: Session 7 logged out. Waiting for processes to exit. Feb 13 19:23:03.926548 systemd-logind[1449]: Removed session 7. 
Feb 13 19:23:03.991319 kubelet[2544]: E0213 19:23:03.991222 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:23:04.470614 kubelet[2544]: I0213 19:23:04.470470 2544 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=5.470291269 podStartE2EDuration="5.470291269s" podCreationTimestamp="2025-02-13 19:22:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:23:00.30392309 +0000 UTC m=+1.142892708" watchObservedRunningTime="2025-02-13 19:23:04.470291269 +0000 UTC m=+5.309260887" Feb 13 19:23:04.479429 systemd[1]: Created slice kubepods-besteffort-pod457c3625_b67d_45aa_a91e_364e05283d43.slice - libcontainer container kubepods-besteffort-pod457c3625_b67d_45aa_a91e_364e05283d43.slice. 
Feb 13 19:23:04.566462 kubelet[2544]: I0213 19:23:04.566408 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/457c3625-b67d-45aa-a91e-364e05283d43-lib-modules\") pod \"kube-proxy-rq8lg\" (UID: \"457c3625-b67d-45aa-a91e-364e05283d43\") " pod="kube-system/kube-proxy-rq8lg" Feb 13 19:23:04.569046 kubelet[2544]: I0213 19:23:04.566470 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/457c3625-b67d-45aa-a91e-364e05283d43-kube-proxy\") pod \"kube-proxy-rq8lg\" (UID: \"457c3625-b67d-45aa-a91e-364e05283d43\") " pod="kube-system/kube-proxy-rq8lg" Feb 13 19:23:04.569046 kubelet[2544]: I0213 19:23:04.566735 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/457c3625-b67d-45aa-a91e-364e05283d43-xtables-lock\") pod \"kube-proxy-rq8lg\" (UID: \"457c3625-b67d-45aa-a91e-364e05283d43\") " pod="kube-system/kube-proxy-rq8lg" Feb 13 19:23:04.569046 kubelet[2544]: I0213 19:23:04.566765 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxdd9\" (UniqueName: \"kubernetes.io/projected/457c3625-b67d-45aa-a91e-364e05283d43-kube-api-access-hxdd9\") pod \"kube-proxy-rq8lg\" (UID: \"457c3625-b67d-45aa-a91e-364e05283d43\") " pod="kube-system/kube-proxy-rq8lg" Feb 13 19:23:04.790966 kubelet[2544]: E0213 19:23:04.790856 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:23:04.791792 containerd[1472]: time="2025-02-13T19:23:04.791745699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rq8lg,Uid:457c3625-b67d-45aa-a91e-364e05283d43,Namespace:kube-system,Attempt:0,}" Feb 
13 19:23:04.813758 containerd[1472]: time="2025-02-13T19:23:04.813663347Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:23:04.813758 containerd[1472]: time="2025-02-13T19:23:04.813719107Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:23:04.813758 containerd[1472]: time="2025-02-13T19:23:04.813735027Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:23:04.813908 containerd[1472]: time="2025-02-13T19:23:04.813816627Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:23:04.833762 systemd[1]: Started cri-containerd-aeac8e78a469b6a2732e76a300fea6cce96536bb445235f946e5a0488425f09d.scope - libcontainer container aeac8e78a469b6a2732e76a300fea6cce96536bb445235f946e5a0488425f09d. 
Feb 13 19:23:04.852183 containerd[1472]: time="2025-02-13T19:23:04.852110290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rq8lg,Uid:457c3625-b67d-45aa-a91e-364e05283d43,Namespace:kube-system,Attempt:0,} returns sandbox id \"aeac8e78a469b6a2732e76a300fea6cce96536bb445235f946e5a0488425f09d\"" Feb 13 19:23:04.852921 kubelet[2544]: E0213 19:23:04.852805 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:23:04.855819 containerd[1472]: time="2025-02-13T19:23:04.855383229Z" level=info msg="CreateContainer within sandbox \"aeac8e78a469b6a2732e76a300fea6cce96536bb445235f946e5a0488425f09d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 19:23:04.880018 containerd[1472]: time="2025-02-13T19:23:04.879940532Z" level=info msg="CreateContainer within sandbox \"aeac8e78a469b6a2732e76a300fea6cce96536bb445235f946e5a0488425f09d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e23ecde8f933fb737f43ea65eb063fc9cfed21ef7910756a3428ad284913344b\"" Feb 13 19:23:04.881326 containerd[1472]: time="2025-02-13T19:23:04.881148859Z" level=info msg="StartContainer for \"e23ecde8f933fb737f43ea65eb063fc9cfed21ef7910756a3428ad284913344b\"" Feb 13 19:23:04.897821 systemd[1]: Created slice kubepods-besteffort-pod2573860b_5889_47bf_a5ef_c631840068a6.slice - libcontainer container kubepods-besteffort-pod2573860b_5889_47bf_a5ef_c631840068a6.slice. Feb 13 19:23:04.913792 systemd[1]: Started cri-containerd-e23ecde8f933fb737f43ea65eb063fc9cfed21ef7910756a3428ad284913344b.scope - libcontainer container e23ecde8f933fb737f43ea65eb063fc9cfed21ef7910756a3428ad284913344b. 
Feb 13 19:23:04.936392 containerd[1472]: time="2025-02-13T19:23:04.936354140Z" level=info msg="StartContainer for \"e23ecde8f933fb737f43ea65eb063fc9cfed21ef7910756a3428ad284913344b\" returns successfully" Feb 13 19:23:04.971762 kubelet[2544]: I0213 19:23:04.971719 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2573860b-5889-47bf-a5ef-c631840068a6-var-lib-calico\") pod \"tigera-operator-76c4976dd7-kj7tg\" (UID: \"2573860b-5889-47bf-a5ef-c631840068a6\") " pod="tigera-operator/tigera-operator-76c4976dd7-kj7tg" Feb 13 19:23:04.971892 kubelet[2544]: I0213 19:23:04.971780 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95ztx\" (UniqueName: \"kubernetes.io/projected/2573860b-5889-47bf-a5ef-c631840068a6-kube-api-access-95ztx\") pod \"tigera-operator-76c4976dd7-kj7tg\" (UID: \"2573860b-5889-47bf-a5ef-c631840068a6\") " pod="tigera-operator/tigera-operator-76c4976dd7-kj7tg" Feb 13 19:23:05.201489 containerd[1472]: time="2025-02-13T19:23:05.201422609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-kj7tg,Uid:2573860b-5889-47bf-a5ef-c631840068a6,Namespace:tigera-operator,Attempt:0,}" Feb 13 19:23:05.220108 containerd[1472]: time="2025-02-13T19:23:05.219982030Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:23:05.220108 containerd[1472]: time="2025-02-13T19:23:05.220058351Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:23:05.220108 containerd[1472]: time="2025-02-13T19:23:05.220079791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:23:05.220378 containerd[1472]: time="2025-02-13T19:23:05.220164351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:23:05.234756 systemd[1]: Started cri-containerd-f53b8266f077b0d1463c01880b8068b51c6bc3229b9e4c719488b01652f18d93.scope - libcontainer container f53b8266f077b0d1463c01880b8068b51c6bc3229b9e4c719488b01652f18d93. Feb 13 19:23:05.265092 containerd[1472]: time="2025-02-13T19:23:05.265053996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-kj7tg,Uid:2573860b-5889-47bf-a5ef-c631840068a6,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f53b8266f077b0d1463c01880b8068b51c6bc3229b9e4c719488b01652f18d93\"" Feb 13 19:23:05.267967 containerd[1472]: time="2025-02-13T19:23:05.267869451Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Feb 13 19:23:05.278764 kubelet[2544]: E0213 19:23:05.278276 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:23:07.078090 kubelet[2544]: E0213 19:23:07.078020 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:23:07.096771 kubelet[2544]: I0213 19:23:07.096715 2544 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rq8lg" podStartSLOduration=3.096697381 podStartE2EDuration="3.096697381s" podCreationTimestamp="2025-02-13 19:23:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:23:05.28968421 +0000 UTC m=+6.128653828" watchObservedRunningTime="2025-02-13 19:23:07.096697381 +0000 UTC m=+7.935666999" Feb 13 
19:23:07.280897 kubelet[2544]: E0213 19:23:07.280862 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:23:07.790261 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount587637788.mount: Deactivated successfully.
Feb 13 19:23:08.143859 containerd[1472]: time="2025-02-13T19:23:08.143731036Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:23:08.144461 containerd[1472]: time="2025-02-13T19:23:08.144412719Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19124160"
Feb 13 19:23:08.145187 containerd[1472]: time="2025-02-13T19:23:08.145153163Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:23:08.147356 containerd[1472]: time="2025-02-13T19:23:08.147316132Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:23:08.149021 containerd[1472]: time="2025-02-13T19:23:08.148988180Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 2.881009608s"
Feb 13 19:23:08.149021 containerd[1472]: time="2025-02-13T19:23:08.149019780Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\""
Feb 13 19:23:08.156758 containerd[1472]: time="2025-02-13T19:23:08.156711454Z" level=info msg="CreateContainer within sandbox \"f53b8266f077b0d1463c01880b8068b51c6bc3229b9e4c719488b01652f18d93\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Feb 13 19:23:08.197870 containerd[1472]: time="2025-02-13T19:23:08.197825199Z" level=info msg="CreateContainer within sandbox \"f53b8266f077b0d1463c01880b8068b51c6bc3229b9e4c719488b01652f18d93\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"eca8d4d4225c43226ddfbee2b33ebe834f418bb3ea891a8af49819a650359461\""
Feb 13 19:23:08.198396 containerd[1472]: time="2025-02-13T19:23:08.198304641Z" level=info msg="StartContainer for \"eca8d4d4225c43226ddfbee2b33ebe834f418bb3ea891a8af49819a650359461\""
Feb 13 19:23:08.226749 systemd[1]: Started cri-containerd-eca8d4d4225c43226ddfbee2b33ebe834f418bb3ea891a8af49819a650359461.scope - libcontainer container eca8d4d4225c43226ddfbee2b33ebe834f418bb3ea891a8af49819a650359461.
Feb 13 19:23:08.273267 containerd[1472]: time="2025-02-13T19:23:08.273206018Z" level=info msg="StartContainer for \"eca8d4d4225c43226ddfbee2b33ebe834f418bb3ea891a8af49819a650359461\" returns successfully"
Feb 13 19:23:08.313354 kubelet[2544]: I0213 19:23:08.313258 2544 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4976dd7-kj7tg" podStartSLOduration=1.425716519 podStartE2EDuration="4.313241358s" podCreationTimestamp="2025-02-13 19:23:04 +0000 UTC" firstStartedPulling="2025-02-13 19:23:05.266138362 +0000 UTC m=+6.105107980" lastFinishedPulling="2025-02-13 19:23:08.153663241 +0000 UTC m=+8.992632819" observedRunningTime="2025-02-13 19:23:08.313046117 +0000 UTC m=+9.152015735" watchObservedRunningTime="2025-02-13 19:23:08.313241358 +0000 UTC m=+9.152210976"
Feb 13 19:23:11.406163 kubelet[2544]: E0213 19:23:11.406110 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:23:11.617259 update_engine[1451]: I20250213 19:23:11.616804 1451 update_attempter.cc:509] Updating boot flags...
Feb 13 19:23:11.643958 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2948)
Feb 13 19:23:11.699651 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2949)
Feb 13 19:23:11.736637 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2949)
Feb 13 19:23:11.887445 systemd[1]: Created slice kubepods-besteffort-pod83a039e3_89e3_4c97_8eed_c0203ce3aafc.slice - libcontainer container kubepods-besteffort-pod83a039e3_89e3_4c97_8eed_c0203ce3aafc.slice.
Feb 13 19:23:11.916228 kubelet[2544]: I0213 19:23:11.916166 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/83a039e3-89e3-4c97-8eed-c0203ce3aafc-typha-certs\") pod \"calico-typha-5bc6bdbd4d-qhxwc\" (UID: \"83a039e3-89e3-4c97-8eed-c0203ce3aafc\") " pod="calico-system/calico-typha-5bc6bdbd4d-qhxwc"
Feb 13 19:23:11.916383 kubelet[2544]: I0213 19:23:11.916258 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83a039e3-89e3-4c97-8eed-c0203ce3aafc-tigera-ca-bundle\") pod \"calico-typha-5bc6bdbd4d-qhxwc\" (UID: \"83a039e3-89e3-4c97-8eed-c0203ce3aafc\") " pod="calico-system/calico-typha-5bc6bdbd4d-qhxwc"
Feb 13 19:23:11.916383 kubelet[2544]: I0213 19:23:11.916283 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfrmg\" (UniqueName: \"kubernetes.io/projected/83a039e3-89e3-4c97-8eed-c0203ce3aafc-kube-api-access-zfrmg\") pod \"calico-typha-5bc6bdbd4d-qhxwc\" (UID: \"83a039e3-89e3-4c97-8eed-c0203ce3aafc\") " pod="calico-system/calico-typha-5bc6bdbd4d-qhxwc"
Feb 13 19:23:11.935483 systemd[1]: Created slice kubepods-besteffort-pod105c813f_a9f3_4fd9_87cb_9194647f9332.slice - libcontainer container kubepods-besteffort-pod105c813f_a9f3_4fd9_87cb_9194647f9332.slice.
Feb 13 19:23:12.019266 kubelet[2544]: I0213 19:23:12.017455 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/105c813f-a9f3-4fd9-87cb-9194647f9332-tigera-ca-bundle\") pod \"calico-node-d4xt4\" (UID: \"105c813f-a9f3-4fd9-87cb-9194647f9332\") " pod="calico-system/calico-node-d4xt4"
Feb 13 19:23:12.019266 kubelet[2544]: I0213 19:23:12.017506 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/105c813f-a9f3-4fd9-87cb-9194647f9332-cni-net-dir\") pod \"calico-node-d4xt4\" (UID: \"105c813f-a9f3-4fd9-87cb-9194647f9332\") " pod="calico-system/calico-node-d4xt4"
Feb 13 19:23:12.019266 kubelet[2544]: I0213 19:23:12.017542 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7pcn\" (UniqueName: \"kubernetes.io/projected/105c813f-a9f3-4fd9-87cb-9194647f9332-kube-api-access-f7pcn\") pod \"calico-node-d4xt4\" (UID: \"105c813f-a9f3-4fd9-87cb-9194647f9332\") " pod="calico-system/calico-node-d4xt4"
Feb 13 19:23:12.019266 kubelet[2544]: I0213 19:23:12.017565 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/105c813f-a9f3-4fd9-87cb-9194647f9332-xtables-lock\") pod \"calico-node-d4xt4\" (UID: \"105c813f-a9f3-4fd9-87cb-9194647f9332\") " pod="calico-system/calico-node-d4xt4"
Feb 13 19:23:12.019266 kubelet[2544]: I0213 19:23:12.017583 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/105c813f-a9f3-4fd9-87cb-9194647f9332-var-run-calico\") pod \"calico-node-d4xt4\" (UID: \"105c813f-a9f3-4fd9-87cb-9194647f9332\") " pod="calico-system/calico-node-d4xt4"
Feb 13 19:23:12.020079 kubelet[2544]: I0213 19:23:12.017614 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/105c813f-a9f3-4fd9-87cb-9194647f9332-cni-bin-dir\") pod \"calico-node-d4xt4\" (UID: \"105c813f-a9f3-4fd9-87cb-9194647f9332\") " pod="calico-system/calico-node-d4xt4"
Feb 13 19:23:12.020079 kubelet[2544]: I0213 19:23:12.017634 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/105c813f-a9f3-4fd9-87cb-9194647f9332-var-lib-calico\") pod \"calico-node-d4xt4\" (UID: \"105c813f-a9f3-4fd9-87cb-9194647f9332\") " pod="calico-system/calico-node-d4xt4"
Feb 13 19:23:12.020079 kubelet[2544]: I0213 19:23:12.017662 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/105c813f-a9f3-4fd9-87cb-9194647f9332-node-certs\") pod \"calico-node-d4xt4\" (UID: \"105c813f-a9f3-4fd9-87cb-9194647f9332\") " pod="calico-system/calico-node-d4xt4"
Feb 13 19:23:12.020079 kubelet[2544]: I0213 19:23:12.017677 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/105c813f-a9f3-4fd9-87cb-9194647f9332-lib-modules\") pod \"calico-node-d4xt4\" (UID: \"105c813f-a9f3-4fd9-87cb-9194647f9332\") " pod="calico-system/calico-node-d4xt4"
Feb 13 19:23:12.020079 kubelet[2544]: I0213 19:23:12.017694 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/105c813f-a9f3-4fd9-87cb-9194647f9332-policysync\") pod \"calico-node-d4xt4\" (UID: \"105c813f-a9f3-4fd9-87cb-9194647f9332\") " pod="calico-system/calico-node-d4xt4"
Feb 13 19:23:12.020583 kubelet[2544]: I0213 19:23:12.017708 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/105c813f-a9f3-4fd9-87cb-9194647f9332-cni-log-dir\") pod \"calico-node-d4xt4\" (UID: \"105c813f-a9f3-4fd9-87cb-9194647f9332\") " pod="calico-system/calico-node-d4xt4"
Feb 13 19:23:12.020583 kubelet[2544]: I0213 19:23:12.017747 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/105c813f-a9f3-4fd9-87cb-9194647f9332-flexvol-driver-host\") pod \"calico-node-d4xt4\" (UID: \"105c813f-a9f3-4fd9-87cb-9194647f9332\") " pod="calico-system/calico-node-d4xt4"
Feb 13 19:23:12.041525 kubelet[2544]: E0213 19:23:12.040609 2544 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dtfv4" podUID="278e43f1-bd8c-4a43-8396-436ddaca249b"
Feb 13 19:23:12.117985 kubelet[2544]: I0213 19:23:12.117950 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/278e43f1-bd8c-4a43-8396-436ddaca249b-kubelet-dir\") pod \"csi-node-driver-dtfv4\" (UID: \"278e43f1-bd8c-4a43-8396-436ddaca249b\") " pod="calico-system/csi-node-driver-dtfv4"
Feb 13 19:23:12.118212 kubelet[2544]: I0213 19:23:12.118194 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/278e43f1-bd8c-4a43-8396-436ddaca249b-socket-dir\") pod \"csi-node-driver-dtfv4\" (UID: \"278e43f1-bd8c-4a43-8396-436ddaca249b\") " pod="calico-system/csi-node-driver-dtfv4"
Feb 13 19:23:12.118292 kubelet[2544]: I0213 19:23:12.118278 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/278e43f1-bd8c-4a43-8396-436ddaca249b-varrun\") pod \"csi-node-driver-dtfv4\" (UID: \"278e43f1-bd8c-4a43-8396-436ddaca249b\") " pod="calico-system/csi-node-driver-dtfv4"
Feb 13 19:23:12.118409 kubelet[2544]: I0213 19:23:12.118392 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8dcm\" (UniqueName: \"kubernetes.io/projected/278e43f1-bd8c-4a43-8396-436ddaca249b-kube-api-access-f8dcm\") pod \"csi-node-driver-dtfv4\" (UID: \"278e43f1-bd8c-4a43-8396-436ddaca249b\") " pod="calico-system/csi-node-driver-dtfv4"
Feb 13 19:23:12.118562 kubelet[2544]: I0213 19:23:12.118516 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/278e43f1-bd8c-4a43-8396-436ddaca249b-registration-dir\") pod \"csi-node-driver-dtfv4\" (UID: \"278e43f1-bd8c-4a43-8396-436ddaca249b\") " pod="calico-system/csi-node-driver-dtfv4"
Feb 13 19:23:12.126212 kubelet[2544]: E0213 19:23:12.126186 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:23:12.126212 kubelet[2544]: W0213 19:23:12.126207 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:23:12.126392 kubelet[2544]: E0213 19:23:12.126234 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:23:12.126440 kubelet[2544]: E0213 19:23:12.126427 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:23:12.126440 kubelet[2544]: W0213 19:23:12.126438 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:23:12.126560 kubelet[2544]: E0213 19:23:12.126493 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:23:12.126619 kubelet[2544]: E0213 19:23:12.126591 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:23:12.126619 kubelet[2544]: W0213 19:23:12.126616 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:23:12.126709 kubelet[2544]: E0213 19:23:12.126689 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:23:12.126854 kubelet[2544]: E0213 19:23:12.126765 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:23:12.126854 kubelet[2544]: W0213 19:23:12.126775 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:23:12.126854 kubelet[2544]: E0213 19:23:12.126783 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:23:12.134300 kubelet[2544]: E0213 19:23:12.134154 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:23:12.134300 kubelet[2544]: W0213 19:23:12.134199 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:23:12.134300 kubelet[2544]: E0213 19:23:12.134218 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:23:12.203099 kubelet[2544]: E0213 19:23:12.202986 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:23:12.204773 containerd[1472]: time="2025-02-13T19:23:12.203978296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5bc6bdbd4d-qhxwc,Uid:83a039e3-89e3-4c97-8eed-c0203ce3aafc,Namespace:calico-system,Attempt:0,}"
Feb 13 19:23:12.219980 kubelet[2544]: E0213 19:23:12.219951 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:23:12.220231 kubelet[2544]: W0213 19:23:12.220117 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:23:12.220231 kubelet[2544]: E0213 19:23:12.220143 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:23:12.220554 kubelet[2544]: E0213 19:23:12.220540 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:23:12.220711 kubelet[2544]: W0213 19:23:12.220639 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:23:12.220711 kubelet[2544]: E0213 19:23:12.220667 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:23:12.221123 kubelet[2544]: E0213 19:23:12.221043 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:23:12.221123 kubelet[2544]: W0213 19:23:12.221058 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:23:12.221123 kubelet[2544]: E0213 19:23:12.221074 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:23:12.221361 kubelet[2544]: E0213 19:23:12.221308 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:23:12.221361 kubelet[2544]: W0213 19:23:12.221359 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:23:12.221469 kubelet[2544]: E0213 19:23:12.221380 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:23:12.221807 kubelet[2544]: E0213 19:23:12.221622 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:23:12.221807 kubelet[2544]: W0213 19:23:12.221633 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:23:12.221807 kubelet[2544]: E0213 19:23:12.221644 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:23:12.222129 kubelet[2544]: E0213 19:23:12.221832 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:23:12.222129 kubelet[2544]: W0213 19:23:12.221840 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:23:12.222129 kubelet[2544]: E0213 19:23:12.221857 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:23:12.223541 kubelet[2544]: E0213 19:23:12.223522 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:23:12.223707 kubelet[2544]: W0213 19:23:12.223634 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:23:12.223707 kubelet[2544]: E0213 19:23:12.223661 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:23:12.224056 kubelet[2544]: E0213 19:23:12.223960 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:23:12.224056 kubelet[2544]: W0213 19:23:12.223971 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:23:12.224056 kubelet[2544]: E0213 19:23:12.224016 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:23:12.224523 kubelet[2544]: E0213 19:23:12.224342 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:23:12.224523 kubelet[2544]: W0213 19:23:12.224357 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:23:12.224523 kubelet[2544]: E0213 19:23:12.224429 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:23:12.225872 kubelet[2544]: E0213 19:23:12.225726 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:23:12.225872 kubelet[2544]: W0213 19:23:12.225742 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:23:12.225872 kubelet[2544]: E0213 19:23:12.225859 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:23:12.226286 kubelet[2544]: E0213 19:23:12.226195 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:23:12.226286 kubelet[2544]: W0213 19:23:12.226210 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:23:12.226286 kubelet[2544]: E0213 19:23:12.226254 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:23:12.230684 kubelet[2544]: E0213 19:23:12.229028 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:23:12.230684 kubelet[2544]: W0213 19:23:12.229047 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:23:12.230684 kubelet[2544]: E0213 19:23:12.229198 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:23:12.230684 kubelet[2544]: E0213 19:23:12.229269 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:23:12.230684 kubelet[2544]: W0213 19:23:12.229278 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:23:12.230684 kubelet[2544]: E0213 19:23:12.229528 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:23:12.230684 kubelet[2544]: W0213 19:23:12.229539 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:23:12.230684 kubelet[2544]: E0213 19:23:12.229746 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:23:12.230684 kubelet[2544]: W0213 19:23:12.229755 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:23:12.230684 kubelet[2544]: E0213 19:23:12.229770 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:23:12.230684 kubelet[2544]: E0213 19:23:12.229982 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:23:12.230975 kubelet[2544]: W0213 19:23:12.229990 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:23:12.230975 kubelet[2544]: E0213 19:23:12.230000 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:23:12.230975 kubelet[2544]: E0213 19:23:12.230264 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:23:12.230975 kubelet[2544]: W0213 19:23:12.230274 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:23:12.230975 kubelet[2544]: E0213 19:23:12.230285 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:23:12.230975 kubelet[2544]: E0213 19:23:12.230310 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:23:12.230975 kubelet[2544]: E0213 19:23:12.230815 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:23:12.231318 kubelet[2544]: E0213 19:23:12.231177 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:23:12.231318 kubelet[2544]: W0213 19:23:12.231192 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:23:12.231318 kubelet[2544]: E0213 19:23:12.231207 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:23:12.231672 kubelet[2544]: E0213 19:23:12.231555 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:23:12.231672 kubelet[2544]: W0213 19:23:12.231570 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:23:12.231672 kubelet[2544]: E0213 19:23:12.231583 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:23:12.233241 kubelet[2544]: E0213 19:23:12.233145 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:23:12.233241 kubelet[2544]: W0213 19:23:12.233162 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:23:12.233241 kubelet[2544]: E0213 19:23:12.233183 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:23:12.234089 kubelet[2544]: E0213 19:23:12.233954 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:23:12.234089 kubelet[2544]: W0213 19:23:12.233970 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:23:12.234089 kubelet[2544]: E0213 19:23:12.233988 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:23:12.236239 kubelet[2544]: E0213 19:23:12.236154 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:23:12.236561 kubelet[2544]: W0213 19:23:12.236410 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:23:12.236561 kubelet[2544]: E0213 19:23:12.236434 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:23:12.236991 kubelet[2544]: E0213 19:23:12.236947 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:23:12.236991 kubelet[2544]: W0213 19:23:12.236961 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:23:12.237404 kubelet[2544]: E0213 19:23:12.237196 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:23:12.237936 kubelet[2544]: E0213 19:23:12.237731 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:23:12.237936 kubelet[2544]: W0213 19:23:12.237745 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:23:12.237936 kubelet[2544]: E0213 19:23:12.237764 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:23:12.238512 kubelet[2544]: E0213 19:23:12.238320 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:23:12.238512 kubelet[2544]: W0213 19:23:12.238345 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:23:12.238512 kubelet[2544]: E0213 19:23:12.238358 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:23:12.239949 kubelet[2544]: E0213 19:23:12.238928 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:23:12.240452 containerd[1472]: time="2025-02-13T19:23:12.240332902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-d4xt4,Uid:105c813f-a9f3-4fd9-87cb-9194647f9332,Namespace:calico-system,Attempt:0,}"
Feb 13 19:23:12.242187 containerd[1472]: time="2025-02-13T19:23:12.241158945Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:23:12.242187 containerd[1472]: time="2025-02-13T19:23:12.241987948Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:23:12.242187 containerd[1472]: time="2025-02-13T19:23:12.242000628Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:23:12.242187 containerd[1472]: time="2025-02-13T19:23:12.242093989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:23:12.255997 kubelet[2544]: E0213 19:23:12.255920 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:23:12.255997 kubelet[2544]: W0213 19:23:12.255940 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:23:12.255997 kubelet[2544]: E0213 19:23:12.255958 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:23:12.266778 systemd[1]: Started cri-containerd-103013729a0514800fbf5f93bb2265fa53b8f85ddd3d5bfed87b907bf78f80b4.scope - libcontainer container 103013729a0514800fbf5f93bb2265fa53b8f85ddd3d5bfed87b907bf78f80b4.
Feb 13 19:23:12.271738 containerd[1472]: time="2025-02-13T19:23:12.271554011Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:23:12.271738 containerd[1472]: time="2025-02-13T19:23:12.271627691Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:23:12.271738 containerd[1472]: time="2025-02-13T19:23:12.271642091Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:23:12.272989 containerd[1472]: time="2025-02-13T19:23:12.272928816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:23:12.299770 systemd[1]: Started cri-containerd-dfc66efa2c85e7615af6c450b93d9f5537de9116d158af82a6df5fb90b91d477.scope - libcontainer container dfc66efa2c85e7615af6c450b93d9f5537de9116d158af82a6df5fb90b91d477.
Feb 13 19:23:12.309266 containerd[1472]: time="2025-02-13T19:23:12.309202262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5bc6bdbd4d-qhxwc,Uid:83a039e3-89e3-4c97-8eed-c0203ce3aafc,Namespace:calico-system,Attempt:0,} returns sandbox id \"103013729a0514800fbf5f93bb2265fa53b8f85ddd3d5bfed87b907bf78f80b4\"" Feb 13 19:23:12.310102 kubelet[2544]: E0213 19:23:12.310075 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:23:12.311506 containerd[1472]: time="2025-02-13T19:23:12.311472989Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Feb 13 19:23:12.326836 containerd[1472]: time="2025-02-13T19:23:12.326792363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-d4xt4,Uid:105c813f-a9f3-4fd9-87cb-9194647f9332,Namespace:calico-system,Attempt:0,} returns sandbox id \"dfc66efa2c85e7615af6c450b93d9f5537de9116d158af82a6df5fb90b91d477\"" Feb 13 19:23:12.327618 kubelet[2544]: E0213 19:23:12.327538 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:23:13.462440 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2120475469.mount: Deactivated successfully. 
Feb 13 19:23:13.836056 containerd[1472]: time="2025-02-13T19:23:13.835670019Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:23:13.836056 containerd[1472]: time="2025-02-13T19:23:13.835862860Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29231308" Feb 13 19:23:13.839327 containerd[1472]: time="2025-02-13T19:23:13.838983310Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:23:13.841764 containerd[1472]: time="2025-02-13T19:23:13.841714639Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:23:13.842568 containerd[1472]: time="2025-02-13T19:23:13.842334641Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 1.530825412s" Feb 13 19:23:13.842568 containerd[1472]: time="2025-02-13T19:23:13.842382521Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\"" Feb 13 19:23:13.843582 containerd[1472]: time="2025-02-13T19:23:13.843555405Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Feb 13 19:23:13.852269 containerd[1472]: time="2025-02-13T19:23:13.851503951Z" level=info msg="CreateContainer within sandbox \"103013729a0514800fbf5f93bb2265fa53b8f85ddd3d5bfed87b907bf78f80b4\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Feb 13 19:23:13.885231 containerd[1472]: time="2025-02-13T19:23:13.885166940Z" level=info msg="CreateContainer within sandbox \"103013729a0514800fbf5f93bb2265fa53b8f85ddd3d5bfed87b907bf78f80b4\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"b9b13f7aa598dbe66097be4d1d1c1fba198805e079124e6e885d3b525bd61b69\"" Feb 13 19:23:13.888306 containerd[1472]: time="2025-02-13T19:23:13.888264190Z" level=info msg="StartContainer for \"b9b13f7aa598dbe66097be4d1d1c1fba198805e079124e6e885d3b525bd61b69\"" Feb 13 19:23:13.917819 systemd[1]: Started cri-containerd-b9b13f7aa598dbe66097be4d1d1c1fba198805e079124e6e885d3b525bd61b69.scope - libcontainer container b9b13f7aa598dbe66097be4d1d1c1fba198805e079124e6e885d3b525bd61b69. Feb 13 19:23:13.955056 containerd[1472]: time="2025-02-13T19:23:13.955014727Z" level=info msg="StartContainer for \"b9b13f7aa598dbe66097be4d1d1c1fba198805e079124e6e885d3b525bd61b69\" returns successfully" Feb 13 19:23:13.999627 kubelet[2544]: E0213 19:23:13.999421 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:23:14.029049 kubelet[2544]: E0213 19:23:14.028475 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:23:14.029049 kubelet[2544]: W0213 19:23:14.028981 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:23:14.029049 kubelet[2544]: E0213 19:23:14.029008 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:23:14.029672 kubelet[2544]: E0213 19:23:14.029307 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:23:14.029672 kubelet[2544]: W0213 19:23:14.029329 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:23:14.029672 kubelet[2544]: E0213 19:23:14.029342 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:23:14.029672 kubelet[2544]: E0213 19:23:14.029496 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:23:14.029672 kubelet[2544]: W0213 19:23:14.029505 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:23:14.029672 kubelet[2544]: E0213 19:23:14.029514 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:23:14.029804 kubelet[2544]: E0213 19:23:14.029790 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:23:14.029804 kubelet[2544]: W0213 19:23:14.029800 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:23:14.029853 kubelet[2544]: E0213 19:23:14.029808 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:23:14.030511 kubelet[2544]: E0213 19:23:14.030357 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:23:14.030511 kubelet[2544]: W0213 19:23:14.030371 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:23:14.030511 kubelet[2544]: E0213 19:23:14.030382 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:23:14.030677 kubelet[2544]: E0213 19:23:14.030567 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:23:14.030677 kubelet[2544]: W0213 19:23:14.030581 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:23:14.030677 kubelet[2544]: E0213 19:23:14.030589 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:23:14.030767 kubelet[2544]: E0213 19:23:14.030748 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:23:14.030767 kubelet[2544]: W0213 19:23:14.030763 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:23:14.030914 kubelet[2544]: E0213 19:23:14.030772 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:23:14.031065 kubelet[2544]: E0213 19:23:14.031049 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:23:14.031065 kubelet[2544]: W0213 19:23:14.031060 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:23:14.031123 kubelet[2544]: E0213 19:23:14.031067 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:23:14.031313 kubelet[2544]: E0213 19:23:14.031298 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:23:14.031313 kubelet[2544]: W0213 19:23:14.031312 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:23:14.031387 kubelet[2544]: E0213 19:23:14.031330 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:23:14.031526 kubelet[2544]: E0213 19:23:14.031512 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:23:14.033609 kubelet[2544]: W0213 19:23:14.031525 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:23:14.033609 kubelet[2544]: E0213 19:23:14.031711 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:23:14.033609 kubelet[2544]: E0213 19:23:14.031953 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:23:14.033609 kubelet[2544]: W0213 19:23:14.031963 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:23:14.033609 kubelet[2544]: E0213 19:23:14.031972 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:23:14.033609 kubelet[2544]: E0213 19:23:14.032122 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:23:14.033609 kubelet[2544]: W0213 19:23:14.032129 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:23:14.033609 kubelet[2544]: E0213 19:23:14.032137 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:23:14.033609 kubelet[2544]: E0213 19:23:14.032263 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:23:14.033609 kubelet[2544]: W0213 19:23:14.032271 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:23:14.033880 kubelet[2544]: E0213 19:23:14.032280 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:23:14.033880 kubelet[2544]: E0213 19:23:14.032423 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:23:14.033880 kubelet[2544]: W0213 19:23:14.032431 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:23:14.033880 kubelet[2544]: E0213 19:23:14.032438 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:23:14.033880 kubelet[2544]: E0213 19:23:14.032566 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:23:14.033880 kubelet[2544]: W0213 19:23:14.032573 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:23:14.033880 kubelet[2544]: E0213 19:23:14.032579 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:23:14.245229 kubelet[2544]: E0213 19:23:14.245178 2544 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dtfv4" podUID="278e43f1-bd8c-4a43-8396-436ddaca249b" Feb 13 19:23:14.312192 kubelet[2544]: E0213 19:23:14.312120 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:23:14.321903 kubelet[2544]: I0213 19:23:14.321443 2544 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5bc6bdbd4d-qhxwc" podStartSLOduration=1.788771398 podStartE2EDuration="3.321427495s" podCreationTimestamp="2025-02-13 19:23:11 +0000 UTC" firstStartedPulling="2025-02-13 19:23:12.310699267 +0000 UTC m=+13.149668885" lastFinishedPulling="2025-02-13 19:23:13.843355364 +0000 UTC m=+14.682324982" observedRunningTime="2025-02-13 19:23:14.321241414 +0000 UTC m=+15.160211032" watchObservedRunningTime="2025-02-13 19:23:14.321427495 +0000 UTC m=+15.160397113" Feb 13 19:23:14.334794 kubelet[2544]: E0213 19:23:14.334741 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:23:14.334794 kubelet[2544]: W0213 19:23:14.334767 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:23:14.334794 kubelet[2544]: E0213 19:23:14.334786 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:23:14.335004 kubelet[2544]: E0213 19:23:14.334945 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:23:14.335004 kubelet[2544]: W0213 19:23:14.334954 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:23:14.335004 kubelet[2544]: E0213 19:23:14.334962 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:23:14.335119 kubelet[2544]: E0213 19:23:14.335105 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:23:14.335119 kubelet[2544]: W0213 19:23:14.335115 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:23:14.335181 kubelet[2544]: E0213 19:23:14.335124 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:23:14.335272 kubelet[2544]: E0213 19:23:14.335261 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:23:14.335272 kubelet[2544]: W0213 19:23:14.335270 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:23:14.335342 kubelet[2544]: E0213 19:23:14.335278 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:23:14.335436 kubelet[2544]: E0213 19:23:14.335424 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:23:14.335436 kubelet[2544]: W0213 19:23:14.335435 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:23:14.335498 kubelet[2544]: E0213 19:23:14.335444 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:23:14.335579 kubelet[2544]: E0213 19:23:14.335567 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:23:14.335579 kubelet[2544]: W0213 19:23:14.335577 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:23:14.335650 kubelet[2544]: E0213 19:23:14.335584 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:23:14.335735 kubelet[2544]: E0213 19:23:14.335723 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:23:14.335735 kubelet[2544]: W0213 19:23:14.335733 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:23:14.335800 kubelet[2544]: E0213 19:23:14.335740 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:23:14.335878 kubelet[2544]: E0213 19:23:14.335867 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:23:14.335878 kubelet[2544]: W0213 19:23:14.335878 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:23:14.335937 kubelet[2544]: E0213 19:23:14.335886 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:23:14.336044 kubelet[2544]: E0213 19:23:14.336028 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:23:14.336044 kubelet[2544]: W0213 19:23:14.336038 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:23:14.336096 kubelet[2544]: E0213 19:23:14.336046 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:23:14.336178 kubelet[2544]: E0213 19:23:14.336167 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:23:14.336178 kubelet[2544]: W0213 19:23:14.336176 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:23:14.336237 kubelet[2544]: E0213 19:23:14.336184 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:23:14.336316 kubelet[2544]: E0213 19:23:14.336304 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:23:14.336316 kubelet[2544]: W0213 19:23:14.336314 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:23:14.336382 kubelet[2544]: E0213 19:23:14.336329 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:23:14.336466 kubelet[2544]: E0213 19:23:14.336454 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:23:14.336466 kubelet[2544]: W0213 19:23:14.336463 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:23:14.336522 kubelet[2544]: E0213 19:23:14.336479 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:23:14.336631 kubelet[2544]: E0213 19:23:14.336620 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:23:14.336631 kubelet[2544]: W0213 19:23:14.336630 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:23:14.336692 kubelet[2544]: E0213 19:23:14.336638 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:23:14.336784 kubelet[2544]: E0213 19:23:14.336772 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:23:14.336784 kubelet[2544]: W0213 19:23:14.336782 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:23:14.336838 kubelet[2544]: E0213 19:23:14.336790 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:23:14.336922 kubelet[2544]: E0213 19:23:14.336911 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:23:14.336922 kubelet[2544]: W0213 19:23:14.336920 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:23:14.336983 kubelet[2544]: E0213 19:23:14.336927 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:23:14.347376 kubelet[2544]: E0213 19:23:14.347347 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:23:14.347376 kubelet[2544]: W0213 19:23:14.347368 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:23:14.347376 kubelet[2544]: E0213 19:23:14.347384 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:23:14.347599 kubelet[2544]: E0213 19:23:14.347583 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:23:14.347646 kubelet[2544]: W0213 19:23:14.347605 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:23:14.347646 kubelet[2544]: E0213 19:23:14.347621 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:23:14.347851 kubelet[2544]: E0213 19:23:14.347835 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:23:14.347851 kubelet[2544]: W0213 19:23:14.347847 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:23:14.347914 kubelet[2544]: E0213 19:23:14.347863 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:23:14.348077 kubelet[2544]: E0213 19:23:14.348064 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:23:14.348077 kubelet[2544]: W0213 19:23:14.348074 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:23:14.348145 kubelet[2544]: E0213 19:23:14.348089 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:23:14.348252 kubelet[2544]: E0213 19:23:14.348240 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:23:14.348252 kubelet[2544]: W0213 19:23:14.348251 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:23:14.348294 kubelet[2544]: E0213 19:23:14.348264 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:23:14.348442 kubelet[2544]: E0213 19:23:14.348416 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:23:14.348442 kubelet[2544]: W0213 19:23:14.348441 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:23:14.348498 kubelet[2544]: E0213 19:23:14.348455 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:23:14.348650 kubelet[2544]: E0213 19:23:14.348638 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:23:14.348650 kubelet[2544]: W0213 19:23:14.348649 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:23:14.348705 kubelet[2544]: E0213 19:23:14.348674 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:23:14.348796 kubelet[2544]: E0213 19:23:14.348784 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:23:14.348796 kubelet[2544]: W0213 19:23:14.348795 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:23:14.348866 kubelet[2544]: E0213 19:23:14.348830 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:23:14.348982 kubelet[2544]: E0213 19:23:14.348969 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:23:14.349015 kubelet[2544]: W0213 19:23:14.348991 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:23:14.349015 kubelet[2544]: E0213 19:23:14.349006 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:23:14.349208 kubelet[2544]: E0213 19:23:14.349192 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:23:14.349251 kubelet[2544]: W0213 19:23:14.349208 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:23:14.349251 kubelet[2544]: E0213 19:23:14.349226 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:23:14.349409 kubelet[2544]: E0213 19:23:14.349397 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:23:14.349409 kubelet[2544]: W0213 19:23:14.349409 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:23:14.349466 kubelet[2544]: E0213 19:23:14.349421 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:23:14.349565 kubelet[2544]: E0213 19:23:14.349553 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:23:14.349565 kubelet[2544]: W0213 19:23:14.349562 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:23:14.349649 kubelet[2544]: E0213 19:23:14.349574 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:23:14.349790 kubelet[2544]: E0213 19:23:14.349779 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:23:14.349790 kubelet[2544]: W0213 19:23:14.349789 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:23:14.349842 kubelet[2544]: E0213 19:23:14.349805 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:23:14.350239 kubelet[2544]: E0213 19:23:14.350124 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:23:14.350239 kubelet[2544]: W0213 19:23:14.350143 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:23:14.350239 kubelet[2544]: E0213 19:23:14.350162 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:23:14.350428 kubelet[2544]: E0213 19:23:14.350414 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:23:14.350481 kubelet[2544]: W0213 19:23:14.350470 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:23:14.350656 kubelet[2544]: E0213 19:23:14.350528 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:23:14.350778 kubelet[2544]: E0213 19:23:14.350764 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:23:14.350834 kubelet[2544]: W0213 19:23:14.350822 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:23:14.350902 kubelet[2544]: E0213 19:23:14.350886 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:23:14.351122 kubelet[2544]: E0213 19:23:14.351099 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:23:14.351122 kubelet[2544]: W0213 19:23:14.351113 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:23:14.351122 kubelet[2544]: E0213 19:23:14.351129 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:23:14.351324 kubelet[2544]: E0213 19:23:14.351303 2544 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:23:14.351324 kubelet[2544]: W0213 19:23:14.351314 2544 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:23:14.351378 kubelet[2544]: E0213 19:23:14.351333 2544 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:23:15.004634 containerd[1472]: time="2025-02-13T19:23:15.004400377Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:23:15.005158 containerd[1472]: time="2025-02-13T19:23:15.005122899Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5117811" Feb 13 19:23:15.006135 containerd[1472]: time="2025-02-13T19:23:15.006089422Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:23:15.008423 containerd[1472]: time="2025-02-13T19:23:15.008166108Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:23:15.008910 containerd[1472]: time="2025-02-13T19:23:15.008888910Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.165198425s" Feb 13 19:23:15.008956 containerd[1472]: time="2025-02-13T19:23:15.008915990Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Feb 13 19:23:15.011055 containerd[1472]: time="2025-02-13T19:23:15.010917556Z" level=info msg="CreateContainer within sandbox \"dfc66efa2c85e7615af6c450b93d9f5537de9116d158af82a6df5fb90b91d477\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 13 19:23:15.026662 containerd[1472]: time="2025-02-13T19:23:15.025296117Z" level=info msg="CreateContainer within sandbox \"dfc66efa2c85e7615af6c450b93d9f5537de9116d158af82a6df5fb90b91d477\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"4a10202938c618866163fdd197f6a95fd3abf43035637e64fcf3184ecbe3d588\"" Feb 13 19:23:15.027215 containerd[1472]: time="2025-02-13T19:23:15.027190323Z" level=info msg="StartContainer for \"4a10202938c618866163fdd197f6a95fd3abf43035637e64fcf3184ecbe3d588\"" Feb 13 19:23:15.062794 systemd[1]: Started cri-containerd-4a10202938c618866163fdd197f6a95fd3abf43035637e64fcf3184ecbe3d588.scope - libcontainer container 4a10202938c618866163fdd197f6a95fd3abf43035637e64fcf3184ecbe3d588. Feb 13 19:23:15.087178 containerd[1472]: time="2025-02-13T19:23:15.085583050Z" level=info msg="StartContainer for \"4a10202938c618866163fdd197f6a95fd3abf43035637e64fcf3184ecbe3d588\" returns successfully" Feb 13 19:23:15.123666 systemd[1]: cri-containerd-4a10202938c618866163fdd197f6a95fd3abf43035637e64fcf3184ecbe3d588.scope: Deactivated successfully. Feb 13 19:23:15.144160 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4a10202938c618866163fdd197f6a95fd3abf43035637e64fcf3184ecbe3d588-rootfs.mount: Deactivated successfully. 
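The repeated driver-call.go errors above come from kubelet probing the FlexVolume plugin directory before the flexvol-driver init container (started in the records above) has installed the `uds` binary: the exec fails ("executable file not found in $PATH"), stdout is empty, and unmarshalling "" produces "unexpected end of JSON input". A minimal sketch of the JSON contract kubelet expects — a hypothetical stand-in driver, not Calico's actual `uds` binary:

```shell
# Hypothetical FlexVolume driver sketch: kubelet invokes the driver with a
# command ("init", "mount", ...) and parses a JSON status object from stdout.
# A missing binary means empty stdout, which is exactly the "unexpected end
# of JSON input" unmarshal error repeated in the log above.
flexvol_driver() {
  case "$1" in
    init)
      # "attach": false tells kubelet not to issue attach/detach calls.
      printf '%s' '{"status":"Success","capabilities":{"attach":false}}'
      ;;
    *)
      printf '%s' '{"status":"Not supported"}'
      return 1
      ;;
  esac
}

flexvol_driver init
```

Once the init container finishes copying the real driver into /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/, the probe errors stop.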
Feb 13 19:23:15.215812 containerd[1472]: time="2025-02-13T19:23:15.211258169Z" level=info msg="shim disconnected" id=4a10202938c618866163fdd197f6a95fd3abf43035637e64fcf3184ecbe3d588 namespace=k8s.io Feb 13 19:23:15.215812 containerd[1472]: time="2025-02-13T19:23:15.215807662Z" level=warning msg="cleaning up after shim disconnected" id=4a10202938c618866163fdd197f6a95fd3abf43035637e64fcf3184ecbe3d588 namespace=k8s.io Feb 13 19:23:15.215812 containerd[1472]: time="2025-02-13T19:23:15.215821422Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:23:15.314858 kubelet[2544]: I0213 19:23:15.314734 2544 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 19:23:15.315547 kubelet[2544]: E0213 19:23:15.315327 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:23:15.315547 kubelet[2544]: E0213 19:23:15.315410 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:23:15.316755 containerd[1472]: time="2025-02-13T19:23:15.316722071Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Feb 13 19:23:16.246192 kubelet[2544]: E0213 19:23:16.246143 2544 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dtfv4" podUID="278e43f1-bd8c-4a43-8396-436ddaca249b" Feb 13 19:23:18.246171 kubelet[2544]: E0213 19:23:18.246116 2544 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-dtfv4" podUID="278e43f1-bd8c-4a43-8396-436ddaca249b" Feb 13 19:23:19.414878 containerd[1472]: time="2025-02-13T19:23:19.414832574Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:23:19.416088 containerd[1472]: time="2025-02-13T19:23:19.415476015Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Feb 13 19:23:19.416088 containerd[1472]: time="2025-02-13T19:23:19.416038576Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:23:19.418282 containerd[1472]: time="2025-02-13T19:23:19.418253181Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:23:19.419419 containerd[1472]: time="2025-02-13T19:23:19.419275064Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 4.102512433s" Feb 13 19:23:19.419419 containerd[1472]: time="2025-02-13T19:23:19.419314064Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Feb 13 19:23:19.422083 containerd[1472]: time="2025-02-13T19:23:19.422041030Z" level=info msg="CreateContainer within sandbox \"dfc66efa2c85e7615af6c450b93d9f5537de9116d158af82a6df5fb90b91d477\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 19:23:19.506800 
containerd[1472]: time="2025-02-13T19:23:19.506755177Z" level=info msg="CreateContainer within sandbox \"dfc66efa2c85e7615af6c450b93d9f5537de9116d158af82a6df5fb90b91d477\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ef837e934c8445f8be9e5e219eb87c1f996ed0a014c30c49a84182de48d447cf\"" Feb 13 19:23:19.508525 containerd[1472]: time="2025-02-13T19:23:19.508143940Z" level=info msg="StartContainer for \"ef837e934c8445f8be9e5e219eb87c1f996ed0a014c30c49a84182de48d447cf\"" Feb 13 19:23:19.538843 systemd[1]: Started cri-containerd-ef837e934c8445f8be9e5e219eb87c1f996ed0a014c30c49a84182de48d447cf.scope - libcontainer container ef837e934c8445f8be9e5e219eb87c1f996ed0a014c30c49a84182de48d447cf. Feb 13 19:23:19.570163 containerd[1472]: time="2025-02-13T19:23:19.570022237Z" level=info msg="StartContainer for \"ef837e934c8445f8be9e5e219eb87c1f996ed0a014c30c49a84182de48d447cf\" returns successfully" Feb 13 19:23:20.091880 containerd[1472]: time="2025-02-13T19:23:20.091830417Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:23:20.093801 systemd[1]: cri-containerd-ef837e934c8445f8be9e5e219eb87c1f996ed0a014c30c49a84182de48d447cf.scope: Deactivated successfully. Feb 13 19:23:20.111125 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ef837e934c8445f8be9e5e219eb87c1f996ed0a014c30c49a84182de48d447cf-rootfs.mount: Deactivated successfully. 
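The "no network config found in /etc/cni/net.d" error above fires because the install-cni container wrote calico-kubeconfig first: the write triggers containerd's file-watch reload, but only actual network config files satisfy the loader. A sketch of that check, under the assumption that files matching *.conf or *.conflist are what count as network configs:

```shell
# Sketch of the condition behind "no network config found in /etc/cni/net.d"
# (assumption: any write to the directory triggers a reload attempt, but only
# *.conf / *.conflist files qualify as network configs, so a kubeconfig write
# alone raises the event without satisfying the check).
has_cni_config() {
  dir="$1"
  for f in "$dir"/*.conf "$dir"/*.conflist; do
    # unmatched globs stay literal, so -e filters them out
    [ -e "$f" ] && return 0
  done
  echo "no network config found in $dir: cni plugin not initialized" >&2
  return 1
}
```

The error clears once install-cni drops its conflist into the directory, which is why the later sandbox failures shift from "cni plugin not initialized" to the calico plugin's own nodename check.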
Feb 13 19:23:20.128939 containerd[1472]: time="2025-02-13T19:23:20.128881934Z" level=info msg="shim disconnected" id=ef837e934c8445f8be9e5e219eb87c1f996ed0a014c30c49a84182de48d447cf namespace=k8s.io Feb 13 19:23:20.128939 containerd[1472]: time="2025-02-13T19:23:20.128937814Z" level=warning msg="cleaning up after shim disconnected" id=ef837e934c8445f8be9e5e219eb87c1f996ed0a014c30c49a84182de48d447cf namespace=k8s.io Feb 13 19:23:20.128939 containerd[1472]: time="2025-02-13T19:23:20.128948214Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:23:20.135805 kubelet[2544]: I0213 19:23:20.135768 2544 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Feb 13 19:23:20.169421 systemd[1]: Created slice kubepods-burstable-pod11ebe699_3307_4e28_ac6a_e555af8a982c.slice - libcontainer container kubepods-burstable-pod11ebe699_3307_4e28_ac6a_e555af8a982c.slice. Feb 13 19:23:20.179931 systemd[1]: Created slice kubepods-burstable-podbeb760d9_f48b_4afe_876e_eb78778e0f0b.slice - libcontainer container kubepods-burstable-podbeb760d9_f48b_4afe_876e_eb78778e0f0b.slice. Feb 13 19:23:20.185522 systemd[1]: Created slice kubepods-besteffort-pod400c37f6_81a9_403c_9b2f_cc1d18ee97aa.slice - libcontainer container kubepods-besteffort-pod400c37f6_81a9_403c_9b2f_cc1d18ee97aa.slice. Feb 13 19:23:20.191526 systemd[1]: Created slice kubepods-besteffort-pod90b2170d_a844_4e38_873e_8af01cba6fe0.slice - libcontainer container kubepods-besteffort-pod90b2170d_a844_4e38_873e_8af01cba6fe0.slice. Feb 13 19:23:20.196686 systemd[1]: Created slice kubepods-besteffort-podbd7b8e08_eba9_4ff4_a84a_26b9405284a6.slice - libcontainer container kubepods-besteffort-podbd7b8e08_eba9_4ff4_a84a_26b9405284a6.slice. Feb 13 19:23:20.250678 systemd[1]: Created slice kubepods-besteffort-pod278e43f1_bd8c_4a43_8396_436ddaca249b.slice - libcontainer container kubepods-besteffort-pod278e43f1_bd8c_4a43_8396_436ddaca249b.slice. 
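The slice names created above follow the systemd cgroup driver's convention: kubepods-<qos>-pod<uid>.slice, with the dashes in the pod UID swapped for underscores (systemd reserves "-" as a hierarchy separator in unit names). A small illustrative helper, hypothetical but matching the names in the log:

```shell
# Hypothetical helper mirroring the slice naming visible above: kubelet's
# systemd cgroup driver builds kubepods-<qos>-pod<uid>.slice and replaces
# the dashes inside the pod UID with underscores.
pod_slice_name() {
  qos="$1"; uid="$2"
  echo "kubepods-${qos}-pod$(echo "$uid" | tr '-' '_').slice"
}

pod_slice_name burstable 11ebe699-3307-4e28-ac6a-e555af8a982c
# -> kubepods-burstable-pod11ebe699_3307_4e28_ac6a_e555af8a982c.slice
```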
Feb 13 19:23:20.253232 containerd[1472]: time="2025-02-13T19:23:20.253175911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dtfv4,Uid:278e43f1-bd8c-4a43-8396-436ddaca249b,Namespace:calico-system,Attempt:0,}" Feb 13 19:23:20.295747 kubelet[2544]: I0213 19:23:20.295563 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29nlh\" (UniqueName: \"kubernetes.io/projected/90b2170d-a844-4e38-873e-8af01cba6fe0-kube-api-access-29nlh\") pod \"calico-apiserver-5b44d967f-hpx7w\" (UID: \"90b2170d-a844-4e38-873e-8af01cba6fe0\") " pod="calico-apiserver/calico-apiserver-5b44d967f-hpx7w" Feb 13 19:23:20.295747 kubelet[2544]: I0213 19:23:20.295625 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/beb760d9-f48b-4afe-876e-eb78778e0f0b-config-volume\") pod \"coredns-6f6b679f8f-slhww\" (UID: \"beb760d9-f48b-4afe-876e-eb78778e0f0b\") " pod="kube-system/coredns-6f6b679f8f-slhww" Feb 13 19:23:20.295747 kubelet[2544]: I0213 19:23:20.295647 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plvs2\" (UniqueName: \"kubernetes.io/projected/11ebe699-3307-4e28-ac6a-e555af8a982c-kube-api-access-plvs2\") pod \"coredns-6f6b679f8f-8shn2\" (UID: \"11ebe699-3307-4e28-ac6a-e555af8a982c\") " pod="kube-system/coredns-6f6b679f8f-8shn2" Feb 13 19:23:20.295747 kubelet[2544]: I0213 19:23:20.295666 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd7b8e08-eba9-4ff4-a84a-26b9405284a6-tigera-ca-bundle\") pod \"calico-kube-controllers-58dfd6696-g69sl\" (UID: \"bd7b8e08-eba9-4ff4-a84a-26b9405284a6\") " pod="calico-system/calico-kube-controllers-58dfd6696-g69sl" Feb 13 19:23:20.295747 kubelet[2544]: I0213 19:23:20.295683 2544 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mxd9\" (UniqueName: \"kubernetes.io/projected/bd7b8e08-eba9-4ff4-a84a-26b9405284a6-kube-api-access-6mxd9\") pod \"calico-kube-controllers-58dfd6696-g69sl\" (UID: \"bd7b8e08-eba9-4ff4-a84a-26b9405284a6\") " pod="calico-system/calico-kube-controllers-58dfd6696-g69sl" Feb 13 19:23:20.296015 kubelet[2544]: I0213 19:23:20.295762 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c78rj\" (UniqueName: \"kubernetes.io/projected/beb760d9-f48b-4afe-876e-eb78778e0f0b-kube-api-access-c78rj\") pod \"coredns-6f6b679f8f-slhww\" (UID: \"beb760d9-f48b-4afe-876e-eb78778e0f0b\") " pod="kube-system/coredns-6f6b679f8f-slhww" Feb 13 19:23:20.296046 kubelet[2544]: I0213 19:23:20.296031 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vpk7\" (UniqueName: \"kubernetes.io/projected/400c37f6-81a9-403c-9b2f-cc1d18ee97aa-kube-api-access-6vpk7\") pod \"calico-apiserver-5b44d967f-p6w96\" (UID: \"400c37f6-81a9-403c-9b2f-cc1d18ee97aa\") " pod="calico-apiserver/calico-apiserver-5b44d967f-p6w96" Feb 13 19:23:20.296073 kubelet[2544]: I0213 19:23:20.296056 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/11ebe699-3307-4e28-ac6a-e555af8a982c-config-volume\") pod \"coredns-6f6b679f8f-8shn2\" (UID: \"11ebe699-3307-4e28-ac6a-e555af8a982c\") " pod="kube-system/coredns-6f6b679f8f-8shn2" Feb 13 19:23:20.325164 kubelet[2544]: I0213 19:23:20.296073 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/90b2170d-a844-4e38-873e-8af01cba6fe0-calico-apiserver-certs\") pod \"calico-apiserver-5b44d967f-hpx7w\" (UID: \"90b2170d-a844-4e38-873e-8af01cba6fe0\") 
" pod="calico-apiserver/calico-apiserver-5b44d967f-hpx7w" Feb 13 19:23:20.325164 kubelet[2544]: I0213 19:23:20.325086 2544 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/400c37f6-81a9-403c-9b2f-cc1d18ee97aa-calico-apiserver-certs\") pod \"calico-apiserver-5b44d967f-p6w96\" (UID: \"400c37f6-81a9-403c-9b2f-cc1d18ee97aa\") " pod="calico-apiserver/calico-apiserver-5b44d967f-p6w96" Feb 13 19:23:20.326018 kubelet[2544]: E0213 19:23:20.325958 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:23:20.326717 containerd[1472]: time="2025-02-13T19:23:20.326664103Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Feb 13 19:23:20.443260 containerd[1472]: time="2025-02-13T19:23:20.443131104Z" level=error msg="Failed to destroy network for sandbox \"50fc7b5a1212a7f5c0f58d01289dd6423491fe5b48f97ad072a542ba7bf3a842\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:20.449127 containerd[1472]: time="2025-02-13T19:23:20.448945956Z" level=error msg="encountered an error cleaning up failed sandbox \"50fc7b5a1212a7f5c0f58d01289dd6423491fe5b48f97ad072a542ba7bf3a842\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:20.449127 containerd[1472]: time="2025-02-13T19:23:20.449029237Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dtfv4,Uid:278e43f1-bd8c-4a43-8396-436ddaca249b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for 
sandbox \"50fc7b5a1212a7f5c0f58d01289dd6423491fe5b48f97ad072a542ba7bf3a842\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:20.449274 kubelet[2544]: E0213 19:23:20.449230 2544 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50fc7b5a1212a7f5c0f58d01289dd6423491fe5b48f97ad072a542ba7bf3a842\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:20.449333 kubelet[2544]: E0213 19:23:20.449288 2544 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50fc7b5a1212a7f5c0f58d01289dd6423491fe5b48f97ad072a542ba7bf3a842\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dtfv4" Feb 13 19:23:20.449333 kubelet[2544]: E0213 19:23:20.449315 2544 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50fc7b5a1212a7f5c0f58d01289dd6423491fe5b48f97ad072a542ba7bf3a842\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dtfv4" Feb 13 19:23:20.449395 kubelet[2544]: E0213 19:23:20.449349 2544 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dtfv4_calico-system(278e43f1-bd8c-4a43-8396-436ddaca249b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-dtfv4_calico-system(278e43f1-bd8c-4a43-8396-436ddaca249b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"50fc7b5a1212a7f5c0f58d01289dd6423491fe5b48f97ad072a542ba7bf3a842\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dtfv4" podUID="278e43f1-bd8c-4a43-8396-436ddaca249b" Feb 13 19:23:20.477416 kubelet[2544]: E0213 19:23:20.477152 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:23:20.478683 containerd[1472]: time="2025-02-13T19:23:20.478628658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-8shn2,Uid:11ebe699-3307-4e28-ac6a-e555af8a982c,Namespace:kube-system,Attempt:0,}" Feb 13 19:23:20.483119 kubelet[2544]: E0213 19:23:20.483058 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:23:20.486556 containerd[1472]: time="2025-02-13T19:23:20.485782753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-slhww,Uid:beb760d9-f48b-4afe-876e-eb78778e0f0b,Namespace:kube-system,Attempt:0,}" Feb 13 19:23:20.488365 containerd[1472]: time="2025-02-13T19:23:20.488317478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b44d967f-p6w96,Uid:400c37f6-81a9-403c-9b2f-cc1d18ee97aa,Namespace:calico-apiserver,Attempt:0,}" Feb 13 19:23:20.501029 containerd[1472]: time="2025-02-13T19:23:20.500853624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58dfd6696-g69sl,Uid:bd7b8e08-eba9-4ff4-a84a-26b9405284a6,Namespace:calico-system,Attempt:0,}" Feb 13 19:23:20.502131 containerd[1472]: 
time="2025-02-13T19:23:20.502088986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b44d967f-hpx7w,Uid:90b2170d-a844-4e38-873e-8af01cba6fe0,Namespace:calico-apiserver,Attempt:0,}" Feb 13 19:23:20.579384 containerd[1472]: time="2025-02-13T19:23:20.579183386Z" level=error msg="Failed to destroy network for sandbox \"9238bb01d809702cf71989d83c983b9c52956861a8e39fcabd6dd1ed7ee329dd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:20.580749 containerd[1472]: time="2025-02-13T19:23:20.580713949Z" level=error msg="encountered an error cleaning up failed sandbox \"9238bb01d809702cf71989d83c983b9c52956861a8e39fcabd6dd1ed7ee329dd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:20.581047 containerd[1472]: time="2025-02-13T19:23:20.580917630Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-8shn2,Uid:11ebe699-3307-4e28-ac6a-e555af8a982c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9238bb01d809702cf71989d83c983b9c52956861a8e39fcabd6dd1ed7ee329dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:20.581526 kubelet[2544]: E0213 19:23:20.581490 2544 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9238bb01d809702cf71989d83c983b9c52956861a8e39fcabd6dd1ed7ee329dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Feb 13 19:23:20.581643 kubelet[2544]: E0213 19:23:20.581545 2544 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9238bb01d809702cf71989d83c983b9c52956861a8e39fcabd6dd1ed7ee329dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-8shn2" Feb 13 19:23:20.581643 kubelet[2544]: E0213 19:23:20.581564 2544 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9238bb01d809702cf71989d83c983b9c52956861a8e39fcabd6dd1ed7ee329dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-8shn2" Feb 13 19:23:20.581771 kubelet[2544]: E0213 19:23:20.581645 2544 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-8shn2_kube-system(11ebe699-3307-4e28-ac6a-e555af8a982c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-8shn2_kube-system(11ebe699-3307-4e28-ac6a-e555af8a982c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9238bb01d809702cf71989d83c983b9c52956861a8e39fcabd6dd1ed7ee329dd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-8shn2" podUID="11ebe699-3307-4e28-ac6a-e555af8a982c" Feb 13 19:23:20.622318 containerd[1472]: time="2025-02-13T19:23:20.622266475Z" level=error msg="Failed to destroy network for sandbox 
\"dd11beb574d3fbe9971dfab17c87cd30120129847d48f855e440b73909b823f2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:20.623266 containerd[1472]: time="2025-02-13T19:23:20.622832197Z" level=error msg="encountered an error cleaning up failed sandbox \"dd11beb574d3fbe9971dfab17c87cd30120129847d48f855e440b73909b823f2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:20.623266 containerd[1472]: time="2025-02-13T19:23:20.622891117Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58dfd6696-g69sl,Uid:bd7b8e08-eba9-4ff4-a84a-26b9405284a6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dd11beb574d3fbe9971dfab17c87cd30120129847d48f855e440b73909b823f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:20.623920 kubelet[2544]: E0213 19:23:20.623522 2544 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd11beb574d3fbe9971dfab17c87cd30120129847d48f855e440b73909b823f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:20.623920 kubelet[2544]: E0213 19:23:20.623580 2544 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd11beb574d3fbe9971dfab17c87cd30120129847d48f855e440b73909b823f2\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-58dfd6696-g69sl" Feb 13 19:23:20.623920 kubelet[2544]: E0213 19:23:20.623612 2544 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd11beb574d3fbe9971dfab17c87cd30120129847d48f855e440b73909b823f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-58dfd6696-g69sl" Feb 13 19:23:20.624085 kubelet[2544]: E0213 19:23:20.623659 2544 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-58dfd6696-g69sl_calico-system(bd7b8e08-eba9-4ff4-a84a-26b9405284a6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-58dfd6696-g69sl_calico-system(bd7b8e08-eba9-4ff4-a84a-26b9405284a6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dd11beb574d3fbe9971dfab17c87cd30120129847d48f855e440b73909b823f2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-58dfd6696-g69sl" podUID="bd7b8e08-eba9-4ff4-a84a-26b9405284a6" Feb 13 19:23:20.624381 containerd[1472]: time="2025-02-13T19:23:20.624351280Z" level=error msg="Failed to destroy network for sandbox \"7a00c42bb521f7bef7a32da102cdf4d4a156bae2448cfacf8ccc9caa13e70634\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:20.624839 
containerd[1472]: time="2025-02-13T19:23:20.624809761Z" level=error msg="encountered an error cleaning up failed sandbox \"7a00c42bb521f7bef7a32da102cdf4d4a156bae2448cfacf8ccc9caa13e70634\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:20.624962 containerd[1472]: time="2025-02-13T19:23:20.624940001Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-slhww,Uid:beb760d9-f48b-4afe-876e-eb78778e0f0b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7a00c42bb521f7bef7a32da102cdf4d4a156bae2448cfacf8ccc9caa13e70634\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:20.625149 kubelet[2544]: E0213 19:23:20.625127 2544 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a00c42bb521f7bef7a32da102cdf4d4a156bae2448cfacf8ccc9caa13e70634\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:20.625265 kubelet[2544]: E0213 19:23:20.625245 2544 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a00c42bb521f7bef7a32da102cdf4d4a156bae2448cfacf8ccc9caa13e70634\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-slhww" Feb 13 19:23:20.625339 kubelet[2544]: E0213 19:23:20.625322 2544 kuberuntime_manager.go:1168] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a00c42bb521f7bef7a32da102cdf4d4a156bae2448cfacf8ccc9caa13e70634\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-slhww" Feb 13 19:23:20.625449 kubelet[2544]: E0213 19:23:20.625421 2544 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-slhww_kube-system(beb760d9-f48b-4afe-876e-eb78778e0f0b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-slhww_kube-system(beb760d9-f48b-4afe-876e-eb78778e0f0b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7a00c42bb521f7bef7a32da102cdf4d4a156bae2448cfacf8ccc9caa13e70634\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-slhww" podUID="beb760d9-f48b-4afe-876e-eb78778e0f0b" Feb 13 19:23:20.628483 containerd[1472]: time="2025-02-13T19:23:20.628439648Z" level=error msg="Failed to destroy network for sandbox \"9886a774b5ddfc176ce9f280222642978f1f162c12b106b23d0358bf9b5a6fe4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:20.628800 containerd[1472]: time="2025-02-13T19:23:20.628769009Z" level=error msg="encountered an error cleaning up failed sandbox \"9886a774b5ddfc176ce9f280222642978f1f162c12b106b23d0358bf9b5a6fe4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Feb 13 19:23:20.628854 containerd[1472]: time="2025-02-13T19:23:20.628820169Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b44d967f-p6w96,Uid:400c37f6-81a9-403c-9b2f-cc1d18ee97aa,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9886a774b5ddfc176ce9f280222642978f1f162c12b106b23d0358bf9b5a6fe4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:20.629417 kubelet[2544]: E0213 19:23:20.629165 2544 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9886a774b5ddfc176ce9f280222642978f1f162c12b106b23d0358bf9b5a6fe4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:20.629417 kubelet[2544]: E0213 19:23:20.629207 2544 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9886a774b5ddfc176ce9f280222642978f1f162c12b106b23d0358bf9b5a6fe4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b44d967f-p6w96" Feb 13 19:23:20.629417 kubelet[2544]: E0213 19:23:20.629224 2544 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9886a774b5ddfc176ce9f280222642978f1f162c12b106b23d0358bf9b5a6fe4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-5b44d967f-p6w96" Feb 13 19:23:20.629527 kubelet[2544]: E0213 19:23:20.629251 2544 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5b44d967f-p6w96_calico-apiserver(400c37f6-81a9-403c-9b2f-cc1d18ee97aa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5b44d967f-p6w96_calico-apiserver(400c37f6-81a9-403c-9b2f-cc1d18ee97aa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9886a774b5ddfc176ce9f280222642978f1f162c12b106b23d0358bf9b5a6fe4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b44d967f-p6w96" podUID="400c37f6-81a9-403c-9b2f-cc1d18ee97aa" Feb 13 19:23:20.630805 containerd[1472]: time="2025-02-13T19:23:20.630773293Z" level=error msg="Failed to destroy network for sandbox \"79cc9bd063882a3af3beda81b96a936ae1bf51ac45f2a1def3f15fdb84de30f2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:20.631475 containerd[1472]: time="2025-02-13T19:23:20.631429454Z" level=error msg="encountered an error cleaning up failed sandbox \"79cc9bd063882a3af3beda81b96a936ae1bf51ac45f2a1def3f15fdb84de30f2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:20.631514 containerd[1472]: time="2025-02-13T19:23:20.631488814Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b44d967f-hpx7w,Uid:90b2170d-a844-4e38-873e-8af01cba6fe0,Namespace:calico-apiserver,Attempt:0,} failed, error" 
error="failed to setup network for sandbox \"79cc9bd063882a3af3beda81b96a936ae1bf51ac45f2a1def3f15fdb84de30f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:20.631724 kubelet[2544]: E0213 19:23:20.631695 2544 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79cc9bd063882a3af3beda81b96a936ae1bf51ac45f2a1def3f15fdb84de30f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:20.631773 kubelet[2544]: E0213 19:23:20.631734 2544 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79cc9bd063882a3af3beda81b96a936ae1bf51ac45f2a1def3f15fdb84de30f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b44d967f-hpx7w" Feb 13 19:23:20.631773 kubelet[2544]: E0213 19:23:20.631749 2544 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79cc9bd063882a3af3beda81b96a936ae1bf51ac45f2a1def3f15fdb84de30f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b44d967f-hpx7w" Feb 13 19:23:20.631822 kubelet[2544]: E0213 19:23:20.631787 2544 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5b44d967f-hpx7w_calico-apiserver(90b2170d-a844-4e38-873e-8af01cba6fe0)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5b44d967f-hpx7w_calico-apiserver(90b2170d-a844-4e38-873e-8af01cba6fe0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"79cc9bd063882a3af3beda81b96a936ae1bf51ac45f2a1def3f15fdb84de30f2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b44d967f-hpx7w" podUID="90b2170d-a844-4e38-873e-8af01cba6fe0" Feb 13 19:23:21.328654 kubelet[2544]: I0213 19:23:21.328620 2544 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50fc7b5a1212a7f5c0f58d01289dd6423491fe5b48f97ad072a542ba7bf3a842" Feb 13 19:23:21.330659 containerd[1472]: time="2025-02-13T19:23:21.329382418Z" level=info msg="StopPodSandbox for \"50fc7b5a1212a7f5c0f58d01289dd6423491fe5b48f97ad072a542ba7bf3a842\"" Feb 13 19:23:21.330659 containerd[1472]: time="2025-02-13T19:23:21.330068539Z" level=info msg="StopPodSandbox for \"dd11beb574d3fbe9971dfab17c87cd30120129847d48f855e440b73909b823f2\"" Feb 13 19:23:21.330919 kubelet[2544]: I0213 19:23:21.329620 2544 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd11beb574d3fbe9971dfab17c87cd30120129847d48f855e440b73909b823f2" Feb 13 19:23:21.335502 containerd[1472]: time="2025-02-13T19:23:21.334243027Z" level=info msg="Ensure that sandbox dd11beb574d3fbe9971dfab17c87cd30120129847d48f855e440b73909b823f2 in task-service has been cleanup successfully" Feb 13 19:23:21.335677 containerd[1472]: time="2025-02-13T19:23:21.335642590Z" level=info msg="TearDown network for sandbox \"dd11beb574d3fbe9971dfab17c87cd30120129847d48f855e440b73909b823f2\" successfully" Feb 13 19:23:21.335781 containerd[1472]: time="2025-02-13T19:23:21.335756190Z" level=info msg="StopPodSandbox for \"dd11beb574d3fbe9971dfab17c87cd30120129847d48f855e440b73909b823f2\" 
returns successfully" Feb 13 19:23:21.336880 containerd[1472]: time="2025-02-13T19:23:21.335671150Z" level=info msg="Ensure that sandbox 50fc7b5a1212a7f5c0f58d01289dd6423491fe5b48f97ad072a542ba7bf3a842 in task-service has been cleanup successfully" Feb 13 19:23:21.337023 containerd[1472]: time="2025-02-13T19:23:21.337004592Z" level=info msg="TearDown network for sandbox \"50fc7b5a1212a7f5c0f58d01289dd6423491fe5b48f97ad072a542ba7bf3a842\" successfully" Feb 13 19:23:21.337166 containerd[1472]: time="2025-02-13T19:23:21.337147713Z" level=info msg="StopPodSandbox for \"50fc7b5a1212a7f5c0f58d01289dd6423491fe5b48f97ad072a542ba7bf3a842\" returns successfully" Feb 13 19:23:21.344202 containerd[1472]: time="2025-02-13T19:23:21.344083006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58dfd6696-g69sl,Uid:bd7b8e08-eba9-4ff4-a84a-26b9405284a6,Namespace:calico-system,Attempt:1,}" Feb 13 19:23:21.344331 containerd[1472]: time="2025-02-13T19:23:21.344236166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dtfv4,Uid:278e43f1-bd8c-4a43-8396-436ddaca249b,Namespace:calico-system,Attempt:1,}" Feb 13 19:23:21.345299 kubelet[2544]: I0213 19:23:21.344679 2544 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a00c42bb521f7bef7a32da102cdf4d4a156bae2448cfacf8ccc9caa13e70634" Feb 13 19:23:21.345493 containerd[1472]: time="2025-02-13T19:23:21.345279168Z" level=info msg="StopPodSandbox for \"7a00c42bb521f7bef7a32da102cdf4d4a156bae2448cfacf8ccc9caa13e70634\"" Feb 13 19:23:21.345676 containerd[1472]: time="2025-02-13T19:23:21.345642249Z" level=info msg="Ensure that sandbox 7a00c42bb521f7bef7a32da102cdf4d4a156bae2448cfacf8ccc9caa13e70634 in task-service has been cleanup successfully" Feb 13 19:23:21.345834 containerd[1472]: time="2025-02-13T19:23:21.345809769Z" level=info msg="TearDown network for sandbox \"7a00c42bb521f7bef7a32da102cdf4d4a156bae2448cfacf8ccc9caa13e70634\" successfully" Feb 13 
19:23:21.345834 containerd[1472]: time="2025-02-13T19:23:21.345828489Z" level=info msg="StopPodSandbox for \"7a00c42bb521f7bef7a32da102cdf4d4a156bae2448cfacf8ccc9caa13e70634\" returns successfully" Feb 13 19:23:21.346405 kubelet[2544]: E0213 19:23:21.346352 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:23:21.347352 containerd[1472]: time="2025-02-13T19:23:21.347316252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-slhww,Uid:beb760d9-f48b-4afe-876e-eb78778e0f0b,Namespace:kube-system,Attempt:1,}" Feb 13 19:23:21.348072 kubelet[2544]: I0213 19:23:21.347470 2544 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9238bb01d809702cf71989d83c983b9c52956861a8e39fcabd6dd1ed7ee329dd" Feb 13 19:23:21.348134 containerd[1472]: time="2025-02-13T19:23:21.348041734Z" level=info msg="StopPodSandbox for \"9238bb01d809702cf71989d83c983b9c52956861a8e39fcabd6dd1ed7ee329dd\"" Feb 13 19:23:21.348808 containerd[1472]: time="2025-02-13T19:23:21.348778735Z" level=info msg="Ensure that sandbox 9238bb01d809702cf71989d83c983b9c52956861a8e39fcabd6dd1ed7ee329dd in task-service has been cleanup successfully" Feb 13 19:23:21.349662 containerd[1472]: time="2025-02-13T19:23:21.349634257Z" level=info msg="TearDown network for sandbox \"9238bb01d809702cf71989d83c983b9c52956861a8e39fcabd6dd1ed7ee329dd\" successfully" Feb 13 19:23:21.349662 containerd[1472]: time="2025-02-13T19:23:21.349657977Z" level=info msg="StopPodSandbox for \"9238bb01d809702cf71989d83c983b9c52956861a8e39fcabd6dd1ed7ee329dd\" returns successfully" Feb 13 19:23:21.349951 kubelet[2544]: E0213 19:23:21.349930 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:23:21.350109 kubelet[2544]: I0213 
19:23:21.350086 2544 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79cc9bd063882a3af3beda81b96a936ae1bf51ac45f2a1def3f15fdb84de30f2" Feb 13 19:23:21.350579 containerd[1472]: time="2025-02-13T19:23:21.350481858Z" level=info msg="StopPodSandbox for \"79cc9bd063882a3af3beda81b96a936ae1bf51ac45f2a1def3f15fdb84de30f2\"" Feb 13 19:23:21.351002 containerd[1472]: time="2025-02-13T19:23:21.350721259Z" level=info msg="Ensure that sandbox 79cc9bd063882a3af3beda81b96a936ae1bf51ac45f2a1def3f15fdb84de30f2 in task-service has been cleanup successfully" Feb 13 19:23:21.351088 containerd[1472]: time="2025-02-13T19:23:21.351061060Z" level=info msg="TearDown network for sandbox \"79cc9bd063882a3af3beda81b96a936ae1bf51ac45f2a1def3f15fdb84de30f2\" successfully" Feb 13 19:23:21.351088 containerd[1472]: time="2025-02-13T19:23:21.351079180Z" level=info msg="StopPodSandbox for \"79cc9bd063882a3af3beda81b96a936ae1bf51ac45f2a1def3f15fdb84de30f2\" returns successfully" Feb 13 19:23:21.351528 containerd[1472]: time="2025-02-13T19:23:21.351499220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-8shn2,Uid:11ebe699-3307-4e28-ac6a-e555af8a982c,Namespace:kube-system,Attempt:1,}" Feb 13 19:23:21.351947 containerd[1472]: time="2025-02-13T19:23:21.351827021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b44d967f-hpx7w,Uid:90b2170d-a844-4e38-873e-8af01cba6fe0,Namespace:calico-apiserver,Attempt:1,}" Feb 13 19:23:21.352535 kubelet[2544]: I0213 19:23:21.352512 2544 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9886a774b5ddfc176ce9f280222642978f1f162c12b106b23d0358bf9b5a6fe4" Feb 13 19:23:21.353722 containerd[1472]: time="2025-02-13T19:23:21.353567744Z" level=info msg="StopPodSandbox for \"9886a774b5ddfc176ce9f280222642978f1f162c12b106b23d0358bf9b5a6fe4\"" Feb 13 19:23:21.353808 containerd[1472]: time="2025-02-13T19:23:21.353788145Z" level=info msg="Ensure that sandbox 
9886a774b5ddfc176ce9f280222642978f1f162c12b106b23d0358bf9b5a6fe4 in task-service has been cleanup successfully" Feb 13 19:23:21.354024 containerd[1472]: time="2025-02-13T19:23:21.354001065Z" level=info msg="TearDown network for sandbox \"9886a774b5ddfc176ce9f280222642978f1f162c12b106b23d0358bf9b5a6fe4\" successfully" Feb 13 19:23:21.354140 containerd[1472]: time="2025-02-13T19:23:21.354119666Z" level=info msg="StopPodSandbox for \"9886a774b5ddfc176ce9f280222642978f1f162c12b106b23d0358bf9b5a6fe4\" returns successfully" Feb 13 19:23:21.354903 containerd[1472]: time="2025-02-13T19:23:21.354856867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b44d967f-p6w96,Uid:400c37f6-81a9-403c-9b2f-cc1d18ee97aa,Namespace:calico-apiserver,Attempt:1,}" Feb 13 19:23:21.469536 containerd[1472]: time="2025-02-13T19:23:21.469471810Z" level=error msg="Failed to destroy network for sandbox \"9c587b1c1873a4da0b2c59ad781386932a84f435d9277e390c1c647867dcf9c6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:21.470693 containerd[1472]: time="2025-02-13T19:23:21.470376691Z" level=error msg="encountered an error cleaning up failed sandbox \"9c587b1c1873a4da0b2c59ad781386932a84f435d9277e390c1c647867dcf9c6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:21.470693 containerd[1472]: time="2025-02-13T19:23:21.470448811Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58dfd6696-g69sl,Uid:bd7b8e08-eba9-4ff4-a84a-26b9405284a6,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"9c587b1c1873a4da0b2c59ad781386932a84f435d9277e390c1c647867dcf9c6\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:21.470841 kubelet[2544]: E0213 19:23:21.470660 2544 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c587b1c1873a4da0b2c59ad781386932a84f435d9277e390c1c647867dcf9c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:21.470841 kubelet[2544]: E0213 19:23:21.470717 2544 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c587b1c1873a4da0b2c59ad781386932a84f435d9277e390c1c647867dcf9c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-58dfd6696-g69sl" Feb 13 19:23:21.470841 kubelet[2544]: E0213 19:23:21.470737 2544 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c587b1c1873a4da0b2c59ad781386932a84f435d9277e390c1c647867dcf9c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-58dfd6696-g69sl" Feb 13 19:23:21.470922 kubelet[2544]: E0213 19:23:21.470774 2544 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-58dfd6696-g69sl_calico-system(bd7b8e08-eba9-4ff4-a84a-26b9405284a6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-58dfd6696-g69sl_calico-system(bd7b8e08-eba9-4ff4-a84a-26b9405284a6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9c587b1c1873a4da0b2c59ad781386932a84f435d9277e390c1c647867dcf9c6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-58dfd6696-g69sl" podUID="bd7b8e08-eba9-4ff4-a84a-26b9405284a6" Feb 13 19:23:21.471336 containerd[1472]: time="2025-02-13T19:23:21.471306173Z" level=error msg="Failed to destroy network for sandbox \"096da91385a3c8839c09bc5456755139dacce2271f80a0a43253c557ac7e758d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:21.471865 containerd[1472]: time="2025-02-13T19:23:21.471825254Z" level=error msg="encountered an error cleaning up failed sandbox \"096da91385a3c8839c09bc5456755139dacce2271f80a0a43253c557ac7e758d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:21.471997 containerd[1472]: time="2025-02-13T19:23:21.471975134Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-slhww,Uid:beb760d9-f48b-4afe-876e-eb78778e0f0b,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"096da91385a3c8839c09bc5456755139dacce2271f80a0a43253c557ac7e758d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:21.473959 kubelet[2544]: E0213 19:23:21.473654 2544 log.go:32] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"096da91385a3c8839c09bc5456755139dacce2271f80a0a43253c557ac7e758d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:21.473959 kubelet[2544]: E0213 19:23:21.473753 2544 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"096da91385a3c8839c09bc5456755139dacce2271f80a0a43253c557ac7e758d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-slhww" Feb 13 19:23:21.473959 kubelet[2544]: E0213 19:23:21.473773 2544 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"096da91385a3c8839c09bc5456755139dacce2271f80a0a43253c557ac7e758d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-slhww" Feb 13 19:23:21.474217 kubelet[2544]: E0213 19:23:21.473839 2544 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-slhww_kube-system(beb760d9-f48b-4afe-876e-eb78778e0f0b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-slhww_kube-system(beb760d9-f48b-4afe-876e-eb78778e0f0b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"096da91385a3c8839c09bc5456755139dacce2271f80a0a43253c557ac7e758d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-slhww" podUID="beb760d9-f48b-4afe-876e-eb78778e0f0b" Feb 13 19:23:21.490870 systemd[1]: run-netns-cni\x2d6f613582\x2d28bf\x2d33d7\x2d7bfc\x2dce7bdc442a37.mount: Deactivated successfully. Feb 13 19:23:21.490967 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dd11beb574d3fbe9971dfab17c87cd30120129847d48f855e440b73909b823f2-shm.mount: Deactivated successfully. Feb 13 19:23:21.491022 systemd[1]: run-netns-cni\x2d63240af7\x2d3836\x2dc595\x2df2ea\x2db0c0268d604a.mount: Deactivated successfully. Feb 13 19:23:21.491066 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7a00c42bb521f7bef7a32da102cdf4d4a156bae2448cfacf8ccc9caa13e70634-shm.mount: Deactivated successfully. Feb 13 19:23:21.491115 systemd[1]: run-netns-cni\x2decef11da\x2d1e91\x2d9a33\x2d6d54\x2dbc18da08d3cd.mount: Deactivated successfully. Feb 13 19:23:21.491162 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9238bb01d809702cf71989d83c983b9c52956861a8e39fcabd6dd1ed7ee329dd-shm.mount: Deactivated successfully. Feb 13 19:23:21.491208 systemd[1]: run-netns-cni\x2d992cae3b\x2d0766\x2d2c6d\x2d2e2e\x2d26a1c25da577.mount: Deactivated successfully. 
Feb 13 19:23:21.507212 containerd[1472]: time="2025-02-13T19:23:21.506831162Z" level=error msg="Failed to destroy network for sandbox \"b9dc128c66258cebc145fb5927188ffde0a7a2ce1077cde74816d4066ba9c705\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:21.508985 containerd[1472]: time="2025-02-13T19:23:21.508934566Z" level=error msg="encountered an error cleaning up failed sandbox \"b9dc128c66258cebc145fb5927188ffde0a7a2ce1077cde74816d4066ba9c705\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:21.509145 containerd[1472]: time="2025-02-13T19:23:21.509007726Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dtfv4,Uid:278e43f1-bd8c-4a43-8396-436ddaca249b,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"b9dc128c66258cebc145fb5927188ffde0a7a2ce1077cde74816d4066ba9c705\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:21.509243 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b9dc128c66258cebc145fb5927188ffde0a7a2ce1077cde74816d4066ba9c705-shm.mount: Deactivated successfully. 
Feb 13 19:23:21.510012 kubelet[2544]: E0213 19:23:21.509936 2544 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9dc128c66258cebc145fb5927188ffde0a7a2ce1077cde74816d4066ba9c705\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:21.510012 kubelet[2544]: E0213 19:23:21.509992 2544 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9dc128c66258cebc145fb5927188ffde0a7a2ce1077cde74816d4066ba9c705\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dtfv4" Feb 13 19:23:21.510012 kubelet[2544]: E0213 19:23:21.510011 2544 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9dc128c66258cebc145fb5927188ffde0a7a2ce1077cde74816d4066ba9c705\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dtfv4" Feb 13 19:23:21.510224 kubelet[2544]: E0213 19:23:21.510049 2544 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dtfv4_calico-system(278e43f1-bd8c-4a43-8396-436ddaca249b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dtfv4_calico-system(278e43f1-bd8c-4a43-8396-436ddaca249b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b9dc128c66258cebc145fb5927188ffde0a7a2ce1077cde74816d4066ba9c705\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dtfv4" podUID="278e43f1-bd8c-4a43-8396-436ddaca249b" Feb 13 19:23:21.513414 containerd[1472]: time="2025-02-13T19:23:21.513356295Z" level=error msg="Failed to destroy network for sandbox \"a4926452b9909d806af29b6189b194ae88d64402549efa6a8aa8157676ffeab6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:21.514893 containerd[1472]: time="2025-02-13T19:23:21.514509817Z" level=error msg="encountered an error cleaning up failed sandbox \"a4926452b9909d806af29b6189b194ae88d64402549efa6a8aa8157676ffeab6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:21.515103 containerd[1472]: time="2025-02-13T19:23:21.514930498Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b44d967f-p6w96,Uid:400c37f6-81a9-403c-9b2f-cc1d18ee97aa,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"a4926452b9909d806af29b6189b194ae88d64402549efa6a8aa8157676ffeab6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:21.515801 kubelet[2544]: E0213 19:23:21.515265 2544 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4926452b9909d806af29b6189b194ae88d64402549efa6a8aa8157676ffeab6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:21.515801 kubelet[2544]: E0213 19:23:21.515325 2544 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4926452b9909d806af29b6189b194ae88d64402549efa6a8aa8157676ffeab6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b44d967f-p6w96" Feb 13 19:23:21.515801 kubelet[2544]: E0213 19:23:21.515344 2544 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4926452b9909d806af29b6189b194ae88d64402549efa6a8aa8157676ffeab6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b44d967f-p6w96" Feb 13 19:23:21.515461 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a4926452b9909d806af29b6189b194ae88d64402549efa6a8aa8157676ffeab6-shm.mount: Deactivated successfully. 
Feb 13 19:23:21.515960 kubelet[2544]: E0213 19:23:21.515380 2544 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5b44d967f-p6w96_calico-apiserver(400c37f6-81a9-403c-9b2f-cc1d18ee97aa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5b44d967f-p6w96_calico-apiserver(400c37f6-81a9-403c-9b2f-cc1d18ee97aa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a4926452b9909d806af29b6189b194ae88d64402549efa6a8aa8157676ffeab6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b44d967f-p6w96" podUID="400c37f6-81a9-403c-9b2f-cc1d18ee97aa" Feb 13 19:23:21.529130 containerd[1472]: time="2025-02-13T19:23:21.529080405Z" level=error msg="Failed to destroy network for sandbox \"f08c19b9a372aabce29703f12b709d1b46d9e7576abfa63f9ca341a1ee693428\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:21.529553 containerd[1472]: time="2025-02-13T19:23:21.529522726Z" level=error msg="encountered an error cleaning up failed sandbox \"f08c19b9a372aabce29703f12b709d1b46d9e7576abfa63f9ca341a1ee693428\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:21.529688 containerd[1472]: time="2025-02-13T19:23:21.529585726Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b44d967f-hpx7w,Uid:90b2170d-a844-4e38-873e-8af01cba6fe0,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox 
\"f08c19b9a372aabce29703f12b709d1b46d9e7576abfa63f9ca341a1ee693428\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:21.531757 kubelet[2544]: E0213 19:23:21.529867 2544 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f08c19b9a372aabce29703f12b709d1b46d9e7576abfa63f9ca341a1ee693428\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:21.531757 kubelet[2544]: E0213 19:23:21.529928 2544 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f08c19b9a372aabce29703f12b709d1b46d9e7576abfa63f9ca341a1ee693428\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b44d967f-hpx7w" Feb 13 19:23:21.531757 kubelet[2544]: E0213 19:23:21.530005 2544 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f08c19b9a372aabce29703f12b709d1b46d9e7576abfa63f9ca341a1ee693428\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b44d967f-hpx7w" Feb 13 19:23:21.530991 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f08c19b9a372aabce29703f12b709d1b46d9e7576abfa63f9ca341a1ee693428-shm.mount: Deactivated successfully. 
Feb 13 19:23:21.532093 kubelet[2544]: E0213 19:23:21.530147 2544 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5b44d967f-hpx7w_calico-apiserver(90b2170d-a844-4e38-873e-8af01cba6fe0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5b44d967f-hpx7w_calico-apiserver(90b2170d-a844-4e38-873e-8af01cba6fe0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f08c19b9a372aabce29703f12b709d1b46d9e7576abfa63f9ca341a1ee693428\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b44d967f-hpx7w" podUID="90b2170d-a844-4e38-873e-8af01cba6fe0" Feb 13 19:23:21.536853 containerd[1472]: time="2025-02-13T19:23:21.536819780Z" level=error msg="Failed to destroy network for sandbox \"1a65ecae1d3e89163ada04189f06c243b3b5a17116154cd6cbede404e3524543\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:21.537246 containerd[1472]: time="2025-02-13T19:23:21.537217341Z" level=error msg="encountered an error cleaning up failed sandbox \"1a65ecae1d3e89163ada04189f06c243b3b5a17116154cd6cbede404e3524543\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:21.537388 containerd[1472]: time="2025-02-13T19:23:21.537365261Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-8shn2,Uid:11ebe699-3307-4e28-ac6a-e555af8a982c,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox 
\"1a65ecae1d3e89163ada04189f06c243b3b5a17116154cd6cbede404e3524543\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:21.537719 kubelet[2544]: E0213 19:23:21.537687 2544 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a65ecae1d3e89163ada04189f06c243b3b5a17116154cd6cbede404e3524543\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:21.537882 kubelet[2544]: E0213 19:23:21.537737 2544 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a65ecae1d3e89163ada04189f06c243b3b5a17116154cd6cbede404e3524543\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-8shn2" Feb 13 19:23:21.537882 kubelet[2544]: E0213 19:23:21.537758 2544 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a65ecae1d3e89163ada04189f06c243b3b5a17116154cd6cbede404e3524543\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-8shn2" Feb 13 19:23:21.537882 kubelet[2544]: E0213 19:23:21.537801 2544 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-8shn2_kube-system(11ebe699-3307-4e28-ac6a-e555af8a982c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-6f6b679f8f-8shn2_kube-system(11ebe699-3307-4e28-ac6a-e555af8a982c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1a65ecae1d3e89163ada04189f06c243b3b5a17116154cd6cbede404e3524543\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-8shn2" podUID="11ebe699-3307-4e28-ac6a-e555af8a982c" Feb 13 19:23:21.539722 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1a65ecae1d3e89163ada04189f06c243b3b5a17116154cd6cbede404e3524543-shm.mount: Deactivated successfully. Feb 13 19:23:22.356132 kubelet[2544]: I0213 19:23:22.355987 2544 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f08c19b9a372aabce29703f12b709d1b46d9e7576abfa63f9ca341a1ee693428" Feb 13 19:23:22.356845 containerd[1472]: time="2025-02-13T19:23:22.356513129Z" level=info msg="StopPodSandbox for \"f08c19b9a372aabce29703f12b709d1b46d9e7576abfa63f9ca341a1ee693428\"" Feb 13 19:23:22.357014 containerd[1472]: time="2025-02-13T19:23:22.356989850Z" level=info msg="Ensure that sandbox f08c19b9a372aabce29703f12b709d1b46d9e7576abfa63f9ca341a1ee693428 in task-service has been cleanup successfully" Feb 13 19:23:22.357275 containerd[1472]: time="2025-02-13T19:23:22.357236850Z" level=info msg="TearDown network for sandbox \"f08c19b9a372aabce29703f12b709d1b46d9e7576abfa63f9ca341a1ee693428\" successfully" Feb 13 19:23:22.357275 containerd[1472]: time="2025-02-13T19:23:22.357255410Z" level=info msg="StopPodSandbox for \"f08c19b9a372aabce29703f12b709d1b46d9e7576abfa63f9ca341a1ee693428\" returns successfully" Feb 13 19:23:22.357846 containerd[1472]: time="2025-02-13T19:23:22.357729731Z" level=info msg="StopPodSandbox for \"79cc9bd063882a3af3beda81b96a936ae1bf51ac45f2a1def3f15fdb84de30f2\"" Feb 13 19:23:22.358171 containerd[1472]: time="2025-02-13T19:23:22.358049011Z" level=info 
msg="TearDown network for sandbox \"79cc9bd063882a3af3beda81b96a936ae1bf51ac45f2a1def3f15fdb84de30f2\" successfully" Feb 13 19:23:22.358171 containerd[1472]: time="2025-02-13T19:23:22.358071092Z" level=info msg="StopPodSandbox for \"79cc9bd063882a3af3beda81b96a936ae1bf51ac45f2a1def3f15fdb84de30f2\" returns successfully" Feb 13 19:23:22.358439 kubelet[2544]: I0213 19:23:22.358389 2544 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4926452b9909d806af29b6189b194ae88d64402549efa6a8aa8157676ffeab6" Feb 13 19:23:22.359377 containerd[1472]: time="2025-02-13T19:23:22.359334734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b44d967f-hpx7w,Uid:90b2170d-a844-4e38-873e-8af01cba6fe0,Namespace:calico-apiserver,Attempt:2,}" Feb 13 19:23:22.359838 containerd[1472]: time="2025-02-13T19:23:22.359763375Z" level=info msg="StopPodSandbox for \"a4926452b9909d806af29b6189b194ae88d64402549efa6a8aa8157676ffeab6\"" Feb 13 19:23:22.360096 containerd[1472]: time="2025-02-13T19:23:22.360057535Z" level=info msg="Ensure that sandbox a4926452b9909d806af29b6189b194ae88d64402549efa6a8aa8157676ffeab6 in task-service has been cleanup successfully" Feb 13 19:23:22.361073 containerd[1472]: time="2025-02-13T19:23:22.360297776Z" level=info msg="TearDown network for sandbox \"a4926452b9909d806af29b6189b194ae88d64402549efa6a8aa8157676ffeab6\" successfully" Feb 13 19:23:22.361073 containerd[1472]: time="2025-02-13T19:23:22.360317496Z" level=info msg="StopPodSandbox for \"a4926452b9909d806af29b6189b194ae88d64402549efa6a8aa8157676ffeab6\" returns successfully" Feb 13 19:23:22.361073 containerd[1472]: time="2025-02-13T19:23:22.360888937Z" level=info msg="StopPodSandbox for \"9886a774b5ddfc176ce9f280222642978f1f162c12b106b23d0358bf9b5a6fe4\"" Feb 13 19:23:22.361073 containerd[1472]: time="2025-02-13T19:23:22.360971617Z" level=info msg="TearDown network for sandbox \"9886a774b5ddfc176ce9f280222642978f1f162c12b106b23d0358bf9b5a6fe4\" 
successfully" Feb 13 19:23:22.361073 containerd[1472]: time="2025-02-13T19:23:22.360980897Z" level=info msg="StopPodSandbox for \"9886a774b5ddfc176ce9f280222642978f1f162c12b106b23d0358bf9b5a6fe4\" returns successfully" Feb 13 19:23:22.361491 containerd[1472]: time="2025-02-13T19:23:22.361460618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b44d967f-p6w96,Uid:400c37f6-81a9-403c-9b2f-cc1d18ee97aa,Namespace:calico-apiserver,Attempt:2,}" Feb 13 19:23:22.361728 kubelet[2544]: I0213 19:23:22.361705 2544 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9dc128c66258cebc145fb5927188ffde0a7a2ce1077cde74816d4066ba9c705" Feb 13 19:23:22.362766 containerd[1472]: time="2025-02-13T19:23:22.362734700Z" level=info msg="StopPodSandbox for \"b9dc128c66258cebc145fb5927188ffde0a7a2ce1077cde74816d4066ba9c705\"" Feb 13 19:23:22.362898 containerd[1472]: time="2025-02-13T19:23:22.362876260Z" level=info msg="Ensure that sandbox b9dc128c66258cebc145fb5927188ffde0a7a2ce1077cde74816d4066ba9c705 in task-service has been cleanup successfully" Feb 13 19:23:22.363492 containerd[1472]: time="2025-02-13T19:23:22.363418021Z" level=info msg="TearDown network for sandbox \"b9dc128c66258cebc145fb5927188ffde0a7a2ce1077cde74816d4066ba9c705\" successfully" Feb 13 19:23:22.363492 containerd[1472]: time="2025-02-13T19:23:22.363472021Z" level=info msg="StopPodSandbox for \"b9dc128c66258cebc145fb5927188ffde0a7a2ce1077cde74816d4066ba9c705\" returns successfully" Feb 13 19:23:22.365126 containerd[1472]: time="2025-02-13T19:23:22.364809624Z" level=info msg="StopPodSandbox for \"50fc7b5a1212a7f5c0f58d01289dd6423491fe5b48f97ad072a542ba7bf3a842\"" Feb 13 19:23:22.365255 containerd[1472]: time="2025-02-13T19:23:22.365200024Z" level=info msg="TearDown network for sandbox \"50fc7b5a1212a7f5c0f58d01289dd6423491fe5b48f97ad072a542ba7bf3a842\" successfully" Feb 13 19:23:22.365255 containerd[1472]: time="2025-02-13T19:23:22.365216265Z" level=info 
msg="StopPodSandbox for \"50fc7b5a1212a7f5c0f58d01289dd6423491fe5b48f97ad072a542ba7bf3a842\" returns successfully" Feb 13 19:23:22.365828 kubelet[2544]: I0213 19:23:22.365630 2544 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="096da91385a3c8839c09bc5456755139dacce2271f80a0a43253c557ac7e758d" Feb 13 19:23:22.366201 containerd[1472]: time="2025-02-13T19:23:22.366173906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dtfv4,Uid:278e43f1-bd8c-4a43-8396-436ddaca249b,Namespace:calico-system,Attempt:2,}" Feb 13 19:23:22.367337 containerd[1472]: time="2025-02-13T19:23:22.366913468Z" level=info msg="StopPodSandbox for \"096da91385a3c8839c09bc5456755139dacce2271f80a0a43253c557ac7e758d\"" Feb 13 19:23:22.368790 containerd[1472]: time="2025-02-13T19:23:22.368700311Z" level=info msg="Ensure that sandbox 096da91385a3c8839c09bc5456755139dacce2271f80a0a43253c557ac7e758d in task-service has been cleanup successfully" Feb 13 19:23:22.369271 containerd[1472]: time="2025-02-13T19:23:22.369246112Z" level=info msg="TearDown network for sandbox \"096da91385a3c8839c09bc5456755139dacce2271f80a0a43253c557ac7e758d\" successfully" Feb 13 19:23:22.369844 containerd[1472]: time="2025-02-13T19:23:22.369819073Z" level=info msg="StopPodSandbox for \"096da91385a3c8839c09bc5456755139dacce2271f80a0a43253c557ac7e758d\" returns successfully" Feb 13 19:23:22.371283 containerd[1472]: time="2025-02-13T19:23:22.371238515Z" level=info msg="StopPodSandbox for \"7a00c42bb521f7bef7a32da102cdf4d4a156bae2448cfacf8ccc9caa13e70634\"" Feb 13 19:23:22.372331 containerd[1472]: time="2025-02-13T19:23:22.371483996Z" level=info msg="TearDown network for sandbox \"7a00c42bb521f7bef7a32da102cdf4d4a156bae2448cfacf8ccc9caa13e70634\" successfully" Feb 13 19:23:22.372331 containerd[1472]: time="2025-02-13T19:23:22.371499956Z" level=info msg="StopPodSandbox for \"7a00c42bb521f7bef7a32da102cdf4d4a156bae2448cfacf8ccc9caa13e70634\" returns successfully" Feb 13 
19:23:22.372762 kubelet[2544]: E0213 19:23:22.372082 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:23:22.372806 containerd[1472]: time="2025-02-13T19:23:22.372504038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-slhww,Uid:beb760d9-f48b-4afe-876e-eb78778e0f0b,Namespace:kube-system,Attempt:2,}" Feb 13 19:23:22.373141 kubelet[2544]: I0213 19:23:22.373092 2544 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a65ecae1d3e89163ada04189f06c243b3b5a17116154cd6cbede404e3524543" Feb 13 19:23:22.373991 containerd[1472]: time="2025-02-13T19:23:22.373929960Z" level=info msg="StopPodSandbox for \"1a65ecae1d3e89163ada04189f06c243b3b5a17116154cd6cbede404e3524543\"" Feb 13 19:23:22.375245 containerd[1472]: time="2025-02-13T19:23:22.375197083Z" level=info msg="Ensure that sandbox 1a65ecae1d3e89163ada04189f06c243b3b5a17116154cd6cbede404e3524543 in task-service has been cleanup successfully" Feb 13 19:23:22.376233 containerd[1472]: time="2025-02-13T19:23:22.376206845Z" level=info msg="TearDown network for sandbox \"1a65ecae1d3e89163ada04189f06c243b3b5a17116154cd6cbede404e3524543\" successfully" Feb 13 19:23:22.376233 containerd[1472]: time="2025-02-13T19:23:22.376227765Z" level=info msg="StopPodSandbox for \"1a65ecae1d3e89163ada04189f06c243b3b5a17116154cd6cbede404e3524543\" returns successfully" Feb 13 19:23:22.376514 containerd[1472]: time="2025-02-13T19:23:22.376487125Z" level=info msg="StopPodSandbox for \"9238bb01d809702cf71989d83c983b9c52956861a8e39fcabd6dd1ed7ee329dd\"" Feb 13 19:23:22.377061 containerd[1472]: time="2025-02-13T19:23:22.377031566Z" level=info msg="TearDown network for sandbox \"9238bb01d809702cf71989d83c983b9c52956861a8e39fcabd6dd1ed7ee329dd\" successfully" Feb 13 19:23:22.377061 containerd[1472]: time="2025-02-13T19:23:22.377057806Z" level=info 
msg="StopPodSandbox for \"9238bb01d809702cf71989d83c983b9c52956861a8e39fcabd6dd1ed7ee329dd\" returns successfully" Feb 13 19:23:22.377534 kubelet[2544]: E0213 19:23:22.377413 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:23:22.378152 containerd[1472]: time="2025-02-13T19:23:22.377962728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-8shn2,Uid:11ebe699-3307-4e28-ac6a-e555af8a982c,Namespace:kube-system,Attempt:2,}" Feb 13 19:23:22.381761 kubelet[2544]: I0213 19:23:22.381739 2544 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c587b1c1873a4da0b2c59ad781386932a84f435d9277e390c1c647867dcf9c6" Feb 13 19:23:22.384259 containerd[1472]: time="2025-02-13T19:23:22.384230899Z" level=info msg="StopPodSandbox for \"9c587b1c1873a4da0b2c59ad781386932a84f435d9277e390c1c647867dcf9c6\"" Feb 13 19:23:22.384884 containerd[1472]: time="2025-02-13T19:23:22.384564220Z" level=info msg="Ensure that sandbox 9c587b1c1873a4da0b2c59ad781386932a84f435d9277e390c1c647867dcf9c6 in task-service has been cleanup successfully" Feb 13 19:23:22.395096 containerd[1472]: time="2025-02-13T19:23:22.394924639Z" level=info msg="TearDown network for sandbox \"9c587b1c1873a4da0b2c59ad781386932a84f435d9277e390c1c647867dcf9c6\" successfully" Feb 13 19:23:22.395096 containerd[1472]: time="2025-02-13T19:23:22.394957679Z" level=info msg="StopPodSandbox for \"9c587b1c1873a4da0b2c59ad781386932a84f435d9277e390c1c647867dcf9c6\" returns successfully" Feb 13 19:23:22.395608 containerd[1472]: time="2025-02-13T19:23:22.395571120Z" level=info msg="StopPodSandbox for \"dd11beb574d3fbe9971dfab17c87cd30120129847d48f855e440b73909b823f2\"" Feb 13 19:23:22.395760 containerd[1472]: time="2025-02-13T19:23:22.395694840Z" level=info msg="TearDown network for sandbox 
\"dd11beb574d3fbe9971dfab17c87cd30120129847d48f855e440b73909b823f2\" successfully" Feb 13 19:23:22.395760 containerd[1472]: time="2025-02-13T19:23:22.395709800Z" level=info msg="StopPodSandbox for \"dd11beb574d3fbe9971dfab17c87cd30120129847d48f855e440b73909b823f2\" returns successfully" Feb 13 19:23:22.397459 containerd[1472]: time="2025-02-13T19:23:22.397377363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58dfd6696-g69sl,Uid:bd7b8e08-eba9-4ff4-a84a-26b9405284a6,Namespace:calico-system,Attempt:2,}" Feb 13 19:23:22.479435 containerd[1472]: time="2025-02-13T19:23:22.479378672Z" level=error msg="Failed to destroy network for sandbox \"de7676c8254aaae60409033dd601bfd2dfc40b59d6b06333583a5604aef4c02d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:22.479837 containerd[1472]: time="2025-02-13T19:23:22.479709393Z" level=error msg="encountered an error cleaning up failed sandbox \"de7676c8254aaae60409033dd601bfd2dfc40b59d6b06333583a5604aef4c02d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:22.479837 containerd[1472]: time="2025-02-13T19:23:22.479765353Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b44d967f-p6w96,Uid:400c37f6-81a9-403c-9b2f-cc1d18ee97aa,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"de7676c8254aaae60409033dd601bfd2dfc40b59d6b06333583a5604aef4c02d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:22.480699 kubelet[2544]: E0213 19:23:22.480531 2544 
log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de7676c8254aaae60409033dd601bfd2dfc40b59d6b06333583a5604aef4c02d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:22.480699 kubelet[2544]: E0213 19:23:22.480644 2544 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de7676c8254aaae60409033dd601bfd2dfc40b59d6b06333583a5604aef4c02d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b44d967f-p6w96" Feb 13 19:23:22.480699 kubelet[2544]: E0213 19:23:22.480668 2544 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de7676c8254aaae60409033dd601bfd2dfc40b59d6b06333583a5604aef4c02d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b44d967f-p6w96" Feb 13 19:23:22.480889 kubelet[2544]: E0213 19:23:22.480715 2544 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5b44d967f-p6w96_calico-apiserver(400c37f6-81a9-403c-9b2f-cc1d18ee97aa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5b44d967f-p6w96_calico-apiserver(400c37f6-81a9-403c-9b2f-cc1d18ee97aa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"de7676c8254aaae60409033dd601bfd2dfc40b59d6b06333583a5604aef4c02d\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b44d967f-p6w96" podUID="400c37f6-81a9-403c-9b2f-cc1d18ee97aa" Feb 13 19:23:22.492676 systemd[1]: run-netns-cni\x2d000de00e\x2d8a89\x2d3ded\x2d450a\x2d9d65ee847e34.mount: Deactivated successfully. Feb 13 19:23:22.492766 systemd[1]: run-netns-cni\x2dcfe30cf7\x2d3dfc\x2db4a7\x2d58c3\x2d19886d5498ab.mount: Deactivated successfully. Feb 13 19:23:22.492814 systemd[1]: run-netns-cni\x2da6dc341e\x2d7150\x2d7c22\x2d40ec\x2deb5341c24463.mount: Deactivated successfully. Feb 13 19:23:22.492859 systemd[1]: run-netns-cni\x2da5b17a3e\x2d480d\x2d6e14\x2def4b\x2d8a356de92fbd.mount: Deactivated successfully. Feb 13 19:23:22.492901 systemd[1]: run-netns-cni\x2d7c908d7c\x2deec2\x2d5fff\x2ddb1b\x2dbac1d7d1e61f.mount: Deactivated successfully. Feb 13 19:23:22.492941 systemd[1]: run-netns-cni\x2dc1348d67\x2df8a1\x2d563a\x2d76f6\x2d6674e0475e5e.mount: Deactivated successfully. 
Feb 13 19:23:22.524892 containerd[1472]: time="2025-02-13T19:23:22.524658795Z" level=error msg="Failed to destroy network for sandbox \"7c2a32b901b31e27ae1a20a1fe61ade17e69b2e6fa8b5f8f14ee716821af402d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:22.528687 containerd[1472]: time="2025-02-13T19:23:22.526659678Z" level=error msg="Failed to destroy network for sandbox \"d89d3553643149b3994db4a0653fdbd12da912d00bb5ab4315973581b6b660b5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:22.526772 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7c2a32b901b31e27ae1a20a1fe61ade17e69b2e6fa8b5f8f14ee716821af402d-shm.mount: Deactivated successfully. Feb 13 19:23:22.530584 containerd[1472]: time="2025-02-13T19:23:22.529223803Z" level=error msg="encountered an error cleaning up failed sandbox \"d89d3553643149b3994db4a0653fdbd12da912d00bb5ab4315973581b6b660b5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:22.530584 containerd[1472]: time="2025-02-13T19:23:22.529335283Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-slhww,Uid:beb760d9-f48b-4afe-876e-eb78778e0f0b,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"d89d3553643149b3994db4a0653fdbd12da912d00bb5ab4315973581b6b660b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:22.529828 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-d89d3553643149b3994db4a0653fdbd12da912d00bb5ab4315973581b6b660b5-shm.mount: Deactivated successfully. Feb 13 19:23:22.530759 kubelet[2544]: E0213 19:23:22.529748 2544 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d89d3553643149b3994db4a0653fdbd12da912d00bb5ab4315973581b6b660b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:22.530759 kubelet[2544]: E0213 19:23:22.529838 2544 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d89d3553643149b3994db4a0653fdbd12da912d00bb5ab4315973581b6b660b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-slhww" Feb 13 19:23:22.530759 kubelet[2544]: E0213 19:23:22.529870 2544 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d89d3553643149b3994db4a0653fdbd12da912d00bb5ab4315973581b6b660b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-slhww" Feb 13 19:23:22.530839 kubelet[2544]: E0213 19:23:22.529917 2544 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-slhww_kube-system(beb760d9-f48b-4afe-876e-eb78778e0f0b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-slhww_kube-system(beb760d9-f48b-4afe-876e-eb78778e0f0b)\\\": rpc error: code = Unknown desc = failed 
to setup network for sandbox \\\"d89d3553643149b3994db4a0653fdbd12da912d00bb5ab4315973581b6b660b5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-slhww" podUID="beb760d9-f48b-4afe-876e-eb78778e0f0b" Feb 13 19:23:22.532282 containerd[1472]: time="2025-02-13T19:23:22.532227489Z" level=error msg="Failed to destroy network for sandbox \"ba474e5db3af83c6e13348802c48deb52dcfa25885a7fa5ba47c3f19ac08cda7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:22.533840 containerd[1472]: time="2025-02-13T19:23:22.533557851Z" level=error msg="encountered an error cleaning up failed sandbox \"ba474e5db3af83c6e13348802c48deb52dcfa25885a7fa5ba47c3f19ac08cda7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:22.533840 containerd[1472]: time="2025-02-13T19:23:22.533656171Z" level=error msg="encountered an error cleaning up failed sandbox \"7c2a32b901b31e27ae1a20a1fe61ade17e69b2e6fa8b5f8f14ee716821af402d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:22.533840 containerd[1472]: time="2025-02-13T19:23:22.533731291Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58dfd6696-g69sl,Uid:bd7b8e08-eba9-4ff4-a84a-26b9405284a6,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox 
\"7c2a32b901b31e27ae1a20a1fe61ade17e69b2e6fa8b5f8f14ee716821af402d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:22.533840 containerd[1472]: time="2025-02-13T19:23:22.533673891Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dtfv4,Uid:278e43f1-bd8c-4a43-8396-436ddaca249b,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"ba474e5db3af83c6e13348802c48deb52dcfa25885a7fa5ba47c3f19ac08cda7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:22.534821 kubelet[2544]: E0213 19:23:22.533905 2544 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba474e5db3af83c6e13348802c48deb52dcfa25885a7fa5ba47c3f19ac08cda7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:22.534821 kubelet[2544]: E0213 19:23:22.533950 2544 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba474e5db3af83c6e13348802c48deb52dcfa25885a7fa5ba47c3f19ac08cda7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dtfv4" Feb 13 19:23:22.534821 kubelet[2544]: E0213 19:23:22.533966 2544 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba474e5db3af83c6e13348802c48deb52dcfa25885a7fa5ba47c3f19ac08cda7\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dtfv4" Feb 13 19:23:22.534257 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ba474e5db3af83c6e13348802c48deb52dcfa25885a7fa5ba47c3f19ac08cda7-shm.mount: Deactivated successfully. Feb 13 19:23:22.535765 kubelet[2544]: E0213 19:23:22.533999 2544 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dtfv4_calico-system(278e43f1-bd8c-4a43-8396-436ddaca249b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dtfv4_calico-system(278e43f1-bd8c-4a43-8396-436ddaca249b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ba474e5db3af83c6e13348802c48deb52dcfa25885a7fa5ba47c3f19ac08cda7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dtfv4" podUID="278e43f1-bd8c-4a43-8396-436ddaca249b" Feb 13 19:23:22.535765 kubelet[2544]: E0213 19:23:22.534034 2544 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c2a32b901b31e27ae1a20a1fe61ade17e69b2e6fa8b5f8f14ee716821af402d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:22.535765 kubelet[2544]: E0213 19:23:22.534052 2544 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c2a32b901b31e27ae1a20a1fe61ade17e69b2e6fa8b5f8f14ee716821af402d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-58dfd6696-g69sl" Feb 13 19:23:22.535863 kubelet[2544]: E0213 19:23:22.534063 2544 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c2a32b901b31e27ae1a20a1fe61ade17e69b2e6fa8b5f8f14ee716821af402d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-58dfd6696-g69sl" Feb 13 19:23:22.535863 kubelet[2544]: E0213 19:23:22.534083 2544 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-58dfd6696-g69sl_calico-system(bd7b8e08-eba9-4ff4-a84a-26b9405284a6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-58dfd6696-g69sl_calico-system(bd7b8e08-eba9-4ff4-a84a-26b9405284a6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7c2a32b901b31e27ae1a20a1fe61ade17e69b2e6fa8b5f8f14ee716821af402d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-58dfd6696-g69sl" podUID="bd7b8e08-eba9-4ff4-a84a-26b9405284a6" Feb 13 19:23:22.536816 containerd[1472]: time="2025-02-13T19:23:22.536725817Z" level=error msg="Failed to destroy network for sandbox \"fff6122e0e7af1c4d61552d62a901428e4905f43cb9d5e0cb796801cf8c19caa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:22.536816 containerd[1472]: time="2025-02-13T19:23:22.536804457Z" level=error 
msg="Failed to destroy network for sandbox \"5c69bd3f5517904d9d167931362b4913dc35adc9a2c26a7101f072289741fcfe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:22.537738 containerd[1472]: time="2025-02-13T19:23:22.537697778Z" level=error msg="encountered an error cleaning up failed sandbox \"fff6122e0e7af1c4d61552d62a901428e4905f43cb9d5e0cb796801cf8c19caa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:22.537815 containerd[1472]: time="2025-02-13T19:23:22.537754499Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b44d967f-hpx7w,Uid:90b2170d-a844-4e38-873e-8af01cba6fe0,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"fff6122e0e7af1c4d61552d62a901428e4905f43cb9d5e0cb796801cf8c19caa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:22.538579 kubelet[2544]: E0213 19:23:22.538136 2544 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fff6122e0e7af1c4d61552d62a901428e4905f43cb9d5e0cb796801cf8c19caa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:22.538579 kubelet[2544]: E0213 19:23:22.538177 2544 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"fff6122e0e7af1c4d61552d62a901428e4905f43cb9d5e0cb796801cf8c19caa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b44d967f-hpx7w" Feb 13 19:23:22.538579 kubelet[2544]: E0213 19:23:22.538203 2544 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fff6122e0e7af1c4d61552d62a901428e4905f43cb9d5e0cb796801cf8c19caa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b44d967f-hpx7w" Feb 13 19:23:22.538788 containerd[1472]: time="2025-02-13T19:23:22.538174619Z" level=error msg="encountered an error cleaning up failed sandbox \"5c69bd3f5517904d9d167931362b4913dc35adc9a2c26a7101f072289741fcfe\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:22.538788 containerd[1472]: time="2025-02-13T19:23:22.538222019Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-8shn2,Uid:11ebe699-3307-4e28-ac6a-e555af8a982c,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"5c69bd3f5517904d9d167931362b4913dc35adc9a2c26a7101f072289741fcfe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:22.538845 kubelet[2544]: E0213 19:23:22.538233 2544 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-5b44d967f-hpx7w_calico-apiserver(90b2170d-a844-4e38-873e-8af01cba6fe0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5b44d967f-hpx7w_calico-apiserver(90b2170d-a844-4e38-873e-8af01cba6fe0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fff6122e0e7af1c4d61552d62a901428e4905f43cb9d5e0cb796801cf8c19caa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b44d967f-hpx7w" podUID="90b2170d-a844-4e38-873e-8af01cba6fe0" Feb 13 19:23:22.538891 kubelet[2544]: E0213 19:23:22.538870 2544 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c69bd3f5517904d9d167931362b4913dc35adc9a2c26a7101f072289741fcfe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:22.538920 kubelet[2544]: E0213 19:23:22.538901 2544 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c69bd3f5517904d9d167931362b4913dc35adc9a2c26a7101f072289741fcfe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-8shn2" Feb 13 19:23:22.538947 kubelet[2544]: E0213 19:23:22.538921 2544 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c69bd3f5517904d9d167931362b4913dc35adc9a2c26a7101f072289741fcfe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-8shn2" Feb 13 19:23:22.538970 kubelet[2544]: E0213 19:23:22.538948 2544 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-8shn2_kube-system(11ebe699-3307-4e28-ac6a-e555af8a982c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-8shn2_kube-system(11ebe699-3307-4e28-ac6a-e555af8a982c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5c69bd3f5517904d9d167931362b4913dc35adc9a2c26a7101f072289741fcfe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-8shn2" podUID="11ebe699-3307-4e28-ac6a-e555af8a982c" Feb 13 19:23:22.540614 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5c69bd3f5517904d9d167931362b4913dc35adc9a2c26a7101f072289741fcfe-shm.mount: Deactivated successfully. Feb 13 19:23:22.540708 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fff6122e0e7af1c4d61552d62a901428e4905f43cb9d5e0cb796801cf8c19caa-shm.mount: Deactivated successfully. 
Feb 13 19:23:22.783684 kubelet[2544]: I0213 19:23:22.783540 2544 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 19:23:22.784406 kubelet[2544]: E0213 19:23:22.783939 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:23:23.385358 kubelet[2544]: I0213 19:23:23.385322 2544 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c2a32b901b31e27ae1a20a1fe61ade17e69b2e6fa8b5f8f14ee716821af402d" Feb 13 19:23:23.386238 containerd[1472]: time="2025-02-13T19:23:23.386199039Z" level=info msg="StopPodSandbox for \"7c2a32b901b31e27ae1a20a1fe61ade17e69b2e6fa8b5f8f14ee716821af402d\"" Feb 13 19:23:23.387002 containerd[1472]: time="2025-02-13T19:23:23.386369599Z" level=info msg="Ensure that sandbox 7c2a32b901b31e27ae1a20a1fe61ade17e69b2e6fa8b5f8f14ee716821af402d in task-service has been cleanup successfully" Feb 13 19:23:23.388078 containerd[1472]: time="2025-02-13T19:23:23.387747722Z" level=info msg="TearDown network for sandbox \"7c2a32b901b31e27ae1a20a1fe61ade17e69b2e6fa8b5f8f14ee716821af402d\" successfully" Feb 13 19:23:23.388078 containerd[1472]: time="2025-02-13T19:23:23.387771922Z" level=info msg="StopPodSandbox for \"7c2a32b901b31e27ae1a20a1fe61ade17e69b2e6fa8b5f8f14ee716821af402d\" returns successfully" Feb 13 19:23:23.388626 containerd[1472]: time="2025-02-13T19:23:23.388590323Z" level=info msg="StopPodSandbox for \"9c587b1c1873a4da0b2c59ad781386932a84f435d9277e390c1c647867dcf9c6\"" Feb 13 19:23:23.388796 containerd[1472]: time="2025-02-13T19:23:23.388778084Z" level=info msg="TearDown network for sandbox \"9c587b1c1873a4da0b2c59ad781386932a84f435d9277e390c1c647867dcf9c6\" successfully" Feb 13 19:23:23.388890 containerd[1472]: time="2025-02-13T19:23:23.388856404Z" level=info msg="StopPodSandbox for \"9c587b1c1873a4da0b2c59ad781386932a84f435d9277e390c1c647867dcf9c6\" returns 
successfully" Feb 13 19:23:23.388928 kubelet[2544]: I0213 19:23:23.388897 2544 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fff6122e0e7af1c4d61552d62a901428e4905f43cb9d5e0cb796801cf8c19caa" Feb 13 19:23:23.389704 containerd[1472]: time="2025-02-13T19:23:23.389673085Z" level=info msg="StopPodSandbox for \"dd11beb574d3fbe9971dfab17c87cd30120129847d48f855e440b73909b823f2\"" Feb 13 19:23:23.390134 containerd[1472]: time="2025-02-13T19:23:23.390107006Z" level=info msg="TearDown network for sandbox \"dd11beb574d3fbe9971dfab17c87cd30120129847d48f855e440b73909b823f2\" successfully" Feb 13 19:23:23.390134 containerd[1472]: time="2025-02-13T19:23:23.390128526Z" level=info msg="StopPodSandbox for \"dd11beb574d3fbe9971dfab17c87cd30120129847d48f855e440b73909b823f2\" returns successfully" Feb 13 19:23:23.390336 containerd[1472]: time="2025-02-13T19:23:23.390248686Z" level=info msg="StopPodSandbox for \"fff6122e0e7af1c4d61552d62a901428e4905f43cb9d5e0cb796801cf8c19caa\"" Feb 13 19:23:23.390428 containerd[1472]: time="2025-02-13T19:23:23.390409046Z" level=info msg="Ensure that sandbox fff6122e0e7af1c4d61552d62a901428e4905f43cb9d5e0cb796801cf8c19caa in task-service has been cleanup successfully" Feb 13 19:23:23.391132 containerd[1472]: time="2025-02-13T19:23:23.391101568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58dfd6696-g69sl,Uid:bd7b8e08-eba9-4ff4-a84a-26b9405284a6,Namespace:calico-system,Attempt:3,}" Feb 13 19:23:23.392568 containerd[1472]: time="2025-02-13T19:23:23.392436610Z" level=info msg="TearDown network for sandbox \"fff6122e0e7af1c4d61552d62a901428e4905f43cb9d5e0cb796801cf8c19caa\" successfully" Feb 13 19:23:23.392568 containerd[1472]: time="2025-02-13T19:23:23.392464970Z" level=info msg="StopPodSandbox for \"fff6122e0e7af1c4d61552d62a901428e4905f43cb9d5e0cb796801cf8c19caa\" returns successfully" Feb 13 19:23:23.392930 containerd[1472]: time="2025-02-13T19:23:23.392902651Z" level=info 
msg="StopPodSandbox for \"f08c19b9a372aabce29703f12b709d1b46d9e7576abfa63f9ca341a1ee693428\"" Feb 13 19:23:23.393110 containerd[1472]: time="2025-02-13T19:23:23.393092971Z" level=info msg="TearDown network for sandbox \"f08c19b9a372aabce29703f12b709d1b46d9e7576abfa63f9ca341a1ee693428\" successfully" Feb 13 19:23:23.393233 kubelet[2544]: I0213 19:23:23.393147 2544 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de7676c8254aaae60409033dd601bfd2dfc40b59d6b06333583a5604aef4c02d" Feb 13 19:23:23.393322 containerd[1472]: time="2025-02-13T19:23:23.393173931Z" level=info msg="StopPodSandbox for \"f08c19b9a372aabce29703f12b709d1b46d9e7576abfa63f9ca341a1ee693428\" returns successfully" Feb 13 19:23:23.393947 containerd[1472]: time="2025-02-13T19:23:23.393886332Z" level=info msg="StopPodSandbox for \"79cc9bd063882a3af3beda81b96a936ae1bf51ac45f2a1def3f15fdb84de30f2\"" Feb 13 19:23:23.394018 containerd[1472]: time="2025-02-13T19:23:23.393966012Z" level=info msg="TearDown network for sandbox \"79cc9bd063882a3af3beda81b96a936ae1bf51ac45f2a1def3f15fdb84de30f2\" successfully" Feb 13 19:23:23.394018 containerd[1472]: time="2025-02-13T19:23:23.393982772Z" level=info msg="StopPodSandbox for \"79cc9bd063882a3af3beda81b96a936ae1bf51ac45f2a1def3f15fdb84de30f2\" returns successfully" Feb 13 19:23:23.394808 containerd[1472]: time="2025-02-13T19:23:23.394187293Z" level=info msg="StopPodSandbox for \"de7676c8254aaae60409033dd601bfd2dfc40b59d6b06333583a5604aef4c02d\"" Feb 13 19:23:23.394808 containerd[1472]: time="2025-02-13T19:23:23.394350493Z" level=info msg="Ensure that sandbox de7676c8254aaae60409033dd601bfd2dfc40b59d6b06333583a5604aef4c02d in task-service has been cleanup successfully" Feb 13 19:23:23.394808 containerd[1472]: time="2025-02-13T19:23:23.394544013Z" level=info msg="TearDown network for sandbox \"de7676c8254aaae60409033dd601bfd2dfc40b59d6b06333583a5604aef4c02d\" successfully" Feb 13 19:23:23.394808 containerd[1472]: 
time="2025-02-13T19:23:23.394559893Z" level=info msg="StopPodSandbox for \"de7676c8254aaae60409033dd601bfd2dfc40b59d6b06333583a5604aef4c02d\" returns successfully" Feb 13 19:23:23.394808 containerd[1472]: time="2025-02-13T19:23:23.394565293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b44d967f-hpx7w,Uid:90b2170d-a844-4e38-873e-8af01cba6fe0,Namespace:calico-apiserver,Attempt:3,}" Feb 13 19:23:23.395500 containerd[1472]: time="2025-02-13T19:23:23.395465695Z" level=info msg="StopPodSandbox for \"a4926452b9909d806af29b6189b194ae88d64402549efa6a8aa8157676ffeab6\"" Feb 13 19:23:23.395620 containerd[1472]: time="2025-02-13T19:23:23.395551455Z" level=info msg="TearDown network for sandbox \"a4926452b9909d806af29b6189b194ae88d64402549efa6a8aa8157676ffeab6\" successfully" Feb 13 19:23:23.395620 containerd[1472]: time="2025-02-13T19:23:23.395562335Z" level=info msg="StopPodSandbox for \"a4926452b9909d806af29b6189b194ae88d64402549efa6a8aa8157676ffeab6\" returns successfully" Feb 13 19:23:23.397306 kubelet[2544]: I0213 19:23:23.396474 2544 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba474e5db3af83c6e13348802c48deb52dcfa25885a7fa5ba47c3f19ac08cda7" Feb 13 19:23:23.398338 containerd[1472]: time="2025-02-13T19:23:23.397904019Z" level=info msg="StopPodSandbox for \"ba474e5db3af83c6e13348802c48deb52dcfa25885a7fa5ba47c3f19ac08cda7\"" Feb 13 19:23:23.398338 containerd[1472]: time="2025-02-13T19:23:23.398002099Z" level=info msg="StopPodSandbox for \"9886a774b5ddfc176ce9f280222642978f1f162c12b106b23d0358bf9b5a6fe4\"" Feb 13 19:23:23.398338 containerd[1472]: time="2025-02-13T19:23:23.398080099Z" level=info msg="TearDown network for sandbox \"9886a774b5ddfc176ce9f280222642978f1f162c12b106b23d0358bf9b5a6fe4\" successfully" Feb 13 19:23:23.398338 containerd[1472]: time="2025-02-13T19:23:23.398090459Z" level=info msg="StopPodSandbox for \"9886a774b5ddfc176ce9f280222642978f1f162c12b106b23d0358bf9b5a6fe4\" returns 
successfully" Feb 13 19:23:23.398338 containerd[1472]: time="2025-02-13T19:23:23.398100219Z" level=info msg="Ensure that sandbox ba474e5db3af83c6e13348802c48deb52dcfa25885a7fa5ba47c3f19ac08cda7 in task-service has been cleanup successfully" Feb 13 19:23:23.398747 containerd[1472]: time="2025-02-13T19:23:23.398671740Z" level=info msg="TearDown network for sandbox \"ba474e5db3af83c6e13348802c48deb52dcfa25885a7fa5ba47c3f19ac08cda7\" successfully" Feb 13 19:23:23.398747 containerd[1472]: time="2025-02-13T19:23:23.398697500Z" level=info msg="StopPodSandbox for \"ba474e5db3af83c6e13348802c48deb52dcfa25885a7fa5ba47c3f19ac08cda7\" returns successfully" Feb 13 19:23:23.399194 containerd[1472]: time="2025-02-13T19:23:23.399171101Z" level=info msg="StopPodSandbox for \"b9dc128c66258cebc145fb5927188ffde0a7a2ce1077cde74816d4066ba9c705\"" Feb 13 19:23:23.399344 containerd[1472]: time="2025-02-13T19:23:23.399250181Z" level=info msg="TearDown network for sandbox \"b9dc128c66258cebc145fb5927188ffde0a7a2ce1077cde74816d4066ba9c705\" successfully" Feb 13 19:23:23.399344 containerd[1472]: time="2025-02-13T19:23:23.399265901Z" level=info msg="StopPodSandbox for \"b9dc128c66258cebc145fb5927188ffde0a7a2ce1077cde74816d4066ba9c705\" returns successfully" Feb 13 19:23:23.399403 containerd[1472]: time="2025-02-13T19:23:23.399353742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b44d967f-p6w96,Uid:400c37f6-81a9-403c-9b2f-cc1d18ee97aa,Namespace:calico-apiserver,Attempt:3,}" Feb 13 19:23:23.399900 containerd[1472]: time="2025-02-13T19:23:23.399834942Z" level=info msg="StopPodSandbox for \"50fc7b5a1212a7f5c0f58d01289dd6423491fe5b48f97ad072a542ba7bf3a842\"" Feb 13 19:23:23.400142 containerd[1472]: time="2025-02-13T19:23:23.400005303Z" level=info msg="TearDown network for sandbox \"50fc7b5a1212a7f5c0f58d01289dd6423491fe5b48f97ad072a542ba7bf3a842\" successfully" Feb 13 19:23:23.400142 containerd[1472]: time="2025-02-13T19:23:23.400025103Z" level=info 
msg="StopPodSandbox for \"50fc7b5a1212a7f5c0f58d01289dd6423491fe5b48f97ad072a542ba7bf3a842\" returns successfully" Feb 13 19:23:23.401353 containerd[1472]: time="2025-02-13T19:23:23.401153945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dtfv4,Uid:278e43f1-bd8c-4a43-8396-436ddaca249b,Namespace:calico-system,Attempt:3,}" Feb 13 19:23:23.401475 kubelet[2544]: I0213 19:23:23.401446 2544 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d89d3553643149b3994db4a0653fdbd12da912d00bb5ab4315973581b6b660b5" Feb 13 19:23:23.401978 containerd[1472]: time="2025-02-13T19:23:23.401952186Z" level=info msg="StopPodSandbox for \"d89d3553643149b3994db4a0653fdbd12da912d00bb5ab4315973581b6b660b5\"" Feb 13 19:23:23.402231 containerd[1472]: time="2025-02-13T19:23:23.402103706Z" level=info msg="Ensure that sandbox d89d3553643149b3994db4a0653fdbd12da912d00bb5ab4315973581b6b660b5 in task-service has been cleanup successfully" Feb 13 19:23:23.402571 containerd[1472]: time="2025-02-13T19:23:23.402492507Z" level=info msg="TearDown network for sandbox \"d89d3553643149b3994db4a0653fdbd12da912d00bb5ab4315973581b6b660b5\" successfully" Feb 13 19:23:23.402571 containerd[1472]: time="2025-02-13T19:23:23.402515587Z" level=info msg="StopPodSandbox for \"d89d3553643149b3994db4a0653fdbd12da912d00bb5ab4315973581b6b660b5\" returns successfully" Feb 13 19:23:23.403361 containerd[1472]: time="2025-02-13T19:23:23.403219868Z" level=info msg="StopPodSandbox for \"096da91385a3c8839c09bc5456755139dacce2271f80a0a43253c557ac7e758d\"" Feb 13 19:23:23.403361 containerd[1472]: time="2025-02-13T19:23:23.403321988Z" level=info msg="TearDown network for sandbox \"096da91385a3c8839c09bc5456755139dacce2271f80a0a43253c557ac7e758d\" successfully" Feb 13 19:23:23.403361 containerd[1472]: time="2025-02-13T19:23:23.403333028Z" level=info msg="StopPodSandbox for \"096da91385a3c8839c09bc5456755139dacce2271f80a0a43253c557ac7e758d\" returns successfully" Feb 13 
19:23:23.403923 kubelet[2544]: I0213 19:23:23.403885 2544 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c69bd3f5517904d9d167931362b4913dc35adc9a2c26a7101f072289741fcfe" Feb 13 19:23:23.404004 containerd[1472]: time="2025-02-13T19:23:23.403934669Z" level=info msg="StopPodSandbox for \"7a00c42bb521f7bef7a32da102cdf4d4a156bae2448cfacf8ccc9caa13e70634\"" Feb 13 19:23:23.404169 containerd[1472]: time="2025-02-13T19:23:23.404104990Z" level=info msg="TearDown network for sandbox \"7a00c42bb521f7bef7a32da102cdf4d4a156bae2448cfacf8ccc9caa13e70634\" successfully" Feb 13 19:23:23.404169 containerd[1472]: time="2025-02-13T19:23:23.404127870Z" level=info msg="StopPodSandbox for \"7a00c42bb521f7bef7a32da102cdf4d4a156bae2448cfacf8ccc9caa13e70634\" returns successfully" Feb 13 19:23:23.404225 kubelet[2544]: E0213 19:23:23.404169 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:23:23.404418 kubelet[2544]: E0213 19:23:23.404393 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:23:23.404497 containerd[1472]: time="2025-02-13T19:23:23.404431470Z" level=info msg="StopPodSandbox for \"5c69bd3f5517904d9d167931362b4913dc35adc9a2c26a7101f072289741fcfe\"" Feb 13 19:23:23.404771 containerd[1472]: time="2025-02-13T19:23:23.404564551Z" level=info msg="Ensure that sandbox 5c69bd3f5517904d9d167931362b4913dc35adc9a2c26a7101f072289741fcfe in task-service has been cleanup successfully" Feb 13 19:23:23.404771 containerd[1472]: time="2025-02-13T19:23:23.404737431Z" level=info msg="TearDown network for sandbox \"5c69bd3f5517904d9d167931362b4913dc35adc9a2c26a7101f072289741fcfe\" successfully" Feb 13 19:23:23.404771 containerd[1472]: time="2025-02-13T19:23:23.404753951Z" level=info 
msg="StopPodSandbox for \"5c69bd3f5517904d9d167931362b4913dc35adc9a2c26a7101f072289741fcfe\" returns successfully" Feb 13 19:23:23.404907 containerd[1472]: time="2025-02-13T19:23:23.404743391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-slhww,Uid:beb760d9-f48b-4afe-876e-eb78778e0f0b,Namespace:kube-system,Attempt:3,}" Feb 13 19:23:23.405625 containerd[1472]: time="2025-02-13T19:23:23.405560552Z" level=info msg="StopPodSandbox for \"1a65ecae1d3e89163ada04189f06c243b3b5a17116154cd6cbede404e3524543\"" Feb 13 19:23:23.405702 containerd[1472]: time="2025-02-13T19:23:23.405660952Z" level=info msg="TearDown network for sandbox \"1a65ecae1d3e89163ada04189f06c243b3b5a17116154cd6cbede404e3524543\" successfully" Feb 13 19:23:23.405702 containerd[1472]: time="2025-02-13T19:23:23.405680592Z" level=info msg="StopPodSandbox for \"1a65ecae1d3e89163ada04189f06c243b3b5a17116154cd6cbede404e3524543\" returns successfully" Feb 13 19:23:23.405999 containerd[1472]: time="2025-02-13T19:23:23.405972513Z" level=info msg="StopPodSandbox for \"9238bb01d809702cf71989d83c983b9c52956861a8e39fcabd6dd1ed7ee329dd\"" Feb 13 19:23:23.406071 containerd[1472]: time="2025-02-13T19:23:23.406055753Z" level=info msg="TearDown network for sandbox \"9238bb01d809702cf71989d83c983b9c52956861a8e39fcabd6dd1ed7ee329dd\" successfully" Feb 13 19:23:23.406116 containerd[1472]: time="2025-02-13T19:23:23.406069353Z" level=info msg="StopPodSandbox for \"9238bb01d809702cf71989d83c983b9c52956861a8e39fcabd6dd1ed7ee329dd\" returns successfully" Feb 13 19:23:23.406409 kubelet[2544]: E0213 19:23:23.406324 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:23:23.406792 containerd[1472]: time="2025-02-13T19:23:23.406766114Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-6f6b679f8f-8shn2,Uid:11ebe699-3307-4e28-ac6a-e555af8a982c,Namespace:kube-system,Attempt:3,}" Feb 13 19:23:23.484973 systemd[1]: run-netns-cni\x2dbdda3c79\x2d9640\x2dc684\x2d9597\x2d671adcdd7ec3.mount: Deactivated successfully. Feb 13 19:23:23.485146 systemd[1]: run-netns-cni\x2df3b37105\x2db552\x2de71b\x2d4afc\x2d3cc184e948f7.mount: Deactivated successfully. Feb 13 19:23:23.485197 systemd[1]: run-netns-cni\x2d51701e44\x2d6c0b\x2d6238\x2d72df\x2d5c30000483b2.mount: Deactivated successfully. Feb 13 19:23:23.485244 systemd[1]: run-netns-cni\x2dee632f8b\x2dea2f\x2d5afc\x2d65ab\x2d9caaf8a7dede.mount: Deactivated successfully. Feb 13 19:23:23.485296 systemd[1]: run-netns-cni\x2dd7aaafdb\x2d0df1\x2dc26e\x2d18b0\x2d5a79dcb878b6.mount: Deactivated successfully. Feb 13 19:23:23.485343 systemd[1]: run-netns-cni\x2df7d14317\x2d6234\x2d6725\x2d4b12\x2d1ff912ee9aaf.mount: Deactivated successfully. Feb 13 19:23:23.761629 containerd[1472]: time="2025-02-13T19:23:23.761564280Z" level=error msg="Failed to destroy network for sandbox \"6016e85c998124b83ac646fb6a3dabbbc1920d9fd029722e7215ebdffb13a5be\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:23.763642 containerd[1472]: time="2025-02-13T19:23:23.763586683Z" level=error msg="Failed to destroy network for sandbox \"e9ddff8643dc18c5cc1376aa87d108844c569928eb57728a5442cfa00994edcf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:23.764114 containerd[1472]: time="2025-02-13T19:23:23.763981724Z" level=error msg="encountered an error cleaning up failed sandbox \"6016e85c998124b83ac646fb6a3dabbbc1920d9fd029722e7215ebdffb13a5be\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:23.764246 containerd[1472]: time="2025-02-13T19:23:23.764224044Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58dfd6696-g69sl,Uid:bd7b8e08-eba9-4ff4-a84a-26b9405284a6,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"6016e85c998124b83ac646fb6a3dabbbc1920d9fd029722e7215ebdffb13a5be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:23.764654 kubelet[2544]: E0213 19:23:23.764573 2544 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6016e85c998124b83ac646fb6a3dabbbc1920d9fd029722e7215ebdffb13a5be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:23.764860 kubelet[2544]: E0213 19:23:23.764671 2544 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6016e85c998124b83ac646fb6a3dabbbc1920d9fd029722e7215ebdffb13a5be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-58dfd6696-g69sl" Feb 13 19:23:23.764860 kubelet[2544]: E0213 19:23:23.764739 2544 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6016e85c998124b83ac646fb6a3dabbbc1920d9fd029722e7215ebdffb13a5be\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-58dfd6696-g69sl" Feb 13 19:23:23.764860 kubelet[2544]: E0213 19:23:23.764790 2544 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-58dfd6696-g69sl_calico-system(bd7b8e08-eba9-4ff4-a84a-26b9405284a6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-58dfd6696-g69sl_calico-system(bd7b8e08-eba9-4ff4-a84a-26b9405284a6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6016e85c998124b83ac646fb6a3dabbbc1920d9fd029722e7215ebdffb13a5be\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-58dfd6696-g69sl" podUID="bd7b8e08-eba9-4ff4-a84a-26b9405284a6" Feb 13 19:23:23.765029 containerd[1472]: time="2025-02-13T19:23:23.764626205Z" level=error msg="encountered an error cleaning up failed sandbox \"e9ddff8643dc18c5cc1376aa87d108844c569928eb57728a5442cfa00994edcf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:23.765502 containerd[1472]: time="2025-02-13T19:23:23.765465966Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b44d967f-p6w96,Uid:400c37f6-81a9-403c-9b2f-cc1d18ee97aa,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"e9ddff8643dc18c5cc1376aa87d108844c569928eb57728a5442cfa00994edcf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" Feb 13 19:23:23.765819 kubelet[2544]: E0213 19:23:23.765783 2544 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9ddff8643dc18c5cc1376aa87d108844c569928eb57728a5442cfa00994edcf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:23.765897 kubelet[2544]: E0213 19:23:23.765827 2544 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9ddff8643dc18c5cc1376aa87d108844c569928eb57728a5442cfa00994edcf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b44d967f-p6w96" Feb 13 19:23:23.765897 kubelet[2544]: E0213 19:23:23.765846 2544 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9ddff8643dc18c5cc1376aa87d108844c569928eb57728a5442cfa00994edcf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b44d967f-p6w96" Feb 13 19:23:23.765897 kubelet[2544]: E0213 19:23:23.765879 2544 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5b44d967f-p6w96_calico-apiserver(400c37f6-81a9-403c-9b2f-cc1d18ee97aa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5b44d967f-p6w96_calico-apiserver(400c37f6-81a9-403c-9b2f-cc1d18ee97aa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"e9ddff8643dc18c5cc1376aa87d108844c569928eb57728a5442cfa00994edcf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b44d967f-p6w96" podUID="400c37f6-81a9-403c-9b2f-cc1d18ee97aa" Feb 13 19:23:23.767683 containerd[1472]: time="2025-02-13T19:23:23.767648770Z" level=error msg="Failed to destroy network for sandbox \"8838278d5d134ca219edb457e7749500a22cc1ccd0695e4995e6fa55293cd05e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:23.768078 containerd[1472]: time="2025-02-13T19:23:23.768050291Z" level=error msg="encountered an error cleaning up failed sandbox \"8838278d5d134ca219edb457e7749500a22cc1ccd0695e4995e6fa55293cd05e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:23.768382 containerd[1472]: time="2025-02-13T19:23:23.768356651Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b44d967f-hpx7w,Uid:90b2170d-a844-4e38-873e-8af01cba6fe0,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"8838278d5d134ca219edb457e7749500a22cc1ccd0695e4995e6fa55293cd05e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:23.769200 kubelet[2544]: E0213 19:23:23.768640 2544 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"8838278d5d134ca219edb457e7749500a22cc1ccd0695e4995e6fa55293cd05e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:23.769280 kubelet[2544]: E0213 19:23:23.769215 2544 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8838278d5d134ca219edb457e7749500a22cc1ccd0695e4995e6fa55293cd05e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b44d967f-hpx7w" Feb 13 19:23:23.769280 kubelet[2544]: E0213 19:23:23.769240 2544 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8838278d5d134ca219edb457e7749500a22cc1ccd0695e4995e6fa55293cd05e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b44d967f-hpx7w" Feb 13 19:23:23.769342 kubelet[2544]: E0213 19:23:23.769274 2544 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5b44d967f-hpx7w_calico-apiserver(90b2170d-a844-4e38-873e-8af01cba6fe0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5b44d967f-hpx7w_calico-apiserver(90b2170d-a844-4e38-873e-8af01cba6fe0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8838278d5d134ca219edb457e7749500a22cc1ccd0695e4995e6fa55293cd05e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-5b44d967f-hpx7w" podUID="90b2170d-a844-4e38-873e-8af01cba6fe0" Feb 13 19:23:23.774020 containerd[1472]: time="2025-02-13T19:23:23.773704460Z" level=error msg="Failed to destroy network for sandbox \"cb7be6a4712632dbe090eebc2eeaede27df77149aef06ceca10bc6d335e26b9b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:23.774113 containerd[1472]: time="2025-02-13T19:23:23.774093821Z" level=error msg="encountered an error cleaning up failed sandbox \"cb7be6a4712632dbe090eebc2eeaede27df77149aef06ceca10bc6d335e26b9b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:23.774175 containerd[1472]: time="2025-02-13T19:23:23.774144901Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dtfv4,Uid:278e43f1-bd8c-4a43-8396-436ddaca249b,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"cb7be6a4712632dbe090eebc2eeaede27df77149aef06ceca10bc6d335e26b9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:23.774471 kubelet[2544]: E0213 19:23:23.774431 2544 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb7be6a4712632dbe090eebc2eeaede27df77149aef06ceca10bc6d335e26b9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:23.774536 kubelet[2544]: E0213 19:23:23.774490 2544 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb7be6a4712632dbe090eebc2eeaede27df77149aef06ceca10bc6d335e26b9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dtfv4" Feb 13 19:23:23.774536 kubelet[2544]: E0213 19:23:23.774513 2544 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb7be6a4712632dbe090eebc2eeaede27df77149aef06ceca10bc6d335e26b9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dtfv4" Feb 13 19:23:23.774675 kubelet[2544]: E0213 19:23:23.774557 2544 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dtfv4_calico-system(278e43f1-bd8c-4a43-8396-436ddaca249b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dtfv4_calico-system(278e43f1-bd8c-4a43-8396-436ddaca249b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cb7be6a4712632dbe090eebc2eeaede27df77149aef06ceca10bc6d335e26b9b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dtfv4" podUID="278e43f1-bd8c-4a43-8396-436ddaca249b" Feb 13 19:23:23.788616 containerd[1472]: time="2025-02-13T19:23:23.786829043Z" level=error msg="Failed to destroy network for sandbox \"d6250d08a5ceb4ef45dd21cda6947799ccd29ddbdf4b402d69923e57de5f0de1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:23.788616 containerd[1472]: time="2025-02-13T19:23:23.787131603Z" level=error msg="encountered an error cleaning up failed sandbox \"d6250d08a5ceb4ef45dd21cda6947799ccd29ddbdf4b402d69923e57de5f0de1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:23.788616 containerd[1472]: time="2025-02-13T19:23:23.787186483Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-slhww,Uid:beb760d9-f48b-4afe-876e-eb78778e0f0b,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"d6250d08a5ceb4ef45dd21cda6947799ccd29ddbdf4b402d69923e57de5f0de1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:23.789247 kubelet[2544]: E0213 19:23:23.788905 2544 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6250d08a5ceb4ef45dd21cda6947799ccd29ddbdf4b402d69923e57de5f0de1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:23.789247 kubelet[2544]: E0213 19:23:23.788964 2544 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6250d08a5ceb4ef45dd21cda6947799ccd29ddbdf4b402d69923e57de5f0de1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-6f6b679f8f-slhww" Feb 13 19:23:23.789247 kubelet[2544]: E0213 19:23:23.788985 2544 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6250d08a5ceb4ef45dd21cda6947799ccd29ddbdf4b402d69923e57de5f0de1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-slhww" Feb 13 19:23:23.789399 kubelet[2544]: E0213 19:23:23.789028 2544 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-slhww_kube-system(beb760d9-f48b-4afe-876e-eb78778e0f0b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-slhww_kube-system(beb760d9-f48b-4afe-876e-eb78778e0f0b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d6250d08a5ceb4ef45dd21cda6947799ccd29ddbdf4b402d69923e57de5f0de1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-slhww" podUID="beb760d9-f48b-4afe-876e-eb78778e0f0b" Feb 13 19:23:23.792243 containerd[1472]: time="2025-02-13T19:23:23.791925412Z" level=error msg="Failed to destroy network for sandbox \"48811930cf72c6624d608457afdcfffb68cadf11dbe5cbc1d4081b85d2fea001\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:23.792823 containerd[1472]: time="2025-02-13T19:23:23.792787293Z" level=error msg="encountered an error cleaning up failed sandbox \"48811930cf72c6624d608457afdcfffb68cadf11dbe5cbc1d4081b85d2fea001\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:23.792892 containerd[1472]: time="2025-02-13T19:23:23.792855973Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-8shn2,Uid:11ebe699-3307-4e28-ac6a-e555af8a982c,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"48811930cf72c6624d608457afdcfffb68cadf11dbe5cbc1d4081b85d2fea001\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:23.793108 kubelet[2544]: E0213 19:23:23.793071 2544 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48811930cf72c6624d608457afdcfffb68cadf11dbe5cbc1d4081b85d2fea001\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:23.793320 kubelet[2544]: E0213 19:23:23.793162 2544 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48811930cf72c6624d608457afdcfffb68cadf11dbe5cbc1d4081b85d2fea001\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-8shn2" Feb 13 19:23:23.793320 kubelet[2544]: E0213 19:23:23.793180 2544 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48811930cf72c6624d608457afdcfffb68cadf11dbe5cbc1d4081b85d2fea001\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-8shn2" Feb 13 19:23:23.793320 kubelet[2544]: E0213 19:23:23.793229 2544 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-8shn2_kube-system(11ebe699-3307-4e28-ac6a-e555af8a982c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-8shn2_kube-system(11ebe699-3307-4e28-ac6a-e555af8a982c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"48811930cf72c6624d608457afdcfffb68cadf11dbe5cbc1d4081b85d2fea001\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-8shn2" podUID="11ebe699-3307-4e28-ac6a-e555af8a982c" Feb 13 19:23:24.410703 kubelet[2544]: I0213 19:23:24.410622 2544 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb7be6a4712632dbe090eebc2eeaede27df77149aef06ceca10bc6d335e26b9b" Feb 13 19:23:24.411240 containerd[1472]: time="2025-02-13T19:23:24.411204865Z" level=info msg="StopPodSandbox for \"cb7be6a4712632dbe090eebc2eeaede27df77149aef06ceca10bc6d335e26b9b\"" Feb 13 19:23:24.411627 containerd[1472]: time="2025-02-13T19:23:24.411606665Z" level=info msg="Ensure that sandbox cb7be6a4712632dbe090eebc2eeaede27df77149aef06ceca10bc6d335e26b9b in task-service has been cleanup successfully" Feb 13 19:23:24.412313 containerd[1472]: time="2025-02-13T19:23:24.412288826Z" level=info msg="TearDown network for sandbox \"cb7be6a4712632dbe090eebc2eeaede27df77149aef06ceca10bc6d335e26b9b\" successfully" Feb 13 19:23:24.412540 containerd[1472]: time="2025-02-13T19:23:24.412513747Z" level=info msg="StopPodSandbox for \"cb7be6a4712632dbe090eebc2eeaede27df77149aef06ceca10bc6d335e26b9b\" returns successfully" Feb 13 
19:23:24.413932 containerd[1472]: time="2025-02-13T19:23:24.413079468Z" level=info msg="StopPodSandbox for \"ba474e5db3af83c6e13348802c48deb52dcfa25885a7fa5ba47c3f19ac08cda7\"" Feb 13 19:23:24.414032 kubelet[2544]: I0213 19:23:24.414000 2544 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8838278d5d134ca219edb457e7749500a22cc1ccd0695e4995e6fa55293cd05e" Feb 13 19:23:24.414071 containerd[1472]: time="2025-02-13T19:23:24.414045669Z" level=info msg="TearDown network for sandbox \"ba474e5db3af83c6e13348802c48deb52dcfa25885a7fa5ba47c3f19ac08cda7\" successfully" Feb 13 19:23:24.414510 containerd[1472]: time="2025-02-13T19:23:24.414208629Z" level=info msg="StopPodSandbox for \"ba474e5db3af83c6e13348802c48deb52dcfa25885a7fa5ba47c3f19ac08cda7\" returns successfully" Feb 13 19:23:24.414904 containerd[1472]: time="2025-02-13T19:23:24.414552630Z" level=info msg="StopPodSandbox for \"8838278d5d134ca219edb457e7749500a22cc1ccd0695e4995e6fa55293cd05e\"" Feb 13 19:23:24.414904 containerd[1472]: time="2025-02-13T19:23:24.414685670Z" level=info msg="StopPodSandbox for \"b9dc128c66258cebc145fb5927188ffde0a7a2ce1077cde74816d4066ba9c705\"" Feb 13 19:23:24.414904 containerd[1472]: time="2025-02-13T19:23:24.414787710Z" level=info msg="Ensure that sandbox 8838278d5d134ca219edb457e7749500a22cc1ccd0695e4995e6fa55293cd05e in task-service has been cleanup successfully" Feb 13 19:23:24.414904 containerd[1472]: time="2025-02-13T19:23:24.414802590Z" level=info msg="TearDown network for sandbox \"b9dc128c66258cebc145fb5927188ffde0a7a2ce1077cde74816d4066ba9c705\" successfully" Feb 13 19:23:24.414904 containerd[1472]: time="2025-02-13T19:23:24.414825870Z" level=info msg="StopPodSandbox for \"b9dc128c66258cebc145fb5927188ffde0a7a2ce1077cde74816d4066ba9c705\" returns successfully" Feb 13 19:23:24.415063 containerd[1472]: time="2025-02-13T19:23:24.414964471Z" level=info msg="TearDown network for sandbox 
\"8838278d5d134ca219edb457e7749500a22cc1ccd0695e4995e6fa55293cd05e\" successfully" Feb 13 19:23:24.415063 containerd[1472]: time="2025-02-13T19:23:24.414978711Z" level=info msg="StopPodSandbox for \"8838278d5d134ca219edb457e7749500a22cc1ccd0695e4995e6fa55293cd05e\" returns successfully" Feb 13 19:23:24.415351 containerd[1472]: time="2025-02-13T19:23:24.415317311Z" level=info msg="StopPodSandbox for \"50fc7b5a1212a7f5c0f58d01289dd6423491fe5b48f97ad072a542ba7bf3a842\"" Feb 13 19:23:24.415440 containerd[1472]: time="2025-02-13T19:23:24.415410431Z" level=info msg="StopPodSandbox for \"fff6122e0e7af1c4d61552d62a901428e4905f43cb9d5e0cb796801cf8c19caa\"" Feb 13 19:23:24.415536 containerd[1472]: time="2025-02-13T19:23:24.415411431Z" level=info msg="TearDown network for sandbox \"50fc7b5a1212a7f5c0f58d01289dd6423491fe5b48f97ad072a542ba7bf3a842\" successfully" Feb 13 19:23:24.415536 containerd[1472]: time="2025-02-13T19:23:24.415532352Z" level=info msg="StopPodSandbox for \"50fc7b5a1212a7f5c0f58d01289dd6423491fe5b48f97ad072a542ba7bf3a842\" returns successfully" Feb 13 19:23:24.415613 containerd[1472]: time="2025-02-13T19:23:24.415507831Z" level=info msg="TearDown network for sandbox \"fff6122e0e7af1c4d61552d62a901428e4905f43cb9d5e0cb796801cf8c19caa\" successfully" Feb 13 19:23:24.415613 containerd[1472]: time="2025-02-13T19:23:24.415566672Z" level=info msg="StopPodSandbox for \"fff6122e0e7af1c4d61552d62a901428e4905f43cb9d5e0cb796801cf8c19caa\" returns successfully" Feb 13 19:23:24.416631 containerd[1472]: time="2025-02-13T19:23:24.416199073Z" level=info msg="StopPodSandbox for \"f08c19b9a372aabce29703f12b709d1b46d9e7576abfa63f9ca341a1ee693428\"" Feb 13 19:23:24.416631 containerd[1472]: time="2025-02-13T19:23:24.416310033Z" level=info msg="TearDown network for sandbox \"f08c19b9a372aabce29703f12b709d1b46d9e7576abfa63f9ca341a1ee693428\" successfully" Feb 13 19:23:24.416631 containerd[1472]: time="2025-02-13T19:23:24.416334113Z" level=info msg="StopPodSandbox for 
\"f08c19b9a372aabce29703f12b709d1b46d9e7576abfa63f9ca341a1ee693428\" returns successfully" Feb 13 19:23:24.417101 containerd[1472]: time="2025-02-13T19:23:24.417063794Z" level=info msg="StopPodSandbox for \"79cc9bd063882a3af3beda81b96a936ae1bf51ac45f2a1def3f15fdb84de30f2\"" Feb 13 19:23:24.417212 containerd[1472]: time="2025-02-13T19:23:24.417182874Z" level=info msg="TearDown network for sandbox \"79cc9bd063882a3af3beda81b96a936ae1bf51ac45f2a1def3f15fdb84de30f2\" successfully" Feb 13 19:23:24.417261 containerd[1472]: time="2025-02-13T19:23:24.417211914Z" level=info msg="StopPodSandbox for \"79cc9bd063882a3af3beda81b96a936ae1bf51ac45f2a1def3f15fdb84de30f2\" returns successfully" Feb 13 19:23:24.417677 kubelet[2544]: I0213 19:23:24.417457 2544 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48811930cf72c6624d608457afdcfffb68cadf11dbe5cbc1d4081b85d2fea001" Feb 13 19:23:24.417944 containerd[1472]: time="2025-02-13T19:23:24.417767955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b44d967f-hpx7w,Uid:90b2170d-a844-4e38-873e-8af01cba6fe0,Namespace:calico-apiserver,Attempt:4,}" Feb 13 19:23:24.418363 containerd[1472]: time="2025-02-13T19:23:24.418335156Z" level=info msg="StopPodSandbox for \"48811930cf72c6624d608457afdcfffb68cadf11dbe5cbc1d4081b85d2fea001\"" Feb 13 19:23:24.418547 containerd[1472]: time="2025-02-13T19:23:24.418528156Z" level=info msg="Ensure that sandbox 48811930cf72c6624d608457afdcfffb68cadf11dbe5cbc1d4081b85d2fea001 in task-service has been cleanup successfully" Feb 13 19:23:24.418748 containerd[1472]: time="2025-02-13T19:23:24.418718677Z" level=info msg="TearDown network for sandbox \"48811930cf72c6624d608457afdcfffb68cadf11dbe5cbc1d4081b85d2fea001\" successfully" Feb 13 19:23:24.418748 containerd[1472]: time="2025-02-13T19:23:24.418739357Z" level=info msg="StopPodSandbox for \"48811930cf72c6624d608457afdcfffb68cadf11dbe5cbc1d4081b85d2fea001\" returns successfully" Feb 13 19:23:24.419124 
containerd[1472]: time="2025-02-13T19:23:24.419099757Z" level=info msg="StopPodSandbox for \"5c69bd3f5517904d9d167931362b4913dc35adc9a2c26a7101f072289741fcfe\"" Feb 13 19:23:24.419231 containerd[1472]: time="2025-02-13T19:23:24.419204317Z" level=info msg="TearDown network for sandbox \"5c69bd3f5517904d9d167931362b4913dc35adc9a2c26a7101f072289741fcfe\" successfully" Feb 13 19:23:24.419261 containerd[1472]: time="2025-02-13T19:23:24.419218637Z" level=info msg="StopPodSandbox for \"5c69bd3f5517904d9d167931362b4913dc35adc9a2c26a7101f072289741fcfe\" returns successfully" Feb 13 19:23:24.419683 containerd[1472]: time="2025-02-13T19:23:24.419632518Z" level=info msg="StopPodSandbox for \"1a65ecae1d3e89163ada04189f06c243b3b5a17116154cd6cbede404e3524543\"" Feb 13 19:23:24.420148 containerd[1472]: time="2025-02-13T19:23:24.419884838Z" level=info msg="TearDown network for sandbox \"1a65ecae1d3e89163ada04189f06c243b3b5a17116154cd6cbede404e3524543\" successfully" Feb 13 19:23:24.420148 containerd[1472]: time="2025-02-13T19:23:24.419902518Z" level=info msg="StopPodSandbox for \"1a65ecae1d3e89163ada04189f06c243b3b5a17116154cd6cbede404e3524543\" returns successfully" Feb 13 19:23:24.420448 containerd[1472]: time="2025-02-13T19:23:24.420403879Z" level=info msg="StopPodSandbox for \"9238bb01d809702cf71989d83c983b9c52956861a8e39fcabd6dd1ed7ee329dd\"" Feb 13 19:23:24.420582 containerd[1472]: time="2025-02-13T19:23:24.420550640Z" level=info msg="TearDown network for sandbox \"9238bb01d809702cf71989d83c983b9c52956861a8e39fcabd6dd1ed7ee329dd\" successfully" Feb 13 19:23:24.420582 containerd[1472]: time="2025-02-13T19:23:24.420578040Z" level=info msg="StopPodSandbox for \"9238bb01d809702cf71989d83c983b9c52956861a8e39fcabd6dd1ed7ee329dd\" returns successfully" Feb 13 19:23:24.420789 kubelet[2544]: E0213 19:23:24.420762 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Feb 13 19:23:24.421201 containerd[1472]: time="2025-02-13T19:23:24.421166721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-8shn2,Uid:11ebe699-3307-4e28-ac6a-e555af8a982c,Namespace:kube-system,Attempt:4,}" Feb 13 19:23:24.421622 kubelet[2544]: I0213 19:23:24.421415 2544 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6016e85c998124b83ac646fb6a3dabbbc1920d9fd029722e7215ebdffb13a5be" Feb 13 19:23:24.422167 containerd[1472]: time="2025-02-13T19:23:24.422130482Z" level=info msg="StopPodSandbox for \"6016e85c998124b83ac646fb6a3dabbbc1920d9fd029722e7215ebdffb13a5be\"" Feb 13 19:23:24.422533 containerd[1472]: time="2025-02-13T19:23:24.422486123Z" level=info msg="Ensure that sandbox 6016e85c998124b83ac646fb6a3dabbbc1920d9fd029722e7215ebdffb13a5be in task-service has been cleanup successfully" Feb 13 19:23:24.422729 containerd[1472]: time="2025-02-13T19:23:24.422673363Z" level=info msg="TearDown network for sandbox \"6016e85c998124b83ac646fb6a3dabbbc1920d9fd029722e7215ebdffb13a5be\" successfully" Feb 13 19:23:24.422729 containerd[1472]: time="2025-02-13T19:23:24.422690843Z" level=info msg="StopPodSandbox for \"6016e85c998124b83ac646fb6a3dabbbc1920d9fd029722e7215ebdffb13a5be\" returns successfully" Feb 13 19:23:24.423835 containerd[1472]: time="2025-02-13T19:23:24.422983963Z" level=info msg="StopPodSandbox for \"7c2a32b901b31e27ae1a20a1fe61ade17e69b2e6fa8b5f8f14ee716821af402d\"" Feb 13 19:23:24.423835 containerd[1472]: time="2025-02-13T19:23:24.423073044Z" level=info msg="TearDown network for sandbox \"7c2a32b901b31e27ae1a20a1fe61ade17e69b2e6fa8b5f8f14ee716821af402d\" successfully" Feb 13 19:23:24.423835 containerd[1472]: time="2025-02-13T19:23:24.423083644Z" level=info msg="StopPodSandbox for \"7c2a32b901b31e27ae1a20a1fe61ade17e69b2e6fa8b5f8f14ee716821af402d\" returns successfully" Feb 13 19:23:24.423835 containerd[1472]: time="2025-02-13T19:23:24.423469444Z" level=info msg="StopPodSandbox for 
\"9c587b1c1873a4da0b2c59ad781386932a84f435d9277e390c1c647867dcf9c6\"" Feb 13 19:23:24.423835 containerd[1472]: time="2025-02-13T19:23:24.423545524Z" level=info msg="TearDown network for sandbox \"9c587b1c1873a4da0b2c59ad781386932a84f435d9277e390c1c647867dcf9c6\" successfully" Feb 13 19:23:24.423835 containerd[1472]: time="2025-02-13T19:23:24.423554644Z" level=info msg="StopPodSandbox for \"9c587b1c1873a4da0b2c59ad781386932a84f435d9277e390c1c647867dcf9c6\" returns successfully" Feb 13 19:23:24.424004 containerd[1472]: time="2025-02-13T19:23:24.423946005Z" level=info msg="StopPodSandbox for \"dd11beb574d3fbe9971dfab17c87cd30120129847d48f855e440b73909b823f2\"" Feb 13 19:23:24.424199 containerd[1472]: time="2025-02-13T19:23:24.424170085Z" level=info msg="TearDown network for sandbox \"dd11beb574d3fbe9971dfab17c87cd30120129847d48f855e440b73909b823f2\" successfully" Feb 13 19:23:24.424199 containerd[1472]: time="2025-02-13T19:23:24.424195125Z" level=info msg="StopPodSandbox for \"dd11beb574d3fbe9971dfab17c87cd30120129847d48f855e440b73909b823f2\" returns successfully" Feb 13 19:23:24.425150 containerd[1472]: time="2025-02-13T19:23:24.424707726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58dfd6696-g69sl,Uid:bd7b8e08-eba9-4ff4-a84a-26b9405284a6,Namespace:calico-system,Attempt:4,}" Feb 13 19:23:24.425240 kubelet[2544]: I0213 19:23:24.424824 2544 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6250d08a5ceb4ef45dd21cda6947799ccd29ddbdf4b402d69923e57de5f0de1" Feb 13 19:23:24.432041 containerd[1472]: time="2025-02-13T19:23:24.431966298Z" level=info msg="StopPodSandbox for \"d6250d08a5ceb4ef45dd21cda6947799ccd29ddbdf4b402d69923e57de5f0de1\"" Feb 13 19:23:24.432461 containerd[1472]: time="2025-02-13T19:23:24.432437419Z" level=info msg="Ensure that sandbox d6250d08a5ceb4ef45dd21cda6947799ccd29ddbdf4b402d69923e57de5f0de1 in task-service has been cleanup successfully" Feb 13 19:23:24.432757 
containerd[1472]: time="2025-02-13T19:23:24.432639259Z" level=info msg="TearDown network for sandbox \"d6250d08a5ceb4ef45dd21cda6947799ccd29ddbdf4b402d69923e57de5f0de1\" successfully" Feb 13 19:23:24.432757 containerd[1472]: time="2025-02-13T19:23:24.432670139Z" level=info msg="StopPodSandbox for \"d6250d08a5ceb4ef45dd21cda6947799ccd29ddbdf4b402d69923e57de5f0de1\" returns successfully" Feb 13 19:23:24.485041 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e9ddff8643dc18c5cc1376aa87d108844c569928eb57728a5442cfa00994edcf-shm.mount: Deactivated successfully. Feb 13 19:23:24.485133 systemd[1]: run-netns-cni\x2dfaff9704\x2d30c9\x2d733b\x2dc82a\x2d20c799bd4cc9.mount: Deactivated successfully. Feb 13 19:23:24.485181 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8838278d5d134ca219edb457e7749500a22cc1ccd0695e4995e6fa55293cd05e-shm.mount: Deactivated successfully. Feb 13 19:23:24.485238 systemd[1]: run-netns-cni\x2d5d1e6cd6\x2d9dee\x2de909\x2d2039\x2da944c79ed374.mount: Deactivated successfully. Feb 13 19:23:24.485293 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6016e85c998124b83ac646fb6a3dabbbc1920d9fd029722e7215ebdffb13a5be-shm.mount: Deactivated successfully. Feb 13 19:23:24.485349 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1382127570.mount: Deactivated successfully. 
Feb 13 19:23:24.504200 containerd[1472]: time="2025-02-13T19:23:24.503502892Z" level=info msg="StopPodSandbox for \"d89d3553643149b3994db4a0653fdbd12da912d00bb5ab4315973581b6b660b5\"" Feb 13 19:23:24.504200 containerd[1472]: time="2025-02-13T19:23:24.503707093Z" level=info msg="TearDown network for sandbox \"d89d3553643149b3994db4a0653fdbd12da912d00bb5ab4315973581b6b660b5\" successfully" Feb 13 19:23:24.504200 containerd[1472]: time="2025-02-13T19:23:24.503733613Z" level=info msg="StopPodSandbox for \"d89d3553643149b3994db4a0653fdbd12da912d00bb5ab4315973581b6b660b5\" returns successfully" Feb 13 19:23:24.504430 containerd[1472]: time="2025-02-13T19:23:24.504250133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dtfv4,Uid:278e43f1-bd8c-4a43-8396-436ddaca249b,Namespace:calico-system,Attempt:4,}" Feb 13 19:23:24.505085 containerd[1472]: time="2025-02-13T19:23:24.504871454Z" level=info msg="StopPodSandbox for \"096da91385a3c8839c09bc5456755139dacce2271f80a0a43253c557ac7e758d\"" Feb 13 19:23:24.505085 containerd[1472]: time="2025-02-13T19:23:24.504977095Z" level=info msg="TearDown network for sandbox \"096da91385a3c8839c09bc5456755139dacce2271f80a0a43253c557ac7e758d\" successfully" Feb 13 19:23:24.505085 containerd[1472]: time="2025-02-13T19:23:24.504989135Z" level=info msg="StopPodSandbox for \"096da91385a3c8839c09bc5456755139dacce2271f80a0a43253c557ac7e758d\" returns successfully" Feb 13 19:23:24.505816 containerd[1472]: time="2025-02-13T19:23:24.505639416Z" level=info msg="StopPodSandbox for \"7a00c42bb521f7bef7a32da102cdf4d4a156bae2448cfacf8ccc9caa13e70634\"" Feb 13 19:23:24.505816 containerd[1472]: time="2025-02-13T19:23:24.505737976Z" level=info msg="TearDown network for sandbox \"7a00c42bb521f7bef7a32da102cdf4d4a156bae2448cfacf8ccc9caa13e70634\" successfully" Feb 13 19:23:24.505816 containerd[1472]: time="2025-02-13T19:23:24.505748296Z" level=info msg="StopPodSandbox for 
\"7a00c42bb521f7bef7a32da102cdf4d4a156bae2448cfacf8ccc9caa13e70634\" returns successfully" Feb 13 19:23:24.506279 kubelet[2544]: E0213 19:23:24.506236 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:23:24.506784 containerd[1472]: time="2025-02-13T19:23:24.506543377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-slhww,Uid:beb760d9-f48b-4afe-876e-eb78778e0f0b,Namespace:kube-system,Attempt:4,}" Feb 13 19:23:24.520324 kubelet[2544]: I0213 19:23:24.520293 2544 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9ddff8643dc18c5cc1376aa87d108844c569928eb57728a5442cfa00994edcf" Feb 13 19:23:24.522453 containerd[1472]: time="2025-02-13T19:23:24.522411042Z" level=info msg="StopPodSandbox for \"e9ddff8643dc18c5cc1376aa87d108844c569928eb57728a5442cfa00994edcf\"" Feb 13 19:23:24.522726 containerd[1472]: time="2025-02-13T19:23:24.522700163Z" level=info msg="Ensure that sandbox e9ddff8643dc18c5cc1376aa87d108844c569928eb57728a5442cfa00994edcf in task-service has been cleanup successfully" Feb 13 19:23:24.523062 containerd[1472]: time="2025-02-13T19:23:24.522943123Z" level=info msg="TearDown network for sandbox \"e9ddff8643dc18c5cc1376aa87d108844c569928eb57728a5442cfa00994edcf\" successfully" Feb 13 19:23:24.523062 containerd[1472]: time="2025-02-13T19:23:24.522964203Z" level=info msg="StopPodSandbox for \"e9ddff8643dc18c5cc1376aa87d108844c569928eb57728a5442cfa00994edcf\" returns successfully" Feb 13 19:23:24.524348 containerd[1472]: time="2025-02-13T19:23:24.524307806Z" level=info msg="StopPodSandbox for \"de7676c8254aaae60409033dd601bfd2dfc40b59d6b06333583a5604aef4c02d\"" Feb 13 19:23:24.524414 containerd[1472]: time="2025-02-13T19:23:24.524402646Z" level=info msg="TearDown network for sandbox \"de7676c8254aaae60409033dd601bfd2dfc40b59d6b06333583a5604aef4c02d\" successfully" Feb 
13 19:23:24.524443 containerd[1472]: time="2025-02-13T19:23:24.524414486Z" level=info msg="StopPodSandbox for \"de7676c8254aaae60409033dd601bfd2dfc40b59d6b06333583a5604aef4c02d\" returns successfully" Feb 13 19:23:24.524788 systemd[1]: run-netns-cni\x2d12feb4dd\x2d9b6a\x2d7b61\x2dd7e0\x2d0553b8fea264.mount: Deactivated successfully. Feb 13 19:23:24.524929 containerd[1472]: time="2025-02-13T19:23:24.524888646Z" level=info msg="StopPodSandbox for \"a4926452b9909d806af29b6189b194ae88d64402549efa6a8aa8157676ffeab6\"" Feb 13 19:23:24.525045 containerd[1472]: time="2025-02-13T19:23:24.525021327Z" level=info msg="TearDown network for sandbox \"a4926452b9909d806af29b6189b194ae88d64402549efa6a8aa8157676ffeab6\" successfully" Feb 13 19:23:24.525045 containerd[1472]: time="2025-02-13T19:23:24.525038727Z" level=info msg="StopPodSandbox for \"a4926452b9909d806af29b6189b194ae88d64402549efa6a8aa8157676ffeab6\" returns successfully" Feb 13 19:23:24.525306 containerd[1472]: time="2025-02-13T19:23:24.525269287Z" level=info msg="StopPodSandbox for \"9886a774b5ddfc176ce9f280222642978f1f162c12b106b23d0358bf9b5a6fe4\"" Feb 13 19:23:24.525368 containerd[1472]: time="2025-02-13T19:23:24.525349567Z" level=info msg="TearDown network for sandbox \"9886a774b5ddfc176ce9f280222642978f1f162c12b106b23d0358bf9b5a6fe4\" successfully" Feb 13 19:23:24.525368 containerd[1472]: time="2025-02-13T19:23:24.525364687Z" level=info msg="StopPodSandbox for \"9886a774b5ddfc176ce9f280222642978f1f162c12b106b23d0358bf9b5a6fe4\" returns successfully" Feb 13 19:23:24.525835 containerd[1472]: time="2025-02-13T19:23:24.525803768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b44d967f-p6w96,Uid:400c37f6-81a9-403c-9b2f-cc1d18ee97aa,Namespace:calico-apiserver,Attempt:4,}" Feb 13 19:23:24.560109 containerd[1472]: time="2025-02-13T19:23:24.560056903Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" 
Feb 13 19:23:24.570546 containerd[1472]: time="2025-02-13T19:23:24.570487479Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Feb 13 19:23:24.581421 containerd[1472]: time="2025-02-13T19:23:24.581382817Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:23:24.619790 containerd[1472]: time="2025-02-13T19:23:24.619727438Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:23:24.627229 containerd[1472]: time="2025-02-13T19:23:24.627164170Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 4.300439587s" Feb 13 19:23:24.627229 containerd[1472]: time="2025-02-13T19:23:24.627204530Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Feb 13 19:23:24.637330 containerd[1472]: time="2025-02-13T19:23:24.636087104Z" level=info msg="CreateContainer within sandbox \"dfc66efa2c85e7615af6c450b93d9f5537de9116d158af82a6df5fb90b91d477\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 19:23:24.680034 containerd[1472]: time="2025-02-13T19:23:24.679921294Z" level=info msg="CreateContainer within sandbox \"dfc66efa2c85e7615af6c450b93d9f5537de9116d158af82a6df5fb90b91d477\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id 
\"8007a4e7b224ab8683a11f61eb8654c1d3f67123a425146b9c9db8dbca3681a4\"" Feb 13 19:23:24.681063 containerd[1472]: time="2025-02-13T19:23:24.681014056Z" level=info msg="StartContainer for \"8007a4e7b224ab8683a11f61eb8654c1d3f67123a425146b9c9db8dbca3681a4\"" Feb 13 19:23:24.682864 containerd[1472]: time="2025-02-13T19:23:24.682764899Z" level=error msg="Failed to destroy network for sandbox \"25cadf9b39de722cdb51a441f232474c54a2727e8241363fb7678024b0cc2939\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:24.683390 containerd[1472]: time="2025-02-13T19:23:24.683315860Z" level=error msg="encountered an error cleaning up failed sandbox \"25cadf9b39de722cdb51a441f232474c54a2727e8241363fb7678024b0cc2939\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:24.683500 containerd[1472]: time="2025-02-13T19:23:24.683417940Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-8shn2,Uid:11ebe699-3307-4e28-ac6a-e555af8a982c,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"25cadf9b39de722cdb51a441f232474c54a2727e8241363fb7678024b0cc2939\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:24.683605 kubelet[2544]: E0213 19:23:24.683571 2544 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25cadf9b39de722cdb51a441f232474c54a2727e8241363fb7678024b0cc2939\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:24.683674 kubelet[2544]: E0213 19:23:24.683631 2544 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25cadf9b39de722cdb51a441f232474c54a2727e8241363fb7678024b0cc2939\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-8shn2" Feb 13 19:23:24.683674 kubelet[2544]: E0213 19:23:24.683653 2544 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25cadf9b39de722cdb51a441f232474c54a2727e8241363fb7678024b0cc2939\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-8shn2" Feb 13 19:23:24.683772 kubelet[2544]: E0213 19:23:24.683695 2544 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-8shn2_kube-system(11ebe699-3307-4e28-ac6a-e555af8a982c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-8shn2_kube-system(11ebe699-3307-4e28-ac6a-e555af8a982c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"25cadf9b39de722cdb51a441f232474c54a2727e8241363fb7678024b0cc2939\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-8shn2" podUID="11ebe699-3307-4e28-ac6a-e555af8a982c" Feb 13 19:23:24.705614 containerd[1472]: time="2025-02-13T19:23:24.705551335Z" level=error msg="Failed to destroy network for sandbox 
\"7e6feab27645ffd61b92d460846c135a9d5977d67d273cbde0a2439d3c194c38\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:24.705920 containerd[1472]: time="2025-02-13T19:23:24.705890336Z" level=error msg="encountered an error cleaning up failed sandbox \"7e6feab27645ffd61b92d460846c135a9d5977d67d273cbde0a2439d3c194c38\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:24.705973 containerd[1472]: time="2025-02-13T19:23:24.705952016Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58dfd6696-g69sl,Uid:bd7b8e08-eba9-4ff4-a84a-26b9405284a6,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"7e6feab27645ffd61b92d460846c135a9d5977d67d273cbde0a2439d3c194c38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:24.706216 kubelet[2544]: E0213 19:23:24.706176 2544 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e6feab27645ffd61b92d460846c135a9d5977d67d273cbde0a2439d3c194c38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:24.706291 kubelet[2544]: E0213 19:23:24.706238 2544 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e6feab27645ffd61b92d460846c135a9d5977d67d273cbde0a2439d3c194c38\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-58dfd6696-g69sl" Feb 13 19:23:24.706291 kubelet[2544]: E0213 19:23:24.706266 2544 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e6feab27645ffd61b92d460846c135a9d5977d67d273cbde0a2439d3c194c38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-58dfd6696-g69sl" Feb 13 19:23:24.706380 kubelet[2544]: E0213 19:23:24.706317 2544 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-58dfd6696-g69sl_calico-system(bd7b8e08-eba9-4ff4-a84a-26b9405284a6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-58dfd6696-g69sl_calico-system(bd7b8e08-eba9-4ff4-a84a-26b9405284a6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7e6feab27645ffd61b92d460846c135a9d5977d67d273cbde0a2439d3c194c38\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-58dfd6696-g69sl" podUID="bd7b8e08-eba9-4ff4-a84a-26b9405284a6" Feb 13 19:23:24.716747 containerd[1472]: time="2025-02-13T19:23:24.716688953Z" level=error msg="Failed to destroy network for sandbox \"d47414dc28e478d0c0959c2bb8276a7437c645c4dbfafff1a1efab07a3da7b2f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:24.717098 
containerd[1472]: time="2025-02-13T19:23:24.717044714Z" level=error msg="encountered an error cleaning up failed sandbox \"d47414dc28e478d0c0959c2bb8276a7437c645c4dbfafff1a1efab07a3da7b2f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:24.717181 containerd[1472]: time="2025-02-13T19:23:24.717117354Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dtfv4,Uid:278e43f1-bd8c-4a43-8396-436ddaca249b,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"d47414dc28e478d0c0959c2bb8276a7437c645c4dbfafff1a1efab07a3da7b2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:24.717450 kubelet[2544]: E0213 19:23:24.717376 2544 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d47414dc28e478d0c0959c2bb8276a7437c645c4dbfafff1a1efab07a3da7b2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:24.717450 kubelet[2544]: E0213 19:23:24.717430 2544 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d47414dc28e478d0c0959c2bb8276a7437c645c4dbfafff1a1efab07a3da7b2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dtfv4" Feb 13 19:23:24.717604 kubelet[2544]: E0213 19:23:24.717455 2544 kuberuntime_manager.go:1168] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d47414dc28e478d0c0959c2bb8276a7437c645c4dbfafff1a1efab07a3da7b2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dtfv4" Feb 13 19:23:24.717604 kubelet[2544]: E0213 19:23:24.717492 2544 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dtfv4_calico-system(278e43f1-bd8c-4a43-8396-436ddaca249b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dtfv4_calico-system(278e43f1-bd8c-4a43-8396-436ddaca249b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d47414dc28e478d0c0959c2bb8276a7437c645c4dbfafff1a1efab07a3da7b2f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dtfv4" podUID="278e43f1-bd8c-4a43-8396-436ddaca249b" Feb 13 19:23:24.720038 containerd[1472]: time="2025-02-13T19:23:24.719999199Z" level=error msg="Failed to destroy network for sandbox \"cc7af871476d637ce9bba215c4499eefedcfe396348ddcced2e24cbaf8c8b503\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:24.721011 containerd[1472]: time="2025-02-13T19:23:24.720973120Z" level=error msg="encountered an error cleaning up failed sandbox \"cc7af871476d637ce9bba215c4499eefedcfe396348ddcced2e24cbaf8c8b503\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Feb 13 19:23:24.721590 containerd[1472]: time="2025-02-13T19:23:24.721555521Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-slhww,Uid:beb760d9-f48b-4afe-876e-eb78778e0f0b,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"cc7af871476d637ce9bba215c4499eefedcfe396348ddcced2e24cbaf8c8b503\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:24.721826 kubelet[2544]: E0213 19:23:24.721786 2544 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc7af871476d637ce9bba215c4499eefedcfe396348ddcced2e24cbaf8c8b503\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:24.721883 kubelet[2544]: E0213 19:23:24.721839 2544 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc7af871476d637ce9bba215c4499eefedcfe396348ddcced2e24cbaf8c8b503\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-slhww" Feb 13 19:23:24.721883 kubelet[2544]: E0213 19:23:24.721859 2544 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc7af871476d637ce9bba215c4499eefedcfe396348ddcced2e24cbaf8c8b503\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-slhww" Feb 
13 19:23:24.721980 kubelet[2544]: E0213 19:23:24.721931 2544 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-slhww_kube-system(beb760d9-f48b-4afe-876e-eb78778e0f0b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-slhww_kube-system(beb760d9-f48b-4afe-876e-eb78778e0f0b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cc7af871476d637ce9bba215c4499eefedcfe396348ddcced2e24cbaf8c8b503\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-slhww" podUID="beb760d9-f48b-4afe-876e-eb78778e0f0b" Feb 13 19:23:24.722175 containerd[1472]: time="2025-02-13T19:23:24.722146482Z" level=error msg="Failed to destroy network for sandbox \"8eac2bdb041882bb940afbf935efc0cca1713f5e69cccf9d0bf977711d5c0c18\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:24.722959 containerd[1472]: time="2025-02-13T19:23:24.722932163Z" level=error msg="encountered an error cleaning up failed sandbox \"8eac2bdb041882bb940afbf935efc0cca1713f5e69cccf9d0bf977711d5c0c18\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:24.723098 containerd[1472]: time="2025-02-13T19:23:24.723061844Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b44d967f-hpx7w,Uid:90b2170d-a844-4e38-873e-8af01cba6fe0,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox 
\"8eac2bdb041882bb940afbf935efc0cca1713f5e69cccf9d0bf977711d5c0c18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:24.723342 kubelet[2544]: E0213 19:23:24.723314 2544 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8eac2bdb041882bb940afbf935efc0cca1713f5e69cccf9d0bf977711d5c0c18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:24.723424 kubelet[2544]: E0213 19:23:24.723352 2544 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8eac2bdb041882bb940afbf935efc0cca1713f5e69cccf9d0bf977711d5c0c18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b44d967f-hpx7w" Feb 13 19:23:24.723424 kubelet[2544]: E0213 19:23:24.723369 2544 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8eac2bdb041882bb940afbf935efc0cca1713f5e69cccf9d0bf977711d5c0c18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b44d967f-hpx7w" Feb 13 19:23:24.723424 kubelet[2544]: E0213 19:23:24.723402 2544 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5b44d967f-hpx7w_calico-apiserver(90b2170d-a844-4e38-873e-8af01cba6fe0)\" with CreatePodSandboxError: \"Failed to create sandbox 
for pod \\\"calico-apiserver-5b44d967f-hpx7w_calico-apiserver(90b2170d-a844-4e38-873e-8af01cba6fe0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8eac2bdb041882bb940afbf935efc0cca1713f5e69cccf9d0bf977711d5c0c18\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b44d967f-hpx7w" podUID="90b2170d-a844-4e38-873e-8af01cba6fe0" Feb 13 19:23:24.724038 containerd[1472]: time="2025-02-13T19:23:24.724005485Z" level=error msg="Failed to destroy network for sandbox \"3f96dcbafa5ad411894d41a029bfeebfc46605011bf1ff99a6c49a5b1cc57236\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:24.724307 containerd[1472]: time="2025-02-13T19:23:24.724263205Z" level=error msg="encountered an error cleaning up failed sandbox \"3f96dcbafa5ad411894d41a029bfeebfc46605011bf1ff99a6c49a5b1cc57236\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:24.724354 containerd[1472]: time="2025-02-13T19:23:24.724322766Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b44d967f-p6w96,Uid:400c37f6-81a9-403c-9b2f-cc1d18ee97aa,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"3f96dcbafa5ad411894d41a029bfeebfc46605011bf1ff99a6c49a5b1cc57236\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:24.724609 kubelet[2544]: E0213 19:23:24.724512 2544 log.go:32] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f96dcbafa5ad411894d41a029bfeebfc46605011bf1ff99a6c49a5b1cc57236\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:23:24.724656 kubelet[2544]: E0213 19:23:24.724585 2544 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f96dcbafa5ad411894d41a029bfeebfc46605011bf1ff99a6c49a5b1cc57236\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b44d967f-p6w96" Feb 13 19:23:24.724656 kubelet[2544]: E0213 19:23:24.724638 2544 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f96dcbafa5ad411894d41a029bfeebfc46605011bf1ff99a6c49a5b1cc57236\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b44d967f-p6w96" Feb 13 19:23:24.725199 kubelet[2544]: E0213 19:23:24.724676 2544 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5b44d967f-p6w96_calico-apiserver(400c37f6-81a9-403c-9b2f-cc1d18ee97aa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5b44d967f-p6w96_calico-apiserver(400c37f6-81a9-403c-9b2f-cc1d18ee97aa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3f96dcbafa5ad411894d41a029bfeebfc46605011bf1ff99a6c49a5b1cc57236\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b44d967f-p6w96" podUID="400c37f6-81a9-403c-9b2f-cc1d18ee97aa" Feb 13 19:23:24.747768 systemd[1]: Started cri-containerd-8007a4e7b224ab8683a11f61eb8654c1d3f67123a425146b9c9db8dbca3681a4.scope - libcontainer container 8007a4e7b224ab8683a11f61eb8654c1d3f67123a425146b9c9db8dbca3681a4. Feb 13 19:23:24.771913 containerd[1472]: time="2025-02-13T19:23:24.771860122Z" level=info msg="StartContainer for \"8007a4e7b224ab8683a11f61eb8654c1d3f67123a425146b9c9db8dbca3681a4\" returns successfully" Feb 13 19:23:24.971151 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 13 19:23:24.971283 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Feb 13 19:23:25.487650 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7e6feab27645ffd61b92d460846c135a9d5977d67d273cbde0a2439d3c194c38-shm.mount: Deactivated successfully. Feb 13 19:23:25.487743 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-25cadf9b39de722cdb51a441f232474c54a2727e8241363fb7678024b0cc2939-shm.mount: Deactivated successfully. 
Feb 13 19:23:25.530733 kubelet[2544]: I0213 19:23:25.530432 2544 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8eac2bdb041882bb940afbf935efc0cca1713f5e69cccf9d0bf977711d5c0c18" Feb 13 19:23:25.531646 containerd[1472]: time="2025-02-13T19:23:25.531340084Z" level=info msg="StopPodSandbox for \"8eac2bdb041882bb940afbf935efc0cca1713f5e69cccf9d0bf977711d5c0c18\"" Feb 13 19:23:25.531646 containerd[1472]: time="2025-02-13T19:23:25.531501324Z" level=info msg="Ensure that sandbox 8eac2bdb041882bb940afbf935efc0cca1713f5e69cccf9d0bf977711d5c0c18 in task-service has been cleanup successfully" Feb 13 19:23:25.532330 containerd[1472]: time="2025-02-13T19:23:25.531914604Z" level=info msg="TearDown network for sandbox \"8eac2bdb041882bb940afbf935efc0cca1713f5e69cccf9d0bf977711d5c0c18\" successfully" Feb 13 19:23:25.532330 containerd[1472]: time="2025-02-13T19:23:25.531931124Z" level=info msg="StopPodSandbox for \"8eac2bdb041882bb940afbf935efc0cca1713f5e69cccf9d0bf977711d5c0c18\" returns successfully" Feb 13 19:23:25.533422 systemd[1]: run-netns-cni\x2d0f31fed8\x2d0718\x2d16c7\x2dd61b\x2dba3e33edda47.mount: Deactivated successfully. 
Feb 13 19:23:25.534163 containerd[1472]: time="2025-02-13T19:23:25.534136488Z" level=info msg="StopPodSandbox for \"8838278d5d134ca219edb457e7749500a22cc1ccd0695e4995e6fa55293cd05e\"" Feb 13 19:23:25.534312 containerd[1472]: time="2025-02-13T19:23:25.534222008Z" level=info msg="TearDown network for sandbox \"8838278d5d134ca219edb457e7749500a22cc1ccd0695e4995e6fa55293cd05e\" successfully" Feb 13 19:23:25.534312 containerd[1472]: time="2025-02-13T19:23:25.534235488Z" level=info msg="StopPodSandbox for \"8838278d5d134ca219edb457e7749500a22cc1ccd0695e4995e6fa55293cd05e\" returns successfully" Feb 13 19:23:25.534806 containerd[1472]: time="2025-02-13T19:23:25.534574808Z" level=info msg="StopPodSandbox for \"fff6122e0e7af1c4d61552d62a901428e4905f43cb9d5e0cb796801cf8c19caa\"" Feb 13 19:23:25.534806 containerd[1472]: time="2025-02-13T19:23:25.534696289Z" level=info msg="TearDown network for sandbox \"fff6122e0e7af1c4d61552d62a901428e4905f43cb9d5e0cb796801cf8c19caa\" successfully" Feb 13 19:23:25.534806 containerd[1472]: time="2025-02-13T19:23:25.534710209Z" level=info msg="StopPodSandbox for \"fff6122e0e7af1c4d61552d62a901428e4905f43cb9d5e0cb796801cf8c19caa\" returns successfully" Feb 13 19:23:25.535345 containerd[1472]: time="2025-02-13T19:23:25.535086809Z" level=info msg="StopPodSandbox for \"f08c19b9a372aabce29703f12b709d1b46d9e7576abfa63f9ca341a1ee693428\"" Feb 13 19:23:25.535345 containerd[1472]: time="2025-02-13T19:23:25.535167649Z" level=info msg="TearDown network for sandbox \"f08c19b9a372aabce29703f12b709d1b46d9e7576abfa63f9ca341a1ee693428\" successfully" Feb 13 19:23:25.535345 containerd[1472]: time="2025-02-13T19:23:25.535178529Z" level=info msg="StopPodSandbox for \"f08c19b9a372aabce29703f12b709d1b46d9e7576abfa63f9ca341a1ee693428\" returns successfully" Feb 13 19:23:25.535836 containerd[1472]: time="2025-02-13T19:23:25.535813730Z" level=info msg="StopPodSandbox for \"79cc9bd063882a3af3beda81b96a936ae1bf51ac45f2a1def3f15fdb84de30f2\"" Feb 13 19:23:25.535910 
containerd[1472]: time="2025-02-13T19:23:25.535884770Z" level=info msg="TearDown network for sandbox \"79cc9bd063882a3af3beda81b96a936ae1bf51ac45f2a1def3f15fdb84de30f2\" successfully" Feb 13 19:23:25.535910 containerd[1472]: time="2025-02-13T19:23:25.535898450Z" level=info msg="StopPodSandbox for \"79cc9bd063882a3af3beda81b96a936ae1bf51ac45f2a1def3f15fdb84de30f2\" returns successfully" Feb 13 19:23:25.536487 containerd[1472]: time="2025-02-13T19:23:25.536457571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b44d967f-hpx7w,Uid:90b2170d-a844-4e38-873e-8af01cba6fe0,Namespace:calico-apiserver,Attempt:5,}" Feb 13 19:23:25.536784 kubelet[2544]: I0213 19:23:25.536763 2544 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f96dcbafa5ad411894d41a029bfeebfc46605011bf1ff99a6c49a5b1cc57236" Feb 13 19:23:25.537577 containerd[1472]: time="2025-02-13T19:23:25.537515213Z" level=info msg="StopPodSandbox for \"3f96dcbafa5ad411894d41a029bfeebfc46605011bf1ff99a6c49a5b1cc57236\"" Feb 13 19:23:25.537739 containerd[1472]: time="2025-02-13T19:23:25.537719613Z" level=info msg="Ensure that sandbox 3f96dcbafa5ad411894d41a029bfeebfc46605011bf1ff99a6c49a5b1cc57236 in task-service has been cleanup successfully" Feb 13 19:23:25.538196 containerd[1472]: time="2025-02-13T19:23:25.538131414Z" level=info msg="TearDown network for sandbox \"3f96dcbafa5ad411894d41a029bfeebfc46605011bf1ff99a6c49a5b1cc57236\" successfully" Feb 13 19:23:25.538249 containerd[1472]: time="2025-02-13T19:23:25.538196614Z" level=info msg="StopPodSandbox for \"3f96dcbafa5ad411894d41a029bfeebfc46605011bf1ff99a6c49a5b1cc57236\" returns successfully" Feb 13 19:23:25.539823 systemd[1]: run-netns-cni\x2ddcaee971\x2dfeed\x2dbc7f\x2dc5e7\x2d590ebdff69fd.mount: Deactivated successfully. 
Feb 13 19:23:25.540048 containerd[1472]: time="2025-02-13T19:23:25.540025417Z" level=info msg="StopPodSandbox for \"e9ddff8643dc18c5cc1376aa87d108844c569928eb57728a5442cfa00994edcf\"" Feb 13 19:23:25.540119 containerd[1472]: time="2025-02-13T19:23:25.540105137Z" level=info msg="TearDown network for sandbox \"e9ddff8643dc18c5cc1376aa87d108844c569928eb57728a5442cfa00994edcf\" successfully" Feb 13 19:23:25.540150 containerd[1472]: time="2025-02-13T19:23:25.540118977Z" level=info msg="StopPodSandbox for \"e9ddff8643dc18c5cc1376aa87d108844c569928eb57728a5442cfa00994edcf\" returns successfully" Feb 13 19:23:25.540806 containerd[1472]: time="2025-02-13T19:23:25.540776378Z" level=info msg="StopPodSandbox for \"de7676c8254aaae60409033dd601bfd2dfc40b59d6b06333583a5604aef4c02d\"" Feb 13 19:23:25.540897 containerd[1472]: time="2025-02-13T19:23:25.540863738Z" level=info msg="TearDown network for sandbox \"de7676c8254aaae60409033dd601bfd2dfc40b59d6b06333583a5604aef4c02d\" successfully" Feb 13 19:23:25.540897 containerd[1472]: time="2025-02-13T19:23:25.540879778Z" level=info msg="StopPodSandbox for \"de7676c8254aaae60409033dd601bfd2dfc40b59d6b06333583a5604aef4c02d\" returns successfully" Feb 13 19:23:25.541238 containerd[1472]: time="2025-02-13T19:23:25.541218618Z" level=info msg="StopPodSandbox for \"a4926452b9909d806af29b6189b194ae88d64402549efa6a8aa8157676ffeab6\"" Feb 13 19:23:25.541321 containerd[1472]: time="2025-02-13T19:23:25.541305779Z" level=info msg="TearDown network for sandbox \"a4926452b9909d806af29b6189b194ae88d64402549efa6a8aa8157676ffeab6\" successfully" Feb 13 19:23:25.541321 containerd[1472]: time="2025-02-13T19:23:25.541320499Z" level=info msg="StopPodSandbox for \"a4926452b9909d806af29b6189b194ae88d64402549efa6a8aa8157676ffeab6\" returns successfully" Feb 13 19:23:25.541690 containerd[1472]: time="2025-02-13T19:23:25.541588699Z" level=info msg="StopPodSandbox for \"9886a774b5ddfc176ce9f280222642978f1f162c12b106b23d0358bf9b5a6fe4\"" Feb 13 19:23:25.541690 
containerd[1472]: time="2025-02-13T19:23:25.541690059Z" level=info msg="TearDown network for sandbox \"9886a774b5ddfc176ce9f280222642978f1f162c12b106b23d0358bf9b5a6fe4\" successfully" Feb 13 19:23:25.541781 containerd[1472]: time="2025-02-13T19:23:25.541700139Z" level=info msg="StopPodSandbox for \"9886a774b5ddfc176ce9f280222642978f1f162c12b106b23d0358bf9b5a6fe4\" returns successfully" Feb 13 19:23:25.541929 kubelet[2544]: I0213 19:23:25.541908 2544 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e6feab27645ffd61b92d460846c135a9d5977d67d273cbde0a2439d3c194c38" Feb 13 19:23:25.542655 containerd[1472]: time="2025-02-13T19:23:25.542631501Z" level=info msg="StopPodSandbox for \"7e6feab27645ffd61b92d460846c135a9d5977d67d273cbde0a2439d3c194c38\"" Feb 13 19:23:25.542805 containerd[1472]: time="2025-02-13T19:23:25.542789981Z" level=info msg="Ensure that sandbox 7e6feab27645ffd61b92d460846c135a9d5977d67d273cbde0a2439d3c194c38 in task-service has been cleanup successfully" Feb 13 19:23:25.542842 containerd[1472]: time="2025-02-13T19:23:25.542829461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b44d967f-p6w96,Uid:400c37f6-81a9-403c-9b2f-cc1d18ee97aa,Namespace:calico-apiserver,Attempt:5,}" Feb 13 19:23:25.544722 systemd[1]: run-netns-cni\x2d19c497a1\x2d8a8f\x2dae57\x2d8e94\x2d350a0003f03c.mount: Deactivated successfully. 
Feb 13 19:23:25.545341 containerd[1472]: time="2025-02-13T19:23:25.545303985Z" level=info msg="TearDown network for sandbox \"7e6feab27645ffd61b92d460846c135a9d5977d67d273cbde0a2439d3c194c38\" successfully" Feb 13 19:23:25.545341 containerd[1472]: time="2025-02-13T19:23:25.545332545Z" level=info msg="StopPodSandbox for \"7e6feab27645ffd61b92d460846c135a9d5977d67d273cbde0a2439d3c194c38\" returns successfully" Feb 13 19:23:25.546440 containerd[1472]: time="2025-02-13T19:23:25.546405386Z" level=info msg="StopPodSandbox for \"6016e85c998124b83ac646fb6a3dabbbc1920d9fd029722e7215ebdffb13a5be\"" Feb 13 19:23:25.546522 containerd[1472]: time="2025-02-13T19:23:25.546504866Z" level=info msg="TearDown network for sandbox \"6016e85c998124b83ac646fb6a3dabbbc1920d9fd029722e7215ebdffb13a5be\" successfully" Feb 13 19:23:25.546560 containerd[1472]: time="2025-02-13T19:23:25.546520706Z" level=info msg="StopPodSandbox for \"6016e85c998124b83ac646fb6a3dabbbc1920d9fd029722e7215ebdffb13a5be\" returns successfully" Feb 13 19:23:25.546883 containerd[1472]: time="2025-02-13T19:23:25.546859587Z" level=info msg="StopPodSandbox for \"7c2a32b901b31e27ae1a20a1fe61ade17e69b2e6fa8b5f8f14ee716821af402d\"" Feb 13 19:23:25.546952 containerd[1472]: time="2025-02-13T19:23:25.546939347Z" level=info msg="TearDown network for sandbox \"7c2a32b901b31e27ae1a20a1fe61ade17e69b2e6fa8b5f8f14ee716821af402d\" successfully" Feb 13 19:23:25.546975 containerd[1472]: time="2025-02-13T19:23:25.546952947Z" level=info msg="StopPodSandbox for \"7c2a32b901b31e27ae1a20a1fe61ade17e69b2e6fa8b5f8f14ee716821af402d\" returns successfully" Feb 13 19:23:25.547017 kubelet[2544]: I0213 19:23:25.546991 2544 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc7af871476d637ce9bba215c4499eefedcfe396348ddcced2e24cbaf8c8b503" Feb 13 19:23:25.547720 containerd[1472]: time="2025-02-13T19:23:25.547678628Z" level=info msg="StopPodSandbox for 
\"cc7af871476d637ce9bba215c4499eefedcfe396348ddcced2e24cbaf8c8b503\"" Feb 13 19:23:25.547859 containerd[1472]: time="2025-02-13T19:23:25.547830268Z" level=info msg="Ensure that sandbox cc7af871476d637ce9bba215c4499eefedcfe396348ddcced2e24cbaf8c8b503 in task-service has been cleanup successfully" Feb 13 19:23:25.548436 containerd[1472]: time="2025-02-13T19:23:25.548159669Z" level=info msg="TearDown network for sandbox \"cc7af871476d637ce9bba215c4499eefedcfe396348ddcced2e24cbaf8c8b503\" successfully" Feb 13 19:23:25.548436 containerd[1472]: time="2025-02-13T19:23:25.548181309Z" level=info msg="StopPodSandbox for \"cc7af871476d637ce9bba215c4499eefedcfe396348ddcced2e24cbaf8c8b503\" returns successfully" Feb 13 19:23:25.548741 containerd[1472]: time="2025-02-13T19:23:25.548706950Z" level=info msg="StopPodSandbox for \"d6250d08a5ceb4ef45dd21cda6947799ccd29ddbdf4b402d69923e57de5f0de1\"" Feb 13 19:23:25.548819 containerd[1472]: time="2025-02-13T19:23:25.548799710Z" level=info msg="TearDown network for sandbox \"d6250d08a5ceb4ef45dd21cda6947799ccd29ddbdf4b402d69923e57de5f0de1\" successfully" Feb 13 19:23:25.548819 containerd[1472]: time="2025-02-13T19:23:25.548814030Z" level=info msg="StopPodSandbox for \"d6250d08a5ceb4ef45dd21cda6947799ccd29ddbdf4b402d69923e57de5f0de1\" returns successfully" Feb 13 19:23:25.548878 containerd[1472]: time="2025-02-13T19:23:25.548844790Z" level=info msg="StopPodSandbox for \"9c587b1c1873a4da0b2c59ad781386932a84f435d9277e390c1c647867dcf9c6\"" Feb 13 19:23:25.549692 containerd[1472]: time="2025-02-13T19:23:25.548949350Z" level=info msg="TearDown network for sandbox \"9c587b1c1873a4da0b2c59ad781386932a84f435d9277e390c1c647867dcf9c6\" successfully" Feb 13 19:23:25.549692 containerd[1472]: time="2025-02-13T19:23:25.548979990Z" level=info msg="StopPodSandbox for \"9c587b1c1873a4da0b2c59ad781386932a84f435d9277e390c1c647867dcf9c6\" returns successfully" Feb 13 19:23:25.549780 containerd[1472]: time="2025-02-13T19:23:25.549747511Z" level=info 
msg="StopPodSandbox for \"d89d3553643149b3994db4a0653fdbd12da912d00bb5ab4315973581b6b660b5\"" Feb 13 19:23:25.549907 containerd[1472]: time="2025-02-13T19:23:25.549878831Z" level=info msg="StopPodSandbox for \"dd11beb574d3fbe9971dfab17c87cd30120129847d48f855e440b73909b823f2\"" Feb 13 19:23:25.549937 containerd[1472]: time="2025-02-13T19:23:25.549914631Z" level=info msg="TearDown network for sandbox \"d89d3553643149b3994db4a0653fdbd12da912d00bb5ab4315973581b6b660b5\" successfully" Feb 13 19:23:25.549937 containerd[1472]: time="2025-02-13T19:23:25.549930151Z" level=info msg="StopPodSandbox for \"d89d3553643149b3994db4a0653fdbd12da912d00bb5ab4315973581b6b660b5\" returns successfully" Feb 13 19:23:25.549981 containerd[1472]: time="2025-02-13T19:23:25.549957632Z" level=info msg="TearDown network for sandbox \"dd11beb574d3fbe9971dfab17c87cd30120129847d48f855e440b73909b823f2\" successfully" Feb 13 19:23:25.549981 containerd[1472]: time="2025-02-13T19:23:25.549967832Z" level=info msg="StopPodSandbox for \"dd11beb574d3fbe9971dfab17c87cd30120129847d48f855e440b73909b823f2\" returns successfully" Feb 13 19:23:25.550976 systemd[1]: run-netns-cni\x2df47f6512\x2d3924\x2db9de\x2df3b4\x2d16b22a33e616.mount: Deactivated successfully. 
Feb 13 19:23:25.551143 containerd[1472]: time="2025-02-13T19:23:25.551088473Z" level=info msg="StopPodSandbox for \"096da91385a3c8839c09bc5456755139dacce2271f80a0a43253c557ac7e758d\"" Feb 13 19:23:25.551188 containerd[1472]: time="2025-02-13T19:23:25.551179473Z" level=info msg="TearDown network for sandbox \"096da91385a3c8839c09bc5456755139dacce2271f80a0a43253c557ac7e758d\" successfully" Feb 13 19:23:25.551209 containerd[1472]: time="2025-02-13T19:23:25.551189673Z" level=info msg="StopPodSandbox for \"096da91385a3c8839c09bc5456755139dacce2271f80a0a43253c557ac7e758d\" returns successfully" Feb 13 19:23:25.551505 containerd[1472]: time="2025-02-13T19:23:25.551464914Z" level=info msg="StopPodSandbox for \"7a00c42bb521f7bef7a32da102cdf4d4a156bae2448cfacf8ccc9caa13e70634\"" Feb 13 19:23:25.551563 containerd[1472]: time="2025-02-13T19:23:25.551546874Z" level=info msg="TearDown network for sandbox \"7a00c42bb521f7bef7a32da102cdf4d4a156bae2448cfacf8ccc9caa13e70634\" successfully" Feb 13 19:23:25.551563 containerd[1472]: time="2025-02-13T19:23:25.551557314Z" level=info msg="StopPodSandbox for \"7a00c42bb521f7bef7a32da102cdf4d4a156bae2448cfacf8ccc9caa13e70634\" returns successfully" Feb 13 19:23:25.551669 containerd[1472]: time="2025-02-13T19:23:25.551612714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58dfd6696-g69sl,Uid:bd7b8e08-eba9-4ff4-a84a-26b9405284a6,Namespace:calico-system,Attempt:5,}" Feb 13 19:23:25.552008 kubelet[2544]: E0213 19:23:25.551986 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:23:25.553111 containerd[1472]: time="2025-02-13T19:23:25.553062516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-slhww,Uid:beb760d9-f48b-4afe-876e-eb78778e0f0b,Namespace:kube-system,Attempt:5,}" Feb 13 19:23:25.563854 kubelet[2544]: I0213 19:23:25.563794 2544 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25cadf9b39de722cdb51a441f232474c54a2727e8241363fb7678024b0cc2939" Feb 13 19:23:25.564457 containerd[1472]: time="2025-02-13T19:23:25.564331573Z" level=info msg="StopPodSandbox for \"25cadf9b39de722cdb51a441f232474c54a2727e8241363fb7678024b0cc2939\"" Feb 13 19:23:25.564692 containerd[1472]: time="2025-02-13T19:23:25.564651574Z" level=info msg="Ensure that sandbox 25cadf9b39de722cdb51a441f232474c54a2727e8241363fb7678024b0cc2939 in task-service has been cleanup successfully" Feb 13 19:23:25.564985 containerd[1472]: time="2025-02-13T19:23:25.564953174Z" level=info msg="TearDown network for sandbox \"25cadf9b39de722cdb51a441f232474c54a2727e8241363fb7678024b0cc2939\" successfully" Feb 13 19:23:25.565291 containerd[1472]: time="2025-02-13T19:23:25.565037894Z" level=info msg="StopPodSandbox for \"25cadf9b39de722cdb51a441f232474c54a2727e8241363fb7678024b0cc2939\" returns successfully" Feb 13 19:23:25.565634 containerd[1472]: time="2025-02-13T19:23:25.565587295Z" level=info msg="StopPodSandbox for \"48811930cf72c6624d608457afdcfffb68cadf11dbe5cbc1d4081b85d2fea001\"" Feb 13 19:23:25.566366 containerd[1472]: time="2025-02-13T19:23:25.566342256Z" level=info msg="TearDown network for sandbox \"48811930cf72c6624d608457afdcfffb68cadf11dbe5cbc1d4081b85d2fea001\" successfully" Feb 13 19:23:25.566538 containerd[1472]: time="2025-02-13T19:23:25.566450936Z" level=info msg="StopPodSandbox for \"48811930cf72c6624d608457afdcfffb68cadf11dbe5cbc1d4081b85d2fea001\" returns successfully" Feb 13 19:23:25.566937 containerd[1472]: time="2025-02-13T19:23:25.566823257Z" level=info msg="StopPodSandbox for \"5c69bd3f5517904d9d167931362b4913dc35adc9a2c26a7101f072289741fcfe\"" Feb 13 19:23:25.567242 containerd[1472]: time="2025-02-13T19:23:25.567180497Z" level=info msg="TearDown network for sandbox \"5c69bd3f5517904d9d167931362b4913dc35adc9a2c26a7101f072289741fcfe\" successfully" Feb 13 19:23:25.567363 
containerd[1472]: time="2025-02-13T19:23:25.567332538Z" level=info msg="StopPodSandbox for \"5c69bd3f5517904d9d167931362b4913dc35adc9a2c26a7101f072289741fcfe\" returns successfully" Feb 13 19:23:25.569025 containerd[1472]: time="2025-02-13T19:23:25.568677620Z" level=info msg="StopPodSandbox for \"1a65ecae1d3e89163ada04189f06c243b3b5a17116154cd6cbede404e3524543\"" Feb 13 19:23:25.569025 containerd[1472]: time="2025-02-13T19:23:25.568822100Z" level=info msg="TearDown network for sandbox \"1a65ecae1d3e89163ada04189f06c243b3b5a17116154cd6cbede404e3524543\" successfully" Feb 13 19:23:25.569025 containerd[1472]: time="2025-02-13T19:23:25.568834660Z" level=info msg="StopPodSandbox for \"1a65ecae1d3e89163ada04189f06c243b3b5a17116154cd6cbede404e3524543\" returns successfully" Feb 13 19:23:25.569459 containerd[1472]: time="2025-02-13T19:23:25.569418021Z" level=info msg="StopPodSandbox for \"9238bb01d809702cf71989d83c983b9c52956861a8e39fcabd6dd1ed7ee329dd\"" Feb 13 19:23:25.569459 containerd[1472]: time="2025-02-13T19:23:25.569610181Z" level=info msg="TearDown network for sandbox \"9238bb01d809702cf71989d83c983b9c52956861a8e39fcabd6dd1ed7ee329dd\" successfully" Feb 13 19:23:25.570728 containerd[1472]: time="2025-02-13T19:23:25.569633981Z" level=info msg="StopPodSandbox for \"9238bb01d809702cf71989d83c983b9c52956861a8e39fcabd6dd1ed7ee329dd\" returns successfully" Feb 13 19:23:25.570728 containerd[1472]: time="2025-02-13T19:23:25.570552382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-8shn2,Uid:11ebe699-3307-4e28-ac6a-e555af8a982c,Namespace:kube-system,Attempt:5,}" Feb 13 19:23:25.570782 kubelet[2544]: I0213 19:23:25.570189 2544 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d47414dc28e478d0c0959c2bb8276a7437c645c4dbfafff1a1efab07a3da7b2f" Feb 13 19:23:25.570782 kubelet[2544]: E0213 19:23:25.570234 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:23:25.571556 containerd[1472]: time="2025-02-13T19:23:25.571528864Z" level=info msg="StopPodSandbox for \"d47414dc28e478d0c0959c2bb8276a7437c645c4dbfafff1a1efab07a3da7b2f\"" Feb 13 19:23:25.571710 containerd[1472]: time="2025-02-13T19:23:25.571684824Z" level=info msg="Ensure that sandbox d47414dc28e478d0c0959c2bb8276a7437c645c4dbfafff1a1efab07a3da7b2f in task-service has been cleanup successfully" Feb 13 19:23:25.572292 containerd[1472]: time="2025-02-13T19:23:25.572245585Z" level=info msg="TearDown network for sandbox \"d47414dc28e478d0c0959c2bb8276a7437c645c4dbfafff1a1efab07a3da7b2f\" successfully" Feb 13 19:23:25.572292 containerd[1472]: time="2025-02-13T19:23:25.572265985Z" level=info msg="StopPodSandbox for \"d47414dc28e478d0c0959c2bb8276a7437c645c4dbfafff1a1efab07a3da7b2f\" returns successfully" Feb 13 19:23:25.575133 containerd[1472]: time="2025-02-13T19:23:25.575064909Z" level=info msg="StopPodSandbox for \"cb7be6a4712632dbe090eebc2eeaede27df77149aef06ceca10bc6d335e26b9b\"" Feb 13 19:23:25.575236 containerd[1472]: time="2025-02-13T19:23:25.575188989Z" level=info msg="TearDown network for sandbox \"cb7be6a4712632dbe090eebc2eeaede27df77149aef06ceca10bc6d335e26b9b\" successfully" Feb 13 19:23:25.575236 containerd[1472]: time="2025-02-13T19:23:25.575198869Z" level=info msg="StopPodSandbox for \"cb7be6a4712632dbe090eebc2eeaede27df77149aef06ceca10bc6d335e26b9b\" returns successfully" Feb 13 19:23:25.576208 containerd[1472]: time="2025-02-13T19:23:25.576170031Z" level=info msg="StopPodSandbox for \"ba474e5db3af83c6e13348802c48deb52dcfa25885a7fa5ba47c3f19ac08cda7\"" Feb 13 19:23:25.576274 containerd[1472]: time="2025-02-13T19:23:25.576258591Z" level=info msg="TearDown network for sandbox \"ba474e5db3af83c6e13348802c48deb52dcfa25885a7fa5ba47c3f19ac08cda7\" successfully" Feb 13 19:23:25.576274 containerd[1472]: time="2025-02-13T19:23:25.576271511Z" level=info msg="StopPodSandbox for 
\"ba474e5db3af83c6e13348802c48deb52dcfa25885a7fa5ba47c3f19ac08cda7\" returns successfully" Feb 13 19:23:25.577501 kubelet[2544]: E0213 19:23:25.577474 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:23:25.577832 containerd[1472]: time="2025-02-13T19:23:25.577800113Z" level=info msg="StopPodSandbox for \"b9dc128c66258cebc145fb5927188ffde0a7a2ce1077cde74816d4066ba9c705\"" Feb 13 19:23:25.577934 containerd[1472]: time="2025-02-13T19:23:25.577917073Z" level=info msg="TearDown network for sandbox \"b9dc128c66258cebc145fb5927188ffde0a7a2ce1077cde74816d4066ba9c705\" successfully" Feb 13 19:23:25.577961 containerd[1472]: time="2025-02-13T19:23:25.577933153Z" level=info msg="StopPodSandbox for \"b9dc128c66258cebc145fb5927188ffde0a7a2ce1077cde74816d4066ba9c705\" returns successfully" Feb 13 19:23:25.578574 containerd[1472]: time="2025-02-13T19:23:25.578545794Z" level=info msg="StopPodSandbox for \"50fc7b5a1212a7f5c0f58d01289dd6423491fe5b48f97ad072a542ba7bf3a842\"" Feb 13 19:23:25.578880 containerd[1472]: time="2025-02-13T19:23:25.578820275Z" level=info msg="TearDown network for sandbox \"50fc7b5a1212a7f5c0f58d01289dd6423491fe5b48f97ad072a542ba7bf3a842\" successfully" Feb 13 19:23:25.578917 containerd[1472]: time="2025-02-13T19:23:25.578882755Z" level=info msg="StopPodSandbox for \"50fc7b5a1212a7f5c0f58d01289dd6423491fe5b48f97ad072a542ba7bf3a842\" returns successfully" Feb 13 19:23:25.580025 containerd[1472]: time="2025-02-13T19:23:25.579995317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dtfv4,Uid:278e43f1-bd8c-4a43-8396-436ddaca249b,Namespace:calico-system,Attempt:5,}" Feb 13 19:23:26.102301 systemd-networkd[1387]: cali07a718128a3: Link UP Feb 13 19:23:26.102648 systemd-networkd[1387]: cali07a718128a3: Gained carrier Feb 13 19:23:26.113196 kubelet[2544]: I0213 19:23:26.113114 2544 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-d4xt4" podStartSLOduration=2.812788701 podStartE2EDuration="15.113094586s" podCreationTimestamp="2025-02-13 19:23:11 +0000 UTC" firstStartedPulling="2025-02-13 19:23:12.328373248 +0000 UTC m=+13.167342866" lastFinishedPulling="2025-02-13 19:23:24.628679133 +0000 UTC m=+25.467648751" observedRunningTime="2025-02-13 19:23:25.59536798 +0000 UTC m=+26.434337598" watchObservedRunningTime="2025-02-13 19:23:26.113094586 +0000 UTC m=+26.952064204" Feb 13 19:23:26.114629 containerd[1472]: 2025-02-13 19:23:25.609 [INFO][4515] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:23:26.114629 containerd[1472]: 2025-02-13 19:23:25.720 [INFO][4515] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5b44d967f--hpx7w-eth0 calico-apiserver-5b44d967f- calico-apiserver 90b2170d-a844-4e38-873e-8af01cba6fe0 678 0 2025-02-13 19:23:11 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5b44d967f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5b44d967f-hpx7w eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali07a718128a3 [] []}} ContainerID="67216be933592c194b5f750ad0f7b80de2cd06ec84cefe5b3cce175dfc0a860d" Namespace="calico-apiserver" Pod="calico-apiserver-5b44d967f-hpx7w" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b44d967f--hpx7w-" Feb 13 19:23:26.114629 containerd[1472]: 2025-02-13 19:23:25.720 [INFO][4515] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="67216be933592c194b5f750ad0f7b80de2cd06ec84cefe5b3cce175dfc0a860d" Namespace="calico-apiserver" Pod="calico-apiserver-5b44d967f-hpx7w" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--5b44d967f--hpx7w-eth0" Feb 13 19:23:26.114629 containerd[1472]: 2025-02-13 19:23:25.954 [INFO][4627] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="67216be933592c194b5f750ad0f7b80de2cd06ec84cefe5b3cce175dfc0a860d" HandleID="k8s-pod-network.67216be933592c194b5f750ad0f7b80de2cd06ec84cefe5b3cce175dfc0a860d" Workload="localhost-k8s-calico--apiserver--5b44d967f--hpx7w-eth0" Feb 13 19:23:26.114629 containerd[1472]: 2025-02-13 19:23:25.971 [INFO][4627] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="67216be933592c194b5f750ad0f7b80de2cd06ec84cefe5b3cce175dfc0a860d" HandleID="k8s-pod-network.67216be933592c194b5f750ad0f7b80de2cd06ec84cefe5b3cce175dfc0a860d" Workload="localhost-k8s-calico--apiserver--5b44d967f--hpx7w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000398de0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5b44d967f-hpx7w", "timestamp":"2025-02-13 19:23:25.954357518 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:23:26.114629 containerd[1472]: 2025-02-13 19:23:25.971 [INFO][4627] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:23:26.114629 containerd[1472]: 2025-02-13 19:23:25.972 [INFO][4627] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:23:26.114629 containerd[1472]: 2025-02-13 19:23:25.972 [INFO][4627] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:23:26.114629 containerd[1472]: 2025-02-13 19:23:25.977 [INFO][4627] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.67216be933592c194b5f750ad0f7b80de2cd06ec84cefe5b3cce175dfc0a860d" host="localhost" Feb 13 19:23:26.114629 containerd[1472]: 2025-02-13 19:23:26.069 [INFO][4627] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:23:26.114629 containerd[1472]: 2025-02-13 19:23:26.075 [INFO][4627] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:23:26.114629 containerd[1472]: 2025-02-13 19:23:26.077 [INFO][4627] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:23:26.114629 containerd[1472]: 2025-02-13 19:23:26.079 [INFO][4627] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:23:26.114629 containerd[1472]: 2025-02-13 19:23:26.079 [INFO][4627] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.67216be933592c194b5f750ad0f7b80de2cd06ec84cefe5b3cce175dfc0a860d" host="localhost" Feb 13 19:23:26.114629 containerd[1472]: 2025-02-13 19:23:26.081 [INFO][4627] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.67216be933592c194b5f750ad0f7b80de2cd06ec84cefe5b3cce175dfc0a860d Feb 13 19:23:26.114629 containerd[1472]: 2025-02-13 19:23:26.085 [INFO][4627] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.67216be933592c194b5f750ad0f7b80de2cd06ec84cefe5b3cce175dfc0a860d" host="localhost" Feb 13 19:23:26.114629 containerd[1472]: 2025-02-13 19:23:26.090 [INFO][4627] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.67216be933592c194b5f750ad0f7b80de2cd06ec84cefe5b3cce175dfc0a860d" host="localhost" Feb 13 19:23:26.114629 containerd[1472]: 2025-02-13 19:23:26.090 [INFO][4627] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.67216be933592c194b5f750ad0f7b80de2cd06ec84cefe5b3cce175dfc0a860d" host="localhost" Feb 13 19:23:26.114629 containerd[1472]: 2025-02-13 19:23:26.090 [INFO][4627] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:23:26.114629 containerd[1472]: 2025-02-13 19:23:26.090 [INFO][4627] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="67216be933592c194b5f750ad0f7b80de2cd06ec84cefe5b3cce175dfc0a860d" HandleID="k8s-pod-network.67216be933592c194b5f750ad0f7b80de2cd06ec84cefe5b3cce175dfc0a860d" Workload="localhost-k8s-calico--apiserver--5b44d967f--hpx7w-eth0" Feb 13 19:23:26.115372 containerd[1472]: 2025-02-13 19:23:26.092 [INFO][4515] cni-plugin/k8s.go 386: Populated endpoint ContainerID="67216be933592c194b5f750ad0f7b80de2cd06ec84cefe5b3cce175dfc0a860d" Namespace="calico-apiserver" Pod="calico-apiserver-5b44d967f-hpx7w" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b44d967f--hpx7w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5b44d967f--hpx7w-eth0", GenerateName:"calico-apiserver-5b44d967f-", Namespace:"calico-apiserver", SelfLink:"", UID:"90b2170d-a844-4e38-873e-8af01cba6fe0", ResourceVersion:"678", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 23, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b44d967f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5b44d967f-hpx7w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali07a718128a3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:23:26.115372 containerd[1472]: 2025-02-13 19:23:26.093 [INFO][4515] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="67216be933592c194b5f750ad0f7b80de2cd06ec84cefe5b3cce175dfc0a860d" Namespace="calico-apiserver" Pod="calico-apiserver-5b44d967f-hpx7w" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b44d967f--hpx7w-eth0" Feb 13 19:23:26.115372 containerd[1472]: 2025-02-13 19:23:26.093 [INFO][4515] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali07a718128a3 ContainerID="67216be933592c194b5f750ad0f7b80de2cd06ec84cefe5b3cce175dfc0a860d" Namespace="calico-apiserver" Pod="calico-apiserver-5b44d967f-hpx7w" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b44d967f--hpx7w-eth0" Feb 13 19:23:26.115372 containerd[1472]: 2025-02-13 19:23:26.102 [INFO][4515] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="67216be933592c194b5f750ad0f7b80de2cd06ec84cefe5b3cce175dfc0a860d" Namespace="calico-apiserver" Pod="calico-apiserver-5b44d967f-hpx7w" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b44d967f--hpx7w-eth0" Feb 13 19:23:26.115372 containerd[1472]: 2025-02-13 19:23:26.102 [INFO][4515] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="67216be933592c194b5f750ad0f7b80de2cd06ec84cefe5b3cce175dfc0a860d" Namespace="calico-apiserver" Pod="calico-apiserver-5b44d967f-hpx7w" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b44d967f--hpx7w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5b44d967f--hpx7w-eth0", GenerateName:"calico-apiserver-5b44d967f-", Namespace:"calico-apiserver", SelfLink:"", UID:"90b2170d-a844-4e38-873e-8af01cba6fe0", ResourceVersion:"678", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 23, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b44d967f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"67216be933592c194b5f750ad0f7b80de2cd06ec84cefe5b3cce175dfc0a860d", Pod:"calico-apiserver-5b44d967f-hpx7w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali07a718128a3", MAC:"56:8c:70:d2:cf:f8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:23:26.115372 containerd[1472]: 2025-02-13 19:23:26.113 [INFO][4515] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="67216be933592c194b5f750ad0f7b80de2cd06ec84cefe5b3cce175dfc0a860d" 
Namespace="calico-apiserver" Pod="calico-apiserver-5b44d967f-hpx7w" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b44d967f--hpx7w-eth0" Feb 13 19:23:26.132452 containerd[1472]: time="2025-02-13T19:23:26.132308573Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:23:26.132452 containerd[1472]: time="2025-02-13T19:23:26.132430213Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:23:26.132452 containerd[1472]: time="2025-02-13T19:23:26.132451453Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:23:26.132723 containerd[1472]: time="2025-02-13T19:23:26.132543773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:23:26.153782 systemd[1]: Started cri-containerd-67216be933592c194b5f750ad0f7b80de2cd06ec84cefe5b3cce175dfc0a860d.scope - libcontainer container 67216be933592c194b5f750ad0f7b80de2cd06ec84cefe5b3cce175dfc0a860d. 
Feb 13 19:23:26.164108 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:23:26.186840 containerd[1472]: time="2025-02-13T19:23:26.186591489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b44d967f-hpx7w,Uid:90b2170d-a844-4e38-873e-8af01cba6fe0,Namespace:calico-apiserver,Attempt:5,} returns sandbox id \"67216be933592c194b5f750ad0f7b80de2cd06ec84cefe5b3cce175dfc0a860d\"" Feb 13 19:23:26.189036 containerd[1472]: time="2025-02-13T19:23:26.189002572Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 19:23:26.197755 systemd-networkd[1387]: cali02725cd83ae: Link UP Feb 13 19:23:26.197937 systemd-networkd[1387]: cali02725cd83ae: Gained carrier Feb 13 19:23:26.216961 containerd[1472]: 2025-02-13 19:23:25.696 [INFO][4571] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:23:26.216961 containerd[1472]: 2025-02-13 19:23:25.736 [INFO][4571] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--58dfd6696--g69sl-eth0 calico-kube-controllers-58dfd6696- calico-system bd7b8e08-eba9-4ff4-a84a-26b9405284a6 676 0 2025-02-13 19:23:12 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:58dfd6696 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-58dfd6696-g69sl eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali02725cd83ae [] []}} ContainerID="87324165241ca1501f48955bfaee7993330e3c0e5d42009f89cffbc4c2f211f5" Namespace="calico-system" Pod="calico-kube-controllers-58dfd6696-g69sl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58dfd6696--g69sl-" Feb 13 19:23:26.216961 
containerd[1472]: 2025-02-13 19:23:25.747 [INFO][4571] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="87324165241ca1501f48955bfaee7993330e3c0e5d42009f89cffbc4c2f211f5" Namespace="calico-system" Pod="calico-kube-controllers-58dfd6696-g69sl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58dfd6696--g69sl-eth0" Feb 13 19:23:26.216961 containerd[1472]: 2025-02-13 19:23:25.960 [INFO][4645] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="87324165241ca1501f48955bfaee7993330e3c0e5d42009f89cffbc4c2f211f5" HandleID="k8s-pod-network.87324165241ca1501f48955bfaee7993330e3c0e5d42009f89cffbc4c2f211f5" Workload="localhost-k8s-calico--kube--controllers--58dfd6696--g69sl-eth0" Feb 13 19:23:26.216961 containerd[1472]: 2025-02-13 19:23:26.069 [INFO][4645] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="87324165241ca1501f48955bfaee7993330e3c0e5d42009f89cffbc4c2f211f5" HandleID="k8s-pod-network.87324165241ca1501f48955bfaee7993330e3c0e5d42009f89cffbc4c2f211f5" Workload="localhost-k8s-calico--kube--controllers--58dfd6696--g69sl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003129b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-58dfd6696-g69sl", "timestamp":"2025-02-13 19:23:25.960691488 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:23:26.216961 containerd[1472]: 2025-02-13 19:23:26.069 [INFO][4645] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:23:26.216961 containerd[1472]: 2025-02-13 19:23:26.090 [INFO][4645] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:23:26.216961 containerd[1472]: 2025-02-13 19:23:26.090 [INFO][4645] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:23:26.216961 containerd[1472]: 2025-02-13 19:23:26.094 [INFO][4645] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.87324165241ca1501f48955bfaee7993330e3c0e5d42009f89cffbc4c2f211f5" host="localhost" Feb 13 19:23:26.216961 containerd[1472]: 2025-02-13 19:23:26.170 [INFO][4645] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:23:26.216961 containerd[1472]: 2025-02-13 19:23:26.175 [INFO][4645] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:23:26.216961 containerd[1472]: 2025-02-13 19:23:26.177 [INFO][4645] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:23:26.216961 containerd[1472]: 2025-02-13 19:23:26.179 [INFO][4645] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:23:26.216961 containerd[1472]: 2025-02-13 19:23:26.179 [INFO][4645] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.87324165241ca1501f48955bfaee7993330e3c0e5d42009f89cffbc4c2f211f5" host="localhost" Feb 13 19:23:26.216961 containerd[1472]: 2025-02-13 19:23:26.181 [INFO][4645] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.87324165241ca1501f48955bfaee7993330e3c0e5d42009f89cffbc4c2f211f5 Feb 13 19:23:26.216961 containerd[1472]: 2025-02-13 19:23:26.186 [INFO][4645] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.87324165241ca1501f48955bfaee7993330e3c0e5d42009f89cffbc4c2f211f5" host="localhost" Feb 13 19:23:26.216961 containerd[1472]: 2025-02-13 19:23:26.191 [INFO][4645] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.87324165241ca1501f48955bfaee7993330e3c0e5d42009f89cffbc4c2f211f5" host="localhost" Feb 13 19:23:26.216961 containerd[1472]: 2025-02-13 19:23:26.191 [INFO][4645] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.87324165241ca1501f48955bfaee7993330e3c0e5d42009f89cffbc4c2f211f5" host="localhost" Feb 13 19:23:26.216961 containerd[1472]: 2025-02-13 19:23:26.191 [INFO][4645] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:23:26.216961 containerd[1472]: 2025-02-13 19:23:26.191 [INFO][4645] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="87324165241ca1501f48955bfaee7993330e3c0e5d42009f89cffbc4c2f211f5" HandleID="k8s-pod-network.87324165241ca1501f48955bfaee7993330e3c0e5d42009f89cffbc4c2f211f5" Workload="localhost-k8s-calico--kube--controllers--58dfd6696--g69sl-eth0" Feb 13 19:23:26.217740 containerd[1472]: 2025-02-13 19:23:26.194 [INFO][4571] cni-plugin/k8s.go 386: Populated endpoint ContainerID="87324165241ca1501f48955bfaee7993330e3c0e5d42009f89cffbc4c2f211f5" Namespace="calico-system" Pod="calico-kube-controllers-58dfd6696-g69sl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58dfd6696--g69sl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--58dfd6696--g69sl-eth0", GenerateName:"calico-kube-controllers-58dfd6696-", Namespace:"calico-system", SelfLink:"", UID:"bd7b8e08-eba9-4ff4-a84a-26b9405284a6", ResourceVersion:"676", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 23, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"58dfd6696", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-58dfd6696-g69sl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali02725cd83ae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:23:26.217740 containerd[1472]: 2025-02-13 19:23:26.194 [INFO][4571] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="87324165241ca1501f48955bfaee7993330e3c0e5d42009f89cffbc4c2f211f5" Namespace="calico-system" Pod="calico-kube-controllers-58dfd6696-g69sl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58dfd6696--g69sl-eth0" Feb 13 19:23:26.217740 containerd[1472]: 2025-02-13 19:23:26.194 [INFO][4571] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali02725cd83ae ContainerID="87324165241ca1501f48955bfaee7993330e3c0e5d42009f89cffbc4c2f211f5" Namespace="calico-system" Pod="calico-kube-controllers-58dfd6696-g69sl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58dfd6696--g69sl-eth0" Feb 13 19:23:26.217740 containerd[1472]: 2025-02-13 19:23:26.197 [INFO][4571] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="87324165241ca1501f48955bfaee7993330e3c0e5d42009f89cffbc4c2f211f5" Namespace="calico-system" Pod="calico-kube-controllers-58dfd6696-g69sl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58dfd6696--g69sl-eth0" Feb 13 19:23:26.217740 containerd[1472]: 2025-02-13 19:23:26.198 [INFO][4571] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="87324165241ca1501f48955bfaee7993330e3c0e5d42009f89cffbc4c2f211f5" Namespace="calico-system" Pod="calico-kube-controllers-58dfd6696-g69sl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58dfd6696--g69sl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--58dfd6696--g69sl-eth0", GenerateName:"calico-kube-controllers-58dfd6696-", Namespace:"calico-system", SelfLink:"", UID:"bd7b8e08-eba9-4ff4-a84a-26b9405284a6", ResourceVersion:"676", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 23, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"58dfd6696", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"87324165241ca1501f48955bfaee7993330e3c0e5d42009f89cffbc4c2f211f5", Pod:"calico-kube-controllers-58dfd6696-g69sl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali02725cd83ae", MAC:"e2:fe:71:b0:3e:0c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:23:26.217740 containerd[1472]: 2025-02-13 19:23:26.207 [INFO][4571] cni-plugin/k8s.go 500: Wrote updated 
endpoint to datastore ContainerID="87324165241ca1501f48955bfaee7993330e3c0e5d42009f89cffbc4c2f211f5" Namespace="calico-system" Pod="calico-kube-controllers-58dfd6696-g69sl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58dfd6696--g69sl-eth0" Feb 13 19:23:26.276699 containerd[1472]: time="2025-02-13T19:23:26.276402495Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:23:26.276699 containerd[1472]: time="2025-02-13T19:23:26.276495855Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:23:26.276699 containerd[1472]: time="2025-02-13T19:23:26.276522935Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:23:26.276975 containerd[1472]: time="2025-02-13T19:23:26.276666776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:23:26.303801 systemd[1]: Started cri-containerd-87324165241ca1501f48955bfaee7993330e3c0e5d42009f89cffbc4c2f211f5.scope - libcontainer container 87324165241ca1501f48955bfaee7993330e3c0e5d42009f89cffbc4c2f211f5. 
Feb 13 19:23:26.319014 systemd-networkd[1387]: cali257280f6ac6: Link UP Feb 13 19:23:26.320640 systemd-networkd[1387]: cali257280f6ac6: Gained carrier Feb 13 19:23:26.328669 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:23:26.338185 containerd[1472]: 2025-02-13 19:23:25.626 [INFO][4526] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:23:26.338185 containerd[1472]: 2025-02-13 19:23:25.716 [INFO][4526] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5b44d967f--p6w96-eth0 calico-apiserver-5b44d967f- calico-apiserver 400c37f6-81a9-403c-9b2f-cc1d18ee97aa 677 0 2025-02-13 19:23:11 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5b44d967f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5b44d967f-p6w96 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali257280f6ac6 [] []}} ContainerID="07b87bca9c7343e7e1c5fc6f900ce8903ad820d81883aac688b72f8774ed2ffb" Namespace="calico-apiserver" Pod="calico-apiserver-5b44d967f-p6w96" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b44d967f--p6w96-" Feb 13 19:23:26.338185 containerd[1472]: 2025-02-13 19:23:25.717 [INFO][4526] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="07b87bca9c7343e7e1c5fc6f900ce8903ad820d81883aac688b72f8774ed2ffb" Namespace="calico-apiserver" Pod="calico-apiserver-5b44d967f-p6w96" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b44d967f--p6w96-eth0" Feb 13 19:23:26.338185 containerd[1472]: 2025-02-13 19:23:25.958 [INFO][4624] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="07b87bca9c7343e7e1c5fc6f900ce8903ad820d81883aac688b72f8774ed2ffb" HandleID="k8s-pod-network.07b87bca9c7343e7e1c5fc6f900ce8903ad820d81883aac688b72f8774ed2ffb" Workload="localhost-k8s-calico--apiserver--5b44d967f--p6w96-eth0" Feb 13 19:23:26.338185 containerd[1472]: 2025-02-13 19:23:26.069 [INFO][4624] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="07b87bca9c7343e7e1c5fc6f900ce8903ad820d81883aac688b72f8774ed2ffb" HandleID="k8s-pod-network.07b87bca9c7343e7e1c5fc6f900ce8903ad820d81883aac688b72f8774ed2ffb" Workload="localhost-k8s-calico--apiserver--5b44d967f--p6w96-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000444510), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5b44d967f-p6w96", "timestamp":"2025-02-13 19:23:25.958860125 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:23:26.338185 containerd[1472]: 2025-02-13 19:23:26.069 [INFO][4624] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:23:26.338185 containerd[1472]: 2025-02-13 19:23:26.191 [INFO][4624] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:23:26.338185 containerd[1472]: 2025-02-13 19:23:26.191 [INFO][4624] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:23:26.338185 containerd[1472]: 2025-02-13 19:23:26.194 [INFO][4624] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.07b87bca9c7343e7e1c5fc6f900ce8903ad820d81883aac688b72f8774ed2ffb" host="localhost" Feb 13 19:23:26.338185 containerd[1472]: 2025-02-13 19:23:26.271 [INFO][4624] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:23:26.338185 containerd[1472]: 2025-02-13 19:23:26.279 [INFO][4624] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:23:26.338185 containerd[1472]: 2025-02-13 19:23:26.281 [INFO][4624] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:23:26.338185 containerd[1472]: 2025-02-13 19:23:26.284 [INFO][4624] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:23:26.338185 containerd[1472]: 2025-02-13 19:23:26.284 [INFO][4624] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.07b87bca9c7343e7e1c5fc6f900ce8903ad820d81883aac688b72f8774ed2ffb" host="localhost" Feb 13 19:23:26.338185 containerd[1472]: 2025-02-13 19:23:26.288 [INFO][4624] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.07b87bca9c7343e7e1c5fc6f900ce8903ad820d81883aac688b72f8774ed2ffb Feb 13 19:23:26.338185 containerd[1472]: 2025-02-13 19:23:26.298 [INFO][4624] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.07b87bca9c7343e7e1c5fc6f900ce8903ad820d81883aac688b72f8774ed2ffb" host="localhost" Feb 13 19:23:26.338185 containerd[1472]: 2025-02-13 19:23:26.305 [INFO][4624] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.07b87bca9c7343e7e1c5fc6f900ce8903ad820d81883aac688b72f8774ed2ffb" host="localhost" Feb 13 19:23:26.338185 containerd[1472]: 2025-02-13 19:23:26.305 [INFO][4624] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.07b87bca9c7343e7e1c5fc6f900ce8903ad820d81883aac688b72f8774ed2ffb" host="localhost" Feb 13 19:23:26.338185 containerd[1472]: 2025-02-13 19:23:26.305 [INFO][4624] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:23:26.338185 containerd[1472]: 2025-02-13 19:23:26.305 [INFO][4624] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="07b87bca9c7343e7e1c5fc6f900ce8903ad820d81883aac688b72f8774ed2ffb" HandleID="k8s-pod-network.07b87bca9c7343e7e1c5fc6f900ce8903ad820d81883aac688b72f8774ed2ffb" Workload="localhost-k8s-calico--apiserver--5b44d967f--p6w96-eth0" Feb 13 19:23:26.338790 containerd[1472]: 2025-02-13 19:23:26.314 [INFO][4526] cni-plugin/k8s.go 386: Populated endpoint ContainerID="07b87bca9c7343e7e1c5fc6f900ce8903ad820d81883aac688b72f8774ed2ffb" Namespace="calico-apiserver" Pod="calico-apiserver-5b44d967f-p6w96" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b44d967f--p6w96-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5b44d967f--p6w96-eth0", GenerateName:"calico-apiserver-5b44d967f-", Namespace:"calico-apiserver", SelfLink:"", UID:"400c37f6-81a9-403c-9b2f-cc1d18ee97aa", ResourceVersion:"677", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 23, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b44d967f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5b44d967f-p6w96", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali257280f6ac6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:23:26.338790 containerd[1472]: 2025-02-13 19:23:26.314 [INFO][4526] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="07b87bca9c7343e7e1c5fc6f900ce8903ad820d81883aac688b72f8774ed2ffb" Namespace="calico-apiserver" Pod="calico-apiserver-5b44d967f-p6w96" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b44d967f--p6w96-eth0" Feb 13 19:23:26.338790 containerd[1472]: 2025-02-13 19:23:26.314 [INFO][4526] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali257280f6ac6 ContainerID="07b87bca9c7343e7e1c5fc6f900ce8903ad820d81883aac688b72f8774ed2ffb" Namespace="calico-apiserver" Pod="calico-apiserver-5b44d967f-p6w96" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b44d967f--p6w96-eth0" Feb 13 19:23:26.338790 containerd[1472]: 2025-02-13 19:23:26.318 [INFO][4526] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="07b87bca9c7343e7e1c5fc6f900ce8903ad820d81883aac688b72f8774ed2ffb" Namespace="calico-apiserver" Pod="calico-apiserver-5b44d967f-p6w96" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b44d967f--p6w96-eth0" Feb 13 19:23:26.338790 containerd[1472]: 2025-02-13 19:23:26.319 [INFO][4526] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="07b87bca9c7343e7e1c5fc6f900ce8903ad820d81883aac688b72f8774ed2ffb" Namespace="calico-apiserver" Pod="calico-apiserver-5b44d967f-p6w96" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b44d967f--p6w96-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5b44d967f--p6w96-eth0", GenerateName:"calico-apiserver-5b44d967f-", Namespace:"calico-apiserver", SelfLink:"", UID:"400c37f6-81a9-403c-9b2f-cc1d18ee97aa", ResourceVersion:"677", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 23, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b44d967f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"07b87bca9c7343e7e1c5fc6f900ce8903ad820d81883aac688b72f8774ed2ffb", Pod:"calico-apiserver-5b44d967f-p6w96", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali257280f6ac6", MAC:"56:77:c1:38:6e:a9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:23:26.338790 containerd[1472]: 2025-02-13 19:23:26.333 [INFO][4526] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="07b87bca9c7343e7e1c5fc6f900ce8903ad820d81883aac688b72f8774ed2ffb" 
Namespace="calico-apiserver" Pod="calico-apiserver-5b44d967f-p6w96" WorkloadEndpoint="localhost-k8s-calico--apiserver--5b44d967f--p6w96-eth0" Feb 13 19:23:26.364713 containerd[1472]: time="2025-02-13T19:23:26.364192379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58dfd6696-g69sl,Uid:bd7b8e08-eba9-4ff4-a84a-26b9405284a6,Namespace:calico-system,Attempt:5,} returns sandbox id \"87324165241ca1501f48955bfaee7993330e3c0e5d42009f89cffbc4c2f211f5\"" Feb 13 19:23:26.409608 containerd[1472]: time="2025-02-13T19:23:26.408966682Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:23:26.409608 containerd[1472]: time="2025-02-13T19:23:26.409497082Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:23:26.409608 containerd[1472]: time="2025-02-13T19:23:26.409510042Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:23:26.409832 containerd[1472]: time="2025-02-13T19:23:26.409687323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:23:26.431502 systemd[1]: Started cri-containerd-07b87bca9c7343e7e1c5fc6f900ce8903ad820d81883aac688b72f8774ed2ffb.scope - libcontainer container 07b87bca9c7343e7e1c5fc6f900ce8903ad820d81883aac688b72f8774ed2ffb. 
Feb 13 19:23:26.431542 systemd-networkd[1387]: cali87c386767e3: Link UP Feb 13 19:23:26.434067 systemd-networkd[1387]: cali87c386767e3: Gained carrier Feb 13 19:23:26.445400 containerd[1472]: 2025-02-13 19:23:25.676 [INFO][4569] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:23:26.445400 containerd[1472]: 2025-02-13 19:23:25.717 [INFO][4569] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--8shn2-eth0 coredns-6f6b679f8f- kube-system 11ebe699-3307-4e28-ac6a-e555af8a982c 671 0 2025-02-13 19:23:04 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-8shn2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali87c386767e3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="6037a094e620c7c4a206bbbf32343d1efab12b0be7c8b641d6b732897fa57ee4" Namespace="kube-system" Pod="coredns-6f6b679f8f-8shn2" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--8shn2-" Feb 13 19:23:26.445400 containerd[1472]: 2025-02-13 19:23:25.717 [INFO][4569] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6037a094e620c7c4a206bbbf32343d1efab12b0be7c8b641d6b732897fa57ee4" Namespace="kube-system" Pod="coredns-6f6b679f8f-8shn2" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--8shn2-eth0" Feb 13 19:23:26.445400 containerd[1472]: 2025-02-13 19:23:25.970 [INFO][4625] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6037a094e620c7c4a206bbbf32343d1efab12b0be7c8b641d6b732897fa57ee4" HandleID="k8s-pod-network.6037a094e620c7c4a206bbbf32343d1efab12b0be7c8b641d6b732897fa57ee4" Workload="localhost-k8s-coredns--6f6b679f8f--8shn2-eth0" Feb 13 19:23:26.445400 containerd[1472]: 2025-02-13 19:23:26.071 [INFO][4625] ipam/ipam_plugin.go 
265: Auto assigning IP ContainerID="6037a094e620c7c4a206bbbf32343d1efab12b0be7c8b641d6b732897fa57ee4" HandleID="k8s-pod-network.6037a094e620c7c4a206bbbf32343d1efab12b0be7c8b641d6b732897fa57ee4" Workload="localhost-k8s-coredns--6f6b679f8f--8shn2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000393bb0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-8shn2", "timestamp":"2025-02-13 19:23:25.970877663 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:23:26.445400 containerd[1472]: 2025-02-13 19:23:26.071 [INFO][4625] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:23:26.445400 containerd[1472]: 2025-02-13 19:23:26.305 [INFO][4625] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:23:26.445400 containerd[1472]: 2025-02-13 19:23:26.305 [INFO][4625] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:23:26.445400 containerd[1472]: 2025-02-13 19:23:26.308 [INFO][4625] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6037a094e620c7c4a206bbbf32343d1efab12b0be7c8b641d6b732897fa57ee4" host="localhost" Feb 13 19:23:26.445400 containerd[1472]: 2025-02-13 19:23:26.371 [INFO][4625] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:23:26.445400 containerd[1472]: 2025-02-13 19:23:26.384 [INFO][4625] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:23:26.445400 containerd[1472]: 2025-02-13 19:23:26.388 [INFO][4625] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:23:26.445400 containerd[1472]: 2025-02-13 19:23:26.391 [INFO][4625] ipam/ipam.go 232: Affinity is confirmed and block has been loaded 
cidr=192.168.88.128/26 host="localhost" Feb 13 19:23:26.445400 containerd[1472]: 2025-02-13 19:23:26.392 [INFO][4625] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6037a094e620c7c4a206bbbf32343d1efab12b0be7c8b641d6b732897fa57ee4" host="localhost" Feb 13 19:23:26.445400 containerd[1472]: 2025-02-13 19:23:26.395 [INFO][4625] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6037a094e620c7c4a206bbbf32343d1efab12b0be7c8b641d6b732897fa57ee4 Feb 13 19:23:26.445400 containerd[1472]: 2025-02-13 19:23:26.405 [INFO][4625] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6037a094e620c7c4a206bbbf32343d1efab12b0be7c8b641d6b732897fa57ee4" host="localhost" Feb 13 19:23:26.445400 containerd[1472]: 2025-02-13 19:23:26.414 [INFO][4625] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.6037a094e620c7c4a206bbbf32343d1efab12b0be7c8b641d6b732897fa57ee4" host="localhost" Feb 13 19:23:26.445400 containerd[1472]: 2025-02-13 19:23:26.414 [INFO][4625] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.6037a094e620c7c4a206bbbf32343d1efab12b0be7c8b641d6b732897fa57ee4" host="localhost" Feb 13 19:23:26.445400 containerd[1472]: 2025-02-13 19:23:26.414 [INFO][4625] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 19:23:26.445400 containerd[1472]: 2025-02-13 19:23:26.414 [INFO][4625] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="6037a094e620c7c4a206bbbf32343d1efab12b0be7c8b641d6b732897fa57ee4" HandleID="k8s-pod-network.6037a094e620c7c4a206bbbf32343d1efab12b0be7c8b641d6b732897fa57ee4" Workload="localhost-k8s-coredns--6f6b679f8f--8shn2-eth0" Feb 13 19:23:26.446572 containerd[1472]: 2025-02-13 19:23:26.419 [INFO][4569] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6037a094e620c7c4a206bbbf32343d1efab12b0be7c8b641d6b732897fa57ee4" Namespace="kube-system" Pod="coredns-6f6b679f8f-8shn2" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--8shn2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--8shn2-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"11ebe699-3307-4e28-ac6a-e555af8a982c", ResourceVersion:"671", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 23, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-8shn2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali87c386767e3", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:23:26.446572 containerd[1472]: 2025-02-13 19:23:26.419 [INFO][4569] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="6037a094e620c7c4a206bbbf32343d1efab12b0be7c8b641d6b732897fa57ee4" Namespace="kube-system" Pod="coredns-6f6b679f8f-8shn2" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--8shn2-eth0" Feb 13 19:23:26.446572 containerd[1472]: 2025-02-13 19:23:26.419 [INFO][4569] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali87c386767e3 ContainerID="6037a094e620c7c4a206bbbf32343d1efab12b0be7c8b641d6b732897fa57ee4" Namespace="kube-system" Pod="coredns-6f6b679f8f-8shn2" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--8shn2-eth0" Feb 13 19:23:26.446572 containerd[1472]: 2025-02-13 19:23:26.434 [INFO][4569] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6037a094e620c7c4a206bbbf32343d1efab12b0be7c8b641d6b732897fa57ee4" Namespace="kube-system" Pod="coredns-6f6b679f8f-8shn2" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--8shn2-eth0" Feb 13 19:23:26.446572 containerd[1472]: 2025-02-13 19:23:26.434 [INFO][4569] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6037a094e620c7c4a206bbbf32343d1efab12b0be7c8b641d6b732897fa57ee4" Namespace="kube-system" Pod="coredns-6f6b679f8f-8shn2" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--8shn2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--8shn2-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"11ebe699-3307-4e28-ac6a-e555af8a982c", ResourceVersion:"671", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 23, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6037a094e620c7c4a206bbbf32343d1efab12b0be7c8b641d6b732897fa57ee4", Pod:"coredns-6f6b679f8f-8shn2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali87c386767e3", MAC:"c6:ad:08:9d:bb:34", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:23:26.446572 containerd[1472]: 2025-02-13 19:23:26.442 [INFO][4569] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6037a094e620c7c4a206bbbf32343d1efab12b0be7c8b641d6b732897fa57ee4" Namespace="kube-system" 
Pod="coredns-6f6b679f8f-8shn2" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--8shn2-eth0" Feb 13 19:23:26.462198 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:23:26.477383 containerd[1472]: time="2025-02-13T19:23:26.476864537Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:23:26.477383 containerd[1472]: time="2025-02-13T19:23:26.476922977Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:23:26.477383 containerd[1472]: time="2025-02-13T19:23:26.476942657Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:23:26.491222 containerd[1472]: time="2025-02-13T19:23:26.491156397Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:23:26.497529 containerd[1472]: time="2025-02-13T19:23:26.497478606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b44d967f-p6w96,Uid:400c37f6-81a9-403c-9b2f-cc1d18ee97aa,Namespace:calico-apiserver,Attempt:5,} returns sandbox id \"07b87bca9c7343e7e1c5fc6f900ce8903ad820d81883aac688b72f8774ed2ffb\"" Feb 13 19:23:26.504647 systemd[1]: run-netns-cni\x2d5472d663\x2dabc6\x2d56fd\x2dd5ec\x2de59f80f0c997.mount: Deactivated successfully. Feb 13 19:23:26.505220 systemd[1]: run-netns-cni\x2d6e87aff6\x2d4729\x2d1f47\x2d9226\x2df7199b4d02c3.mount: Deactivated successfully. Feb 13 19:23:26.540785 systemd[1]: Started cri-containerd-6037a094e620c7c4a206bbbf32343d1efab12b0be7c8b641d6b732897fa57ee4.scope - libcontainer container 6037a094e620c7c4a206bbbf32343d1efab12b0be7c8b641d6b732897fa57ee4. 
Feb 13 19:23:26.541049 systemd-networkd[1387]: calic392f15dab5: Link UP Feb 13 19:23:26.544659 systemd-networkd[1387]: calic392f15dab5: Gained carrier Feb 13 19:23:26.560087 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:23:26.562107 containerd[1472]: 2025-02-13 19:23:25.681 [INFO][4540] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:23:26.562107 containerd[1472]: 2025-02-13 19:23:25.722 [INFO][4540] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--dtfv4-eth0 csi-node-driver- calico-system 278e43f1-bd8c-4a43-8396-436ddaca249b 594 0 2025-02-13 19:23:12 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-dtfv4 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calic392f15dab5 [] []}} ContainerID="550ac9b64daf7c6adf00e615d2a9c057b2f93822967c9f57908a49bc6251ab50" Namespace="calico-system" Pod="csi-node-driver-dtfv4" WorkloadEndpoint="localhost-k8s-csi--node--driver--dtfv4-" Feb 13 19:23:26.562107 containerd[1472]: 2025-02-13 19:23:25.722 [INFO][4540] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="550ac9b64daf7c6adf00e615d2a9c057b2f93822967c9f57908a49bc6251ab50" Namespace="calico-system" Pod="csi-node-driver-dtfv4" WorkloadEndpoint="localhost-k8s-csi--node--driver--dtfv4-eth0" Feb 13 19:23:26.562107 containerd[1472]: 2025-02-13 19:23:25.975 [INFO][4626] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="550ac9b64daf7c6adf00e615d2a9c057b2f93822967c9f57908a49bc6251ab50" 
HandleID="k8s-pod-network.550ac9b64daf7c6adf00e615d2a9c057b2f93822967c9f57908a49bc6251ab50" Workload="localhost-k8s-csi--node--driver--dtfv4-eth0" Feb 13 19:23:26.562107 containerd[1472]: 2025-02-13 19:23:26.071 [INFO][4626] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="550ac9b64daf7c6adf00e615d2a9c057b2f93822967c9f57908a49bc6251ab50" HandleID="k8s-pod-network.550ac9b64daf7c6adf00e615d2a9c057b2f93822967c9f57908a49bc6251ab50" Workload="localhost-k8s-csi--node--driver--dtfv4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fdb40), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-dtfv4", "timestamp":"2025-02-13 19:23:25.975296269 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:23:26.562107 containerd[1472]: 2025-02-13 19:23:26.071 [INFO][4626] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:23:26.562107 containerd[1472]: 2025-02-13 19:23:26.414 [INFO][4626] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:23:26.562107 containerd[1472]: 2025-02-13 19:23:26.414 [INFO][4626] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:23:26.562107 containerd[1472]: 2025-02-13 19:23:26.419 [INFO][4626] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.550ac9b64daf7c6adf00e615d2a9c057b2f93822967c9f57908a49bc6251ab50" host="localhost" Feb 13 19:23:26.562107 containerd[1472]: 2025-02-13 19:23:26.472 [INFO][4626] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:23:26.562107 containerd[1472]: 2025-02-13 19:23:26.489 [INFO][4626] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:23:26.562107 containerd[1472]: 2025-02-13 19:23:26.491 [INFO][4626] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:23:26.562107 containerd[1472]: 2025-02-13 19:23:26.500 [INFO][4626] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:23:26.562107 containerd[1472]: 2025-02-13 19:23:26.501 [INFO][4626] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.550ac9b64daf7c6adf00e615d2a9c057b2f93822967c9f57908a49bc6251ab50" host="localhost" Feb 13 19:23:26.562107 containerd[1472]: 2025-02-13 19:23:26.503 [INFO][4626] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.550ac9b64daf7c6adf00e615d2a9c057b2f93822967c9f57908a49bc6251ab50 Feb 13 19:23:26.562107 containerd[1472]: 2025-02-13 19:23:26.519 [INFO][4626] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.550ac9b64daf7c6adf00e615d2a9c057b2f93822967c9f57908a49bc6251ab50" host="localhost" Feb 13 19:23:26.562107 containerd[1472]: 2025-02-13 19:23:26.527 [INFO][4626] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.550ac9b64daf7c6adf00e615d2a9c057b2f93822967c9f57908a49bc6251ab50" host="localhost" Feb 13 19:23:26.562107 containerd[1472]: 2025-02-13 19:23:26.527 [INFO][4626] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.550ac9b64daf7c6adf00e615d2a9c057b2f93822967c9f57908a49bc6251ab50" host="localhost" Feb 13 19:23:26.562107 containerd[1472]: 2025-02-13 19:23:26.527 [INFO][4626] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:23:26.562107 containerd[1472]: 2025-02-13 19:23:26.527 [INFO][4626] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="550ac9b64daf7c6adf00e615d2a9c057b2f93822967c9f57908a49bc6251ab50" HandleID="k8s-pod-network.550ac9b64daf7c6adf00e615d2a9c057b2f93822967c9f57908a49bc6251ab50" Workload="localhost-k8s-csi--node--driver--dtfv4-eth0" Feb 13 19:23:26.562713 containerd[1472]: 2025-02-13 19:23:26.533 [INFO][4540] cni-plugin/k8s.go 386: Populated endpoint ContainerID="550ac9b64daf7c6adf00e615d2a9c057b2f93822967c9f57908a49bc6251ab50" Namespace="calico-system" Pod="csi-node-driver-dtfv4" WorkloadEndpoint="localhost-k8s-csi--node--driver--dtfv4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--dtfv4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"278e43f1-bd8c-4a43-8396-436ddaca249b", ResourceVersion:"594", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 23, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-dtfv4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic392f15dab5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:23:26.562713 containerd[1472]: 2025-02-13 19:23:26.533 [INFO][4540] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="550ac9b64daf7c6adf00e615d2a9c057b2f93822967c9f57908a49bc6251ab50" Namespace="calico-system" Pod="csi-node-driver-dtfv4" WorkloadEndpoint="localhost-k8s-csi--node--driver--dtfv4-eth0" Feb 13 19:23:26.562713 containerd[1472]: 2025-02-13 19:23:26.533 [INFO][4540] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic392f15dab5 ContainerID="550ac9b64daf7c6adf00e615d2a9c057b2f93822967c9f57908a49bc6251ab50" Namespace="calico-system" Pod="csi-node-driver-dtfv4" WorkloadEndpoint="localhost-k8s-csi--node--driver--dtfv4-eth0" Feb 13 19:23:26.562713 containerd[1472]: 2025-02-13 19:23:26.540 [INFO][4540] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="550ac9b64daf7c6adf00e615d2a9c057b2f93822967c9f57908a49bc6251ab50" Namespace="calico-system" Pod="csi-node-driver-dtfv4" WorkloadEndpoint="localhost-k8s-csi--node--driver--dtfv4-eth0" Feb 13 19:23:26.562713 containerd[1472]: 2025-02-13 19:23:26.543 [INFO][4540] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="550ac9b64daf7c6adf00e615d2a9c057b2f93822967c9f57908a49bc6251ab50" Namespace="calico-system" 
Pod="csi-node-driver-dtfv4" WorkloadEndpoint="localhost-k8s-csi--node--driver--dtfv4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--dtfv4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"278e43f1-bd8c-4a43-8396-436ddaca249b", ResourceVersion:"594", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 23, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"550ac9b64daf7c6adf00e615d2a9c057b2f93822967c9f57908a49bc6251ab50", Pod:"csi-node-driver-dtfv4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic392f15dab5", MAC:"fe:51:ec:b0:fd:cf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:23:26.562713 containerd[1472]: 2025-02-13 19:23:26.559 [INFO][4540] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="550ac9b64daf7c6adf00e615d2a9c057b2f93822967c9f57908a49bc6251ab50" Namespace="calico-system" Pod="csi-node-driver-dtfv4" WorkloadEndpoint="localhost-k8s-csi--node--driver--dtfv4-eth0" Feb 13 19:23:26.568696 kernel: 
bpftool[5008]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 19:23:26.586404 containerd[1472]: time="2025-02-13T19:23:26.586363411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-8shn2,Uid:11ebe699-3307-4e28-ac6a-e555af8a982c,Namespace:kube-system,Attempt:5,} returns sandbox id \"6037a094e620c7c4a206bbbf32343d1efab12b0be7c8b641d6b732897fa57ee4\"" Feb 13 19:23:26.587968 kubelet[2544]: E0213 19:23:26.587824 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:23:26.599498 containerd[1472]: time="2025-02-13T19:23:26.596027065Z" level=info msg="CreateContainer within sandbox \"6037a094e620c7c4a206bbbf32343d1efab12b0be7c8b641d6b732897fa57ee4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:23:26.619692 containerd[1472]: time="2025-02-13T19:23:26.591479178Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:23:26.619692 containerd[1472]: time="2025-02-13T19:23:26.619129657Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:23:26.619692 containerd[1472]: time="2025-02-13T19:23:26.619146097Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:23:26.619692 containerd[1472]: time="2025-02-13T19:23:26.619243297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:23:26.629987 kubelet[2544]: E0213 19:23:26.629771 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:23:26.650778 systemd[1]: Started cri-containerd-550ac9b64daf7c6adf00e615d2a9c057b2f93822967c9f57908a49bc6251ab50.scope - libcontainer container 550ac9b64daf7c6adf00e615d2a9c057b2f93822967c9f57908a49bc6251ab50. Feb 13 19:23:26.650780 systemd-networkd[1387]: cali2c54670ce48: Link UP Feb 13 19:23:26.653917 systemd-networkd[1387]: cali2c54670ce48: Gained carrier Feb 13 19:23:26.679779 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:23:26.685742 containerd[1472]: 2025-02-13 19:23:25.667 [INFO][4539] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:23:26.685742 containerd[1472]: 2025-02-13 19:23:25.716 [INFO][4539] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--slhww-eth0 coredns-6f6b679f8f- kube-system beb760d9-f48b-4afe-876e-eb78778e0f0b 675 0 2025-02-13 19:23:04 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-slhww eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2c54670ce48 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="b6d67fc8e1aba37ed5eb082b2ad15b10b1fb0098013de2089ee3d74ba8c14082" Namespace="kube-system" Pod="coredns-6f6b679f8f-slhww" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--slhww-" Feb 13 19:23:26.685742 containerd[1472]: 2025-02-13 19:23:25.717 [INFO][4539] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="b6d67fc8e1aba37ed5eb082b2ad15b10b1fb0098013de2089ee3d74ba8c14082" Namespace="kube-system" Pod="coredns-6f6b679f8f-slhww" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--slhww-eth0" Feb 13 19:23:26.685742 containerd[1472]: 2025-02-13 19:23:25.976 [INFO][4623] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b6d67fc8e1aba37ed5eb082b2ad15b10b1fb0098013de2089ee3d74ba8c14082" HandleID="k8s-pod-network.b6d67fc8e1aba37ed5eb082b2ad15b10b1fb0098013de2089ee3d74ba8c14082" Workload="localhost-k8s-coredns--6f6b679f8f--slhww-eth0" Feb 13 19:23:26.685742 containerd[1472]: 2025-02-13 19:23:26.072 [INFO][4623] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b6d67fc8e1aba37ed5eb082b2ad15b10b1fb0098013de2089ee3d74ba8c14082" HandleID="k8s-pod-network.b6d67fc8e1aba37ed5eb082b2ad15b10b1fb0098013de2089ee3d74ba8c14082" Workload="localhost-k8s-coredns--6f6b679f8f--slhww-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003591a0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-slhww", "timestamp":"2025-02-13 19:23:25.976586591 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:23:26.685742 containerd[1472]: 2025-02-13 19:23:26.072 [INFO][4623] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:23:26.685742 containerd[1472]: 2025-02-13 19:23:26.527 [INFO][4623] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:23:26.685742 containerd[1472]: 2025-02-13 19:23:26.527 [INFO][4623] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:23:26.685742 containerd[1472]: 2025-02-13 19:23:26.529 [INFO][4623] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b6d67fc8e1aba37ed5eb082b2ad15b10b1fb0098013de2089ee3d74ba8c14082" host="localhost" Feb 13 19:23:26.685742 containerd[1472]: 2025-02-13 19:23:26.574 [INFO][4623] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:23:26.685742 containerd[1472]: 2025-02-13 19:23:26.592 [INFO][4623] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:23:26.685742 containerd[1472]: 2025-02-13 19:23:26.600 [INFO][4623] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:23:26.685742 containerd[1472]: 2025-02-13 19:23:26.608 [INFO][4623] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:23:26.685742 containerd[1472]: 2025-02-13 19:23:26.608 [INFO][4623] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b6d67fc8e1aba37ed5eb082b2ad15b10b1fb0098013de2089ee3d74ba8c14082" host="localhost" Feb 13 19:23:26.685742 containerd[1472]: 2025-02-13 19:23:26.617 [INFO][4623] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b6d67fc8e1aba37ed5eb082b2ad15b10b1fb0098013de2089ee3d74ba8c14082 Feb 13 19:23:26.685742 containerd[1472]: 2025-02-13 19:23:26.630 [INFO][4623] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b6d67fc8e1aba37ed5eb082b2ad15b10b1fb0098013de2089ee3d74ba8c14082" host="localhost" Feb 13 19:23:26.685742 containerd[1472]: 2025-02-13 19:23:26.642 [INFO][4623] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.b6d67fc8e1aba37ed5eb082b2ad15b10b1fb0098013de2089ee3d74ba8c14082" host="localhost" Feb 13 19:23:26.685742 containerd[1472]: 2025-02-13 19:23:26.642 [INFO][4623] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.b6d67fc8e1aba37ed5eb082b2ad15b10b1fb0098013de2089ee3d74ba8c14082" host="localhost" Feb 13 19:23:26.685742 containerd[1472]: 2025-02-13 19:23:26.642 [INFO][4623] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:23:26.685742 containerd[1472]: 2025-02-13 19:23:26.642 [INFO][4623] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="b6d67fc8e1aba37ed5eb082b2ad15b10b1fb0098013de2089ee3d74ba8c14082" HandleID="k8s-pod-network.b6d67fc8e1aba37ed5eb082b2ad15b10b1fb0098013de2089ee3d74ba8c14082" Workload="localhost-k8s-coredns--6f6b679f8f--slhww-eth0" Feb 13 19:23:26.686932 containerd[1472]: 2025-02-13 19:23:26.646 [INFO][4539] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b6d67fc8e1aba37ed5eb082b2ad15b10b1fb0098013de2089ee3d74ba8c14082" Namespace="kube-system" Pod="coredns-6f6b679f8f-slhww" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--slhww-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--slhww-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"beb760d9-f48b-4afe-876e-eb78778e0f0b", ResourceVersion:"675", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 23, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-slhww", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2c54670ce48", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:23:26.686932 containerd[1472]: 2025-02-13 19:23:26.646 [INFO][4539] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="b6d67fc8e1aba37ed5eb082b2ad15b10b1fb0098013de2089ee3d74ba8c14082" Namespace="kube-system" Pod="coredns-6f6b679f8f-slhww" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--slhww-eth0" Feb 13 19:23:26.686932 containerd[1472]: 2025-02-13 19:23:26.646 [INFO][4539] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2c54670ce48 ContainerID="b6d67fc8e1aba37ed5eb082b2ad15b10b1fb0098013de2089ee3d74ba8c14082" Namespace="kube-system" Pod="coredns-6f6b679f8f-slhww" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--slhww-eth0" Feb 13 19:23:26.686932 containerd[1472]: 2025-02-13 19:23:26.660 [INFO][4539] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b6d67fc8e1aba37ed5eb082b2ad15b10b1fb0098013de2089ee3d74ba8c14082" Namespace="kube-system" Pod="coredns-6f6b679f8f-slhww" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--slhww-eth0" Feb 13 
19:23:26.686932 containerd[1472]: 2025-02-13 19:23:26.660 [INFO][4539] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b6d67fc8e1aba37ed5eb082b2ad15b10b1fb0098013de2089ee3d74ba8c14082" Namespace="kube-system" Pod="coredns-6f6b679f8f-slhww" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--slhww-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--slhww-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"beb760d9-f48b-4afe-876e-eb78778e0f0b", ResourceVersion:"675", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 23, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b6d67fc8e1aba37ed5eb082b2ad15b10b1fb0098013de2089ee3d74ba8c14082", Pod:"coredns-6f6b679f8f-slhww", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2c54670ce48", MAC:"ee:a2:0e:a4:fe:7d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:23:26.686932 containerd[1472]: 2025-02-13 19:23:26.683 [INFO][4539] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b6d67fc8e1aba37ed5eb082b2ad15b10b1fb0098013de2089ee3d74ba8c14082" Namespace="kube-system" Pod="coredns-6f6b679f8f-slhww" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--slhww-eth0" Feb 13 19:23:26.687830 containerd[1472]: time="2025-02-13T19:23:26.687411753Z" level=info msg="CreateContainer within sandbox \"6037a094e620c7c4a206bbbf32343d1efab12b0be7c8b641d6b732897fa57ee4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cebf099d5c6437c10411c8782110642561c59aa4be54b56a22b97a01935a8e76\"" Feb 13 19:23:26.690304 containerd[1472]: time="2025-02-13T19:23:26.689413916Z" level=info msg="StartContainer for \"cebf099d5c6437c10411c8782110642561c59aa4be54b56a22b97a01935a8e76\"" Feb 13 19:23:26.725439 containerd[1472]: time="2025-02-13T19:23:26.724860646Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:23:26.725739 containerd[1472]: time="2025-02-13T19:23:26.725498167Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:23:26.725886 containerd[1472]: time="2025-02-13T19:23:26.725644367Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:23:26.727780 systemd[1]: Started cri-containerd-cebf099d5c6437c10411c8782110642561c59aa4be54b56a22b97a01935a8e76.scope - libcontainer container cebf099d5c6437c10411c8782110642561c59aa4be54b56a22b97a01935a8e76. 
Feb 13 19:23:26.728550 containerd[1472]: time="2025-02-13T19:23:26.728385451Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:23:26.729083 containerd[1472]: time="2025-02-13T19:23:26.728972212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dtfv4,Uid:278e43f1-bd8c-4a43-8396-436ddaca249b,Namespace:calico-system,Attempt:5,} returns sandbox id \"550ac9b64daf7c6adf00e615d2a9c057b2f93822967c9f57908a49bc6251ab50\"" Feb 13 19:23:26.754811 systemd[1]: Started cri-containerd-b6d67fc8e1aba37ed5eb082b2ad15b10b1fb0098013de2089ee3d74ba8c14082.scope - libcontainer container b6d67fc8e1aba37ed5eb082b2ad15b10b1fb0098013de2089ee3d74ba8c14082. Feb 13 19:23:26.776173 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:23:26.789049 containerd[1472]: time="2025-02-13T19:23:26.788140775Z" level=info msg="StartContainer for \"cebf099d5c6437c10411c8782110642561c59aa4be54b56a22b97a01935a8e76\" returns successfully" Feb 13 19:23:26.802411 containerd[1472]: time="2025-02-13T19:23:26.802066394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-slhww,Uid:beb760d9-f48b-4afe-876e-eb78778e0f0b,Namespace:kube-system,Attempt:5,} returns sandbox id \"b6d67fc8e1aba37ed5eb082b2ad15b10b1fb0098013de2089ee3d74ba8c14082\"" Feb 13 19:23:26.803238 kubelet[2544]: E0213 19:23:26.803172 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:23:26.809600 containerd[1472]: time="2025-02-13T19:23:26.807494282Z" level=info msg="CreateContainer within sandbox \"b6d67fc8e1aba37ed5eb082b2ad15b10b1fb0098013de2089ee3d74ba8c14082\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:23:26.826215 systemd-networkd[1387]: vxlan.calico: Link UP Feb 13 
19:23:26.826224 systemd-networkd[1387]: vxlan.calico: Gained carrier Feb 13 19:23:26.842453 containerd[1472]: time="2025-02-13T19:23:26.842377971Z" level=info msg="CreateContainer within sandbox \"b6d67fc8e1aba37ed5eb082b2ad15b10b1fb0098013de2089ee3d74ba8c14082\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"36b11a83bf64cc1fd8e787025c23213ff2910b4f2d57b378e110a8414cb0566a\"" Feb 13 19:23:26.844617 containerd[1472]: time="2025-02-13T19:23:26.843533453Z" level=info msg="StartContainer for \"36b11a83bf64cc1fd8e787025c23213ff2910b4f2d57b378e110a8414cb0566a\"" Feb 13 19:23:26.903923 systemd[1]: Started cri-containerd-36b11a83bf64cc1fd8e787025c23213ff2910b4f2d57b378e110a8414cb0566a.scope - libcontainer container 36b11a83bf64cc1fd8e787025c23213ff2910b4f2d57b378e110a8414cb0566a. Feb 13 19:23:26.940128 containerd[1472]: time="2025-02-13T19:23:26.940065509Z" level=info msg="StartContainer for \"36b11a83bf64cc1fd8e787025c23213ff2910b4f2d57b378e110a8414cb0566a\" returns successfully" Feb 13 19:23:27.484716 systemd-networkd[1387]: cali07a718128a3: Gained IPv6LL Feb 13 19:23:27.492433 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount468288721.mount: Deactivated successfully. 
Feb 13 19:23:27.612877 systemd-networkd[1387]: cali257280f6ac6: Gained IPv6LL Feb 13 19:23:27.644067 kubelet[2544]: E0213 19:23:27.644010 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:23:27.652484 kubelet[2544]: E0213 19:23:27.648664 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:23:27.663365 kubelet[2544]: I0213 19:23:27.662888 2544 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-8shn2" podStartSLOduration=23.662836387 podStartE2EDuration="23.662836387s" podCreationTimestamp="2025-02-13 19:23:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:23:27.661845745 +0000 UTC m=+28.500815363" watchObservedRunningTime="2025-02-13 19:23:27.662836387 +0000 UTC m=+28.501806005" Feb 13 19:23:27.691915 systemd[1]: Started sshd@7-10.0.0.135:22-10.0.0.1:50218.service - OpenSSH per-connection server daemon (10.0.0.1:50218). Feb 13 19:23:27.741756 systemd-networkd[1387]: cali02725cd83ae: Gained IPv6LL Feb 13 19:23:27.756944 sshd[5304]: Accepted publickey for core from 10.0.0.1 port 50218 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:23:27.758940 sshd-session[5304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:23:27.765100 systemd-logind[1449]: New session 8 of user core. Feb 13 19:23:27.769754 systemd[1]: Started session-8.scope - Session 8 of User core. 
Feb 13 19:23:27.805678 systemd-networkd[1387]: cali87c386767e3: Gained IPv6LL Feb 13 19:23:27.932840 systemd-networkd[1387]: cali2c54670ce48: Gained IPv6LL Feb 13 19:23:28.029288 containerd[1472]: time="2025-02-13T19:23:28.029168387Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:23:28.030346 containerd[1472]: time="2025-02-13T19:23:28.030310149Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Feb 13 19:23:28.031816 containerd[1472]: time="2025-02-13T19:23:28.031780030Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:23:28.035127 containerd[1472]: time="2025-02-13T19:23:28.035061194Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:23:28.035940 containerd[1472]: time="2025-02-13T19:23:28.035910996Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 1.846872744s" Feb 13 19:23:28.036027 containerd[1472]: time="2025-02-13T19:23:28.035943916Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Feb 13 19:23:28.038588 containerd[1472]: time="2025-02-13T19:23:28.038559559Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Feb 13 
19:23:28.041075 containerd[1472]: time="2025-02-13T19:23:28.041045122Z" level=info msg="CreateContainer within sandbox \"67216be933592c194b5f750ad0f7b80de2cd06ec84cefe5b3cce175dfc0a860d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 19:23:28.058782 containerd[1472]: time="2025-02-13T19:23:28.058717224Z" level=info msg="CreateContainer within sandbox \"67216be933592c194b5f750ad0f7b80de2cd06ec84cefe5b3cce175dfc0a860d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"271f08864b40014b1b7215ff0f6adb8c57f43bbc9e67d32462944177ab03f9ea\"" Feb 13 19:23:28.059649 containerd[1472]: time="2025-02-13T19:23:28.059622785Z" level=info msg="StartContainer for \"271f08864b40014b1b7215ff0f6adb8c57f43bbc9e67d32462944177ab03f9ea\"" Feb 13 19:23:28.094252 sshd[5311]: Connection closed by 10.0.0.1 port 50218 Feb 13 19:23:28.094542 sshd-session[5304]: pam_unix(sshd:session): session closed for user core Feb 13 19:23:28.098618 systemd[1]: sshd@7-10.0.0.135:22-10.0.0.1:50218.service: Deactivated successfully. Feb 13 19:23:28.104987 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 19:23:28.107226 systemd-logind[1449]: Session 8 logged out. Waiting for processes to exit. Feb 13 19:23:28.113927 systemd[1]: Started cri-containerd-271f08864b40014b1b7215ff0f6adb8c57f43bbc9e67d32462944177ab03f9ea.scope - libcontainer container 271f08864b40014b1b7215ff0f6adb8c57f43bbc9e67d32462944177ab03f9ea. Feb 13 19:23:28.114786 systemd-logind[1449]: Removed session 8. 
Feb 13 19:23:28.143097 containerd[1472]: time="2025-02-13T19:23:28.143056008Z" level=info msg="StartContainer for \"271f08864b40014b1b7215ff0f6adb8c57f43bbc9e67d32462944177ab03f9ea\" returns successfully" Feb 13 19:23:28.190172 systemd-networkd[1387]: vxlan.calico: Gained IPv6LL Feb 13 19:23:28.574545 systemd-networkd[1387]: calic392f15dab5: Gained IPv6LL Feb 13 19:23:28.663558 kubelet[2544]: E0213 19:23:28.662104 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:23:28.663558 kubelet[2544]: E0213 19:23:28.662422 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:23:28.672978 kubelet[2544]: I0213 19:23:28.672925 2544 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-slhww" podStartSLOduration=24.672864103 podStartE2EDuration="24.672864103s" podCreationTimestamp="2025-02-13 19:23:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:23:27.69610099 +0000 UTC m=+28.535070608" watchObservedRunningTime="2025-02-13 19:23:28.672864103 +0000 UTC m=+29.511833721" Feb 13 19:23:28.673134 kubelet[2544]: I0213 19:23:28.673055 2544 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5b44d967f-hpx7w" podStartSLOduration=15.822729835 podStartE2EDuration="17.673050383s" podCreationTimestamp="2025-02-13 19:23:11 +0000 UTC" firstStartedPulling="2025-02-13 19:23:26.188093811 +0000 UTC m=+27.027063429" lastFinishedPulling="2025-02-13 19:23:28.038414359 +0000 UTC m=+28.877383977" observedRunningTime="2025-02-13 19:23:28.672192182 +0000 UTC m=+29.511161800" watchObservedRunningTime="2025-02-13 19:23:28.673050383 
+0000 UTC m=+29.512019961" Feb 13 19:23:29.471121 containerd[1472]: time="2025-02-13T19:23:29.471076653Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:23:29.472291 containerd[1472]: time="2025-02-13T19:23:29.471781734Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828" Feb 13 19:23:29.472449 containerd[1472]: time="2025-02-13T19:23:29.472424014Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:23:29.474691 containerd[1472]: time="2025-02-13T19:23:29.474644217Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:23:29.481191 containerd[1472]: time="2025-02-13T19:23:29.481138225Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 1.442541705s" Feb 13 19:23:29.481191 containerd[1472]: time="2025-02-13T19:23:29.481179625Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Feb 13 19:23:29.482733 containerd[1472]: time="2025-02-13T19:23:29.482591226Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 19:23:29.491342 containerd[1472]: time="2025-02-13T19:23:29.491297316Z" level=info 
msg="CreateContainer within sandbox \"87324165241ca1501f48955bfaee7993330e3c0e5d42009f89cffbc4c2f211f5\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Feb 13 19:23:29.502912 containerd[1472]: time="2025-02-13T19:23:29.502874090Z" level=info msg="CreateContainer within sandbox \"87324165241ca1501f48955bfaee7993330e3c0e5d42009f89cffbc4c2f211f5\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"cd766423894ada9bdb773534732fba2547e2278c1d089be13f85d5bdfe01233e\"" Feb 13 19:23:29.504443 containerd[1472]: time="2025-02-13T19:23:29.503967291Z" level=info msg="StartContainer for \"cd766423894ada9bdb773534732fba2547e2278c1d089be13f85d5bdfe01233e\"" Feb 13 19:23:29.534770 systemd[1]: Started cri-containerd-cd766423894ada9bdb773534732fba2547e2278c1d089be13f85d5bdfe01233e.scope - libcontainer container cd766423894ada9bdb773534732fba2547e2278c1d089be13f85d5bdfe01233e. Feb 13 19:23:29.565399 containerd[1472]: time="2025-02-13T19:23:29.565356802Z" level=info msg="StartContainer for \"cd766423894ada9bdb773534732fba2547e2278c1d089be13f85d5bdfe01233e\" returns successfully" Feb 13 19:23:29.672362 kubelet[2544]: I0213 19:23:29.671931 2544 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 19:23:29.673299 kubelet[2544]: E0213 19:23:29.673081 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:23:29.698966 kubelet[2544]: I0213 19:23:29.698908 2544 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-58dfd6696-g69sl" podStartSLOduration=14.582673633 podStartE2EDuration="17.698863877s" podCreationTimestamp="2025-02-13 19:23:12 +0000 UTC" firstStartedPulling="2025-02-13 19:23:26.365763821 +0000 UTC m=+27.204733439" lastFinishedPulling="2025-02-13 19:23:29.481954065 +0000 UTC m=+30.320923683" 
observedRunningTime="2025-02-13 19:23:29.697432755 +0000 UTC m=+30.536402373" watchObservedRunningTime="2025-02-13 19:23:29.698863877 +0000 UTC m=+30.537833495" Feb 13 19:23:29.785607 containerd[1472]: time="2025-02-13T19:23:29.785480537Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:23:29.786456 containerd[1472]: time="2025-02-13T19:23:29.786407378Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Feb 13 19:23:29.788717 containerd[1472]: time="2025-02-13T19:23:29.788685941Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 306.052395ms" Feb 13 19:23:29.788717 containerd[1472]: time="2025-02-13T19:23:29.788718941Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Feb 13 19:23:29.789867 containerd[1472]: time="2025-02-13T19:23:29.789836182Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 19:23:29.790983 containerd[1472]: time="2025-02-13T19:23:29.790928583Z" level=info msg="CreateContainer within sandbox \"07b87bca9c7343e7e1c5fc6f900ce8903ad820d81883aac688b72f8774ed2ffb\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 19:23:29.801391 containerd[1472]: time="2025-02-13T19:23:29.801273235Z" level=info msg="CreateContainer within sandbox \"07b87bca9c7343e7e1c5fc6f900ce8903ad820d81883aac688b72f8774ed2ffb\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id 
\"c9f3630ae09b28bf4ae8b74eb2ad7fe485ff18674282374a6e67802710d45f99\"" Feb 13 19:23:29.801946 containerd[1472]: time="2025-02-13T19:23:29.801917356Z" level=info msg="StartContainer for \"c9f3630ae09b28bf4ae8b74eb2ad7fe485ff18674282374a6e67802710d45f99\"" Feb 13 19:23:29.826760 systemd[1]: Started cri-containerd-c9f3630ae09b28bf4ae8b74eb2ad7fe485ff18674282374a6e67802710d45f99.scope - libcontainer container c9f3630ae09b28bf4ae8b74eb2ad7fe485ff18674282374a6e67802710d45f99. Feb 13 19:23:29.855533 containerd[1472]: time="2025-02-13T19:23:29.855478738Z" level=info msg="StartContainer for \"c9f3630ae09b28bf4ae8b74eb2ad7fe485ff18674282374a6e67802710d45f99\" returns successfully" Feb 13 19:23:30.680392 kubelet[2544]: I0213 19:23:30.680358 2544 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 19:23:30.698397 kubelet[2544]: I0213 19:23:30.698140 2544 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5b44d967f-p6w96" podStartSLOduration=16.411114335 podStartE2EDuration="19.698121904s" podCreationTimestamp="2025-02-13 19:23:11 +0000 UTC" firstStartedPulling="2025-02-13 19:23:26.502459133 +0000 UTC m=+27.341428751" lastFinishedPulling="2025-02-13 19:23:29.789466702 +0000 UTC m=+30.628436320" observedRunningTime="2025-02-13 19:23:30.697247303 +0000 UTC m=+31.536216961" watchObservedRunningTime="2025-02-13 19:23:30.698121904 +0000 UTC m=+31.537091522" Feb 13 19:23:30.829859 containerd[1472]: time="2025-02-13T19:23:30.829515127Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:23:30.830910 containerd[1472]: time="2025-02-13T19:23:30.830855888Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Feb 13 19:23:30.832171 containerd[1472]: time="2025-02-13T19:23:30.832128250Z" level=info msg="ImageCreate event 
name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:23:30.834654 containerd[1472]: time="2025-02-13T19:23:30.834330492Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:23:30.835234 containerd[1472]: time="2025-02-13T19:23:30.835183573Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 1.045224271s" Feb 13 19:23:30.835303 containerd[1472]: time="2025-02-13T19:23:30.835245013Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Feb 13 19:23:30.838001 containerd[1472]: time="2025-02-13T19:23:30.837967736Z" level=info msg="CreateContainer within sandbox \"550ac9b64daf7c6adf00e615d2a9c057b2f93822967c9f57908a49bc6251ab50\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 19:23:30.856300 containerd[1472]: time="2025-02-13T19:23:30.856243756Z" level=info msg="CreateContainer within sandbox \"550ac9b64daf7c6adf00e615d2a9c057b2f93822967c9f57908a49bc6251ab50\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"b6ba59e27f1357c7a9ee910d3f32c145147f4d8cc52dfea4c6821f285135b487\"" Feb 13 19:23:30.857221 containerd[1472]: time="2025-02-13T19:23:30.857182797Z" level=info msg="StartContainer for \"b6ba59e27f1357c7a9ee910d3f32c145147f4d8cc52dfea4c6821f285135b487\"" Feb 13 19:23:30.893756 systemd[1]: Started 
cri-containerd-b6ba59e27f1357c7a9ee910d3f32c145147f4d8cc52dfea4c6821f285135b487.scope - libcontainer container b6ba59e27f1357c7a9ee910d3f32c145147f4d8cc52dfea4c6821f285135b487. Feb 13 19:23:30.930948 containerd[1472]: time="2025-02-13T19:23:30.930832237Z" level=info msg="StartContainer for \"b6ba59e27f1357c7a9ee910d3f32c145147f4d8cc52dfea4c6821f285135b487\" returns successfully" Feb 13 19:23:30.932622 containerd[1472]: time="2025-02-13T19:23:30.932563999Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 13 19:23:31.487685 systemd[1]: run-containerd-runc-k8s.io-b6ba59e27f1357c7a9ee910d3f32c145147f4d8cc52dfea4c6821f285135b487-runc.EoLo1h.mount: Deactivated successfully. Feb 13 19:23:31.687955 kubelet[2544]: I0213 19:23:31.687923 2544 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 19:23:32.033169 containerd[1472]: time="2025-02-13T19:23:32.033119682Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:23:32.034331 containerd[1472]: time="2025-02-13T19:23:32.034167923Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Feb 13 19:23:32.035557 containerd[1472]: time="2025-02-13T19:23:32.035521284Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:23:32.038607 containerd[1472]: time="2025-02-13T19:23:32.038559167Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:23:32.039435 containerd[1472]: time="2025-02-13T19:23:32.039094368Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.106495649s" Feb 13 19:23:32.039435 containerd[1472]: time="2025-02-13T19:23:32.039127048Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Feb 13 19:23:32.041487 containerd[1472]: time="2025-02-13T19:23:32.041444290Z" level=info msg="CreateContainer within sandbox \"550ac9b64daf7c6adf00e615d2a9c057b2f93822967c9f57908a49bc6251ab50\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 13 19:23:32.063092 containerd[1472]: time="2025-02-13T19:23:32.063026791Z" level=info msg="CreateContainer within sandbox \"550ac9b64daf7c6adf00e615d2a9c057b2f93822967c9f57908a49bc6251ab50\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"86b214bbc67f259b72863fdef20c9fe29f83d57c5e7b30a8f5fb2c19b1ce49f3\"" Feb 13 19:23:32.063629 containerd[1472]: time="2025-02-13T19:23:32.063508391Z" level=info msg="StartContainer for \"86b214bbc67f259b72863fdef20c9fe29f83d57c5e7b30a8f5fb2c19b1ce49f3\"" Feb 13 19:23:32.097815 systemd[1]: Started cri-containerd-86b214bbc67f259b72863fdef20c9fe29f83d57c5e7b30a8f5fb2c19b1ce49f3.scope - libcontainer container 86b214bbc67f259b72863fdef20c9fe29f83d57c5e7b30a8f5fb2c19b1ce49f3. 
Feb 13 19:23:32.146979 containerd[1472]: time="2025-02-13T19:23:32.146844511Z" level=info msg="StartContainer for \"86b214bbc67f259b72863fdef20c9fe29f83d57c5e7b30a8f5fb2c19b1ce49f3\" returns successfully" Feb 13 19:23:32.326517 kubelet[2544]: I0213 19:23:32.326370 2544 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 13 19:23:32.330014 kubelet[2544]: I0213 19:23:32.329975 2544 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 13 19:23:32.487684 systemd[1]: run-containerd-runc-k8s.io-86b214bbc67f259b72863fdef20c9fe29f83d57c5e7b30a8f5fb2c19b1ce49f3-runc.Hj027v.mount: Deactivated successfully. Feb 13 19:23:32.705170 kubelet[2544]: I0213 19:23:32.704829 2544 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-dtfv4" podStartSLOduration=15.399022613 podStartE2EDuration="20.704811883s" podCreationTimestamp="2025-02-13 19:23:12 +0000 UTC" firstStartedPulling="2025-02-13 19:23:26.734197659 +0000 UTC m=+27.573167277" lastFinishedPulling="2025-02-13 19:23:32.039986929 +0000 UTC m=+32.878956547" observedRunningTime="2025-02-13 19:23:32.703737722 +0000 UTC m=+33.542707340" watchObservedRunningTime="2025-02-13 19:23:32.704811883 +0000 UTC m=+33.543781501" Feb 13 19:23:33.105310 systemd[1]: Started sshd@8-10.0.0.135:22-10.0.0.1:46924.service - OpenSSH per-connection server daemon (10.0.0.1:46924). Feb 13 19:23:33.164040 sshd[5562]: Accepted publickey for core from 10.0.0.1 port 46924 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:23:33.165823 sshd-session[5562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:23:33.170408 systemd-logind[1449]: New session 9 of user core. 
Feb 13 19:23:33.177807 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 19:23:33.290884 kubelet[2544]: I0213 19:23:33.290433 2544 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 19:23:33.317222 systemd[1]: run-containerd-runc-k8s.io-cd766423894ada9bdb773534732fba2547e2278c1d089be13f85d5bdfe01233e-runc.06EJKG.mount: Deactivated successfully. Feb 13 19:23:33.356221 systemd[1]: run-containerd-runc-k8s.io-cd766423894ada9bdb773534732fba2547e2278c1d089be13f85d5bdfe01233e-runc.m0la1G.mount: Deactivated successfully. Feb 13 19:23:33.437394 sshd[5564]: Connection closed by 10.0.0.1 port 46924 Feb 13 19:23:33.437741 sshd-session[5562]: pam_unix(sshd:session): session closed for user core Feb 13 19:23:33.440440 systemd[1]: sshd@8-10.0.0.135:22-10.0.0.1:46924.service: Deactivated successfully. Feb 13 19:23:33.442207 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 19:23:33.443408 systemd-logind[1449]: Session 9 logged out. Waiting for processes to exit. Feb 13 19:23:33.444206 systemd-logind[1449]: Removed session 9. Feb 13 19:23:37.233283 kubelet[2544]: I0213 19:23:37.233233 2544 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 19:23:38.461455 systemd[1]: Started sshd@9-10.0.0.135:22-10.0.0.1:46928.service - OpenSSH per-connection server daemon (10.0.0.1:46928). Feb 13 19:23:38.502838 sshd[5630]: Accepted publickey for core from 10.0.0.1 port 46928 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:23:38.504093 sshd-session[5630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:23:38.508891 systemd-logind[1449]: New session 10 of user core. Feb 13 19:23:38.519825 systemd[1]: Started session-10.scope - Session 10 of User core. 
Feb 13 19:23:38.655241 sshd[5632]: Connection closed by 10.0.0.1 port 46928 Feb 13 19:23:38.655828 sshd-session[5630]: pam_unix(sshd:session): session closed for user core Feb 13 19:23:38.668292 systemd[1]: sshd@9-10.0.0.135:22-10.0.0.1:46928.service: Deactivated successfully. Feb 13 19:23:38.669989 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 19:23:38.671341 systemd-logind[1449]: Session 10 logged out. Waiting for processes to exit. Feb 13 19:23:38.672846 systemd[1]: Started sshd@10-10.0.0.135:22-10.0.0.1:46942.service - OpenSSH per-connection server daemon (10.0.0.1:46942). Feb 13 19:23:38.674127 systemd-logind[1449]: Removed session 10. Feb 13 19:23:38.727326 sshd[5645]: Accepted publickey for core from 10.0.0.1 port 46942 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:23:38.728719 sshd-session[5645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:23:38.734790 systemd-logind[1449]: New session 11 of user core. Feb 13 19:23:38.748805 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 19:23:38.940254 sshd[5647]: Connection closed by 10.0.0.1 port 46942 Feb 13 19:23:38.940847 sshd-session[5645]: pam_unix(sshd:session): session closed for user core Feb 13 19:23:38.950917 systemd[1]: sshd@10-10.0.0.135:22-10.0.0.1:46942.service: Deactivated successfully. Feb 13 19:23:38.953827 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 19:23:38.957985 systemd-logind[1449]: Session 11 logged out. Waiting for processes to exit. Feb 13 19:23:38.971247 systemd[1]: Started sshd@11-10.0.0.135:22-10.0.0.1:46950.service - OpenSSH per-connection server daemon (10.0.0.1:46950). Feb 13 19:23:38.972585 systemd-logind[1449]: Removed session 11. 
Feb 13 19:23:39.007684 sshd[5657]: Accepted publickey for core from 10.0.0.1 port 46950 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:23:39.008916 sshd-session[5657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:23:39.013567 systemd-logind[1449]: New session 12 of user core. Feb 13 19:23:39.021801 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 19:23:39.151497 sshd[5659]: Connection closed by 10.0.0.1 port 46950 Feb 13 19:23:39.151857 sshd-session[5657]: pam_unix(sshd:session): session closed for user core Feb 13 19:23:39.155076 systemd[1]: sshd@11-10.0.0.135:22-10.0.0.1:46950.service: Deactivated successfully. Feb 13 19:23:39.156757 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 19:23:39.157306 systemd-logind[1449]: Session 12 logged out. Waiting for processes to exit. Feb 13 19:23:39.158094 systemd-logind[1449]: Removed session 12. Feb 13 19:23:44.168928 systemd[1]: Started sshd@12-10.0.0.135:22-10.0.0.1:53802.service - OpenSSH per-connection server daemon (10.0.0.1:53802). Feb 13 19:23:44.218488 sshd[5678]: Accepted publickey for core from 10.0.0.1 port 53802 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:23:44.220167 sshd-session[5678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:23:44.226451 systemd-logind[1449]: New session 13 of user core. Feb 13 19:23:44.234845 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 19:23:44.472760 sshd[5680]: Connection closed by 10.0.0.1 port 53802 Feb 13 19:23:44.473344 sshd-session[5678]: pam_unix(sshd:session): session closed for user core Feb 13 19:23:44.483846 systemd[1]: sshd@12-10.0.0.135:22-10.0.0.1:53802.service: Deactivated successfully. Feb 13 19:23:44.485584 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 19:23:44.487381 systemd-logind[1449]: Session 13 logged out. Waiting for processes to exit. 
Feb 13 19:23:44.488262 systemd-logind[1449]: Removed session 13.
Feb 13 19:23:47.975918 kubelet[2544]: E0213 19:23:47.975864 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:23:49.487428 systemd[1]: Started sshd@13-10.0.0.135:22-10.0.0.1:53814.service - OpenSSH per-connection server daemon (10.0.0.1:53814).
Feb 13 19:23:49.542865 sshd[5723]: Accepted publickey for core from 10.0.0.1 port 53814 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg
Feb 13 19:23:49.544452 sshd-session[5723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:23:49.548034 systemd-logind[1449]: New session 14 of user core.
Feb 13 19:23:49.563767 systemd[1]: Started session-14.scope - Session 14 of User core.
Feb 13 19:23:49.731697 sshd[5725]: Connection closed by 10.0.0.1 port 53814
Feb 13 19:23:49.732050 sshd-session[5723]: pam_unix(sshd:session): session closed for user core
Feb 13 19:23:49.744014 systemd[1]: sshd@13-10.0.0.135:22-10.0.0.1:53814.service: Deactivated successfully.
Feb 13 19:23:49.746126 systemd[1]: session-14.scope: Deactivated successfully.
Feb 13 19:23:49.747739 systemd-logind[1449]: Session 14 logged out. Waiting for processes to exit.
Feb 13 19:23:49.755905 systemd[1]: Started sshd@14-10.0.0.135:22-10.0.0.1:53824.service - OpenSSH per-connection server daemon (10.0.0.1:53824).
Feb 13 19:23:49.757628 systemd-logind[1449]: Removed session 14.
Feb 13 19:23:49.791881 sshd[5737]: Accepted publickey for core from 10.0.0.1 port 53824 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg
Feb 13 19:23:49.792994 sshd-session[5737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:23:49.797540 systemd-logind[1449]: New session 15 of user core.
Feb 13 19:23:49.803856 systemd[1]: Started session-15.scope - Session 15 of User core.
Feb 13 19:23:49.994897 sshd[5739]: Connection closed by 10.0.0.1 port 53824
Feb 13 19:23:49.995514 sshd-session[5737]: pam_unix(sshd:session): session closed for user core
Feb 13 19:23:50.009218 systemd[1]: sshd@14-10.0.0.135:22-10.0.0.1:53824.service: Deactivated successfully.
Feb 13 19:23:50.010701 systemd[1]: session-15.scope: Deactivated successfully.
Feb 13 19:23:50.012994 systemd-logind[1449]: Session 15 logged out. Waiting for processes to exit.
Feb 13 19:23:50.014893 systemd[1]: Started sshd@15-10.0.0.135:22-10.0.0.1:53838.service - OpenSSH per-connection server daemon (10.0.0.1:53838).
Feb 13 19:23:50.015955 systemd-logind[1449]: Removed session 15.
Feb 13 19:23:50.073006 sshd[5750]: Accepted publickey for core from 10.0.0.1 port 53838 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg
Feb 13 19:23:50.074289 sshd-session[5750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:23:50.078320 systemd-logind[1449]: New session 16 of user core.
Feb 13 19:23:50.087772 systemd[1]: Started session-16.scope - Session 16 of User core.
Feb 13 19:23:50.254029 kubelet[2544]: I0213 19:23:50.253928 2544 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 13 19:23:51.712350 sshd[5752]: Connection closed by 10.0.0.1 port 53838
Feb 13 19:23:51.714473 sshd-session[5750]: pam_unix(sshd:session): session closed for user core
Feb 13 19:23:51.725822 systemd[1]: sshd@15-10.0.0.135:22-10.0.0.1:53838.service: Deactivated successfully.
Feb 13 19:23:51.729824 systemd[1]: session-16.scope: Deactivated successfully.
Feb 13 19:23:51.732431 systemd-logind[1449]: Session 16 logged out. Waiting for processes to exit.
Feb 13 19:23:51.738937 systemd[1]: Started sshd@16-10.0.0.135:22-10.0.0.1:53846.service - OpenSSH per-connection server daemon (10.0.0.1:53846).
Feb 13 19:23:51.740233 systemd-logind[1449]: Removed session 16.
Feb 13 19:23:51.787191 sshd[5774]: Accepted publickey for core from 10.0.0.1 port 53846 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg
Feb 13 19:23:51.788681 sshd-session[5774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:23:51.793345 systemd-logind[1449]: New session 17 of user core.
Feb 13 19:23:51.799817 systemd[1]: Started session-17.scope - Session 17 of User core.
Feb 13 19:23:52.158314 sshd[5776]: Connection closed by 10.0.0.1 port 53846
Feb 13 19:23:52.158580 sshd-session[5774]: pam_unix(sshd:session): session closed for user core
Feb 13 19:23:52.167503 systemd[1]: sshd@16-10.0.0.135:22-10.0.0.1:53846.service: Deactivated successfully.
Feb 13 19:23:52.169309 systemd[1]: session-17.scope: Deactivated successfully.
Feb 13 19:23:52.171123 systemd-logind[1449]: Session 17 logged out. Waiting for processes to exit.
Feb 13 19:23:52.179697 systemd[1]: Started sshd@17-10.0.0.135:22-10.0.0.1:53854.service - OpenSSH per-connection server daemon (10.0.0.1:53854).
Feb 13 19:23:52.183257 systemd-logind[1449]: Removed session 17.
Feb 13 19:23:52.217436 sshd[5787]: Accepted publickey for core from 10.0.0.1 port 53854 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg
Feb 13 19:23:52.218749 sshd-session[5787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:23:52.223616 systemd-logind[1449]: New session 18 of user core.
Feb 13 19:23:52.227961 systemd[1]: Started session-18.scope - Session 18 of User core.
Feb 13 19:23:52.344628 sshd[5789]: Connection closed by 10.0.0.1 port 53854
Feb 13 19:23:52.345938 sshd-session[5787]: pam_unix(sshd:session): session closed for user core
Feb 13 19:23:52.348417 systemd[1]: sshd@17-10.0.0.135:22-10.0.0.1:53854.service: Deactivated successfully.
Feb 13 19:23:52.350259 systemd[1]: session-18.scope: Deactivated successfully.
Feb 13 19:23:52.351732 systemd-logind[1449]: Session 18 logged out. Waiting for processes to exit.
Feb 13 19:23:52.352566 systemd-logind[1449]: Removed session 18.
Feb 13 19:23:57.369835 systemd[1]: Started sshd@18-10.0.0.135:22-10.0.0.1:59596.service - OpenSSH per-connection server daemon (10.0.0.1:59596).
Feb 13 19:23:57.412639 sshd[5805]: Accepted publickey for core from 10.0.0.1 port 59596 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg
Feb 13 19:23:57.414068 sshd-session[5805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:23:57.426814 systemd-logind[1449]: New session 19 of user core.
Feb 13 19:23:57.436875 systemd[1]: Started session-19.scope - Session 19 of User core.
Feb 13 19:23:57.589675 sshd[5807]: Connection closed by 10.0.0.1 port 59596
Feb 13 19:23:57.589563 sshd-session[5805]: pam_unix(sshd:session): session closed for user core
Feb 13 19:23:57.593542 systemd[1]: sshd@18-10.0.0.135:22-10.0.0.1:59596.service: Deactivated successfully.
Feb 13 19:23:57.595339 systemd[1]: session-19.scope: Deactivated successfully.
Feb 13 19:23:57.598429 systemd-logind[1449]: Session 19 logged out. Waiting for processes to exit.
Feb 13 19:23:57.599629 systemd-logind[1449]: Removed session 19.
Feb 13 19:23:59.240201 containerd[1472]: time="2025-02-13T19:23:59.240154173Z" level=info msg="StopPodSandbox for \"dd11beb574d3fbe9971dfab17c87cd30120129847d48f855e440b73909b823f2\""
Feb 13 19:23:59.240798 containerd[1472]: time="2025-02-13T19:23:59.240270613Z" level=info msg="TearDown network for sandbox \"dd11beb574d3fbe9971dfab17c87cd30120129847d48f855e440b73909b823f2\" successfully"
Feb 13 19:23:59.240798 containerd[1472]: time="2025-02-13T19:23:59.240281653Z" level=info msg="StopPodSandbox for \"dd11beb574d3fbe9971dfab17c87cd30120129847d48f855e440b73909b823f2\" returns successfully"
Feb 13 19:23:59.242361 containerd[1472]: time="2025-02-13T19:23:59.241149053Z" level=info msg="RemovePodSandbox for \"dd11beb574d3fbe9971dfab17c87cd30120129847d48f855e440b73909b823f2\""
Feb 13 19:23:59.242361 containerd[1472]: time="2025-02-13T19:23:59.241182173Z" level=info msg="Forcibly stopping sandbox \"dd11beb574d3fbe9971dfab17c87cd30120129847d48f855e440b73909b823f2\""
Feb 13 19:23:59.242361 containerd[1472]: time="2025-02-13T19:23:59.241252293Z" level=info msg="TearDown network for sandbox \"dd11beb574d3fbe9971dfab17c87cd30120129847d48f855e440b73909b823f2\" successfully"
Feb 13 19:23:59.270439 containerd[1472]: time="2025-02-13T19:23:59.270379538Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dd11beb574d3fbe9971dfab17c87cd30120129847d48f855e440b73909b823f2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:23:59.270544 containerd[1472]: time="2025-02-13T19:23:59.270468298Z" level=info msg="RemovePodSandbox \"dd11beb574d3fbe9971dfab17c87cd30120129847d48f855e440b73909b823f2\" returns successfully"
Feb 13 19:23:59.271365 containerd[1472]: time="2025-02-13T19:23:59.271155898Z" level=info msg="StopPodSandbox for \"9c587b1c1873a4da0b2c59ad781386932a84f435d9277e390c1c647867dcf9c6\""
Feb 13 19:23:59.271365 containerd[1472]: time="2025-02-13T19:23:59.271344858Z" level=info msg="TearDown network for sandbox \"9c587b1c1873a4da0b2c59ad781386932a84f435d9277e390c1c647867dcf9c6\" successfully"
Feb 13 19:23:59.271586 containerd[1472]: time="2025-02-13T19:23:59.271494618Z" level=info msg="StopPodSandbox for \"9c587b1c1873a4da0b2c59ad781386932a84f435d9277e390c1c647867dcf9c6\" returns successfully"
Feb 13 19:23:59.273064 containerd[1472]: time="2025-02-13T19:23:59.271768738Z" level=info msg="RemovePodSandbox for \"9c587b1c1873a4da0b2c59ad781386932a84f435d9277e390c1c647867dcf9c6\""
Feb 13 19:23:59.273064 containerd[1472]: time="2025-02-13T19:23:59.271797418Z" level=info msg="Forcibly stopping sandbox \"9c587b1c1873a4da0b2c59ad781386932a84f435d9277e390c1c647867dcf9c6\""
Feb 13 19:23:59.273064 containerd[1472]: time="2025-02-13T19:23:59.271853218Z" level=info msg="TearDown network for sandbox \"9c587b1c1873a4da0b2c59ad781386932a84f435d9277e390c1c647867dcf9c6\" successfully"
Feb 13 19:23:59.277024 containerd[1472]: time="2025-02-13T19:23:59.276991819Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9c587b1c1873a4da0b2c59ad781386932a84f435d9277e390c1c647867dcf9c6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:23:59.277153 containerd[1472]: time="2025-02-13T19:23:59.277133499Z" level=info msg="RemovePodSandbox \"9c587b1c1873a4da0b2c59ad781386932a84f435d9277e390c1c647867dcf9c6\" returns successfully"
Feb 13 19:23:59.287075 containerd[1472]: time="2025-02-13T19:23:59.283929460Z" level=info msg="StopPodSandbox for \"7c2a32b901b31e27ae1a20a1fe61ade17e69b2e6fa8b5f8f14ee716821af402d\""
Feb 13 19:23:59.287075 containerd[1472]: time="2025-02-13T19:23:59.287010780Z" level=info msg="TearDown network for sandbox \"7c2a32b901b31e27ae1a20a1fe61ade17e69b2e6fa8b5f8f14ee716821af402d\" successfully"
Feb 13 19:23:59.287248 containerd[1472]: time="2025-02-13T19:23:59.287095420Z" level=info msg="StopPodSandbox for \"7c2a32b901b31e27ae1a20a1fe61ade17e69b2e6fa8b5f8f14ee716821af402d\" returns successfully"
Feb 13 19:23:59.287554 containerd[1472]: time="2025-02-13T19:23:59.287525021Z" level=info msg="RemovePodSandbox for \"7c2a32b901b31e27ae1a20a1fe61ade17e69b2e6fa8b5f8f14ee716821af402d\""
Feb 13 19:23:59.287554 containerd[1472]: time="2025-02-13T19:23:59.287553381Z" level=info msg="Forcibly stopping sandbox \"7c2a32b901b31e27ae1a20a1fe61ade17e69b2e6fa8b5f8f14ee716821af402d\""
Feb 13 19:23:59.287650 containerd[1472]: time="2025-02-13T19:23:59.287634141Z" level=info msg="TearDown network for sandbox \"7c2a32b901b31e27ae1a20a1fe61ade17e69b2e6fa8b5f8f14ee716821af402d\" successfully"
Feb 13 19:23:59.292404 containerd[1472]: time="2025-02-13T19:23:59.292360101Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7c2a32b901b31e27ae1a20a1fe61ade17e69b2e6fa8b5f8f14ee716821af402d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:23:59.292490 containerd[1472]: time="2025-02-13T19:23:59.292449981Z" level=info msg="RemovePodSandbox \"7c2a32b901b31e27ae1a20a1fe61ade17e69b2e6fa8b5f8f14ee716821af402d\" returns successfully"
Feb 13 19:23:59.292914 containerd[1472]: time="2025-02-13T19:23:59.292883901Z" level=info msg="StopPodSandbox for \"6016e85c998124b83ac646fb6a3dabbbc1920d9fd029722e7215ebdffb13a5be\""
Feb 13 19:23:59.292976 containerd[1472]: time="2025-02-13T19:23:59.292965781Z" level=info msg="TearDown network for sandbox \"6016e85c998124b83ac646fb6a3dabbbc1920d9fd029722e7215ebdffb13a5be\" successfully"
Feb 13 19:23:59.293001 containerd[1472]: time="2025-02-13T19:23:59.292977141Z" level=info msg="StopPodSandbox for \"6016e85c998124b83ac646fb6a3dabbbc1920d9fd029722e7215ebdffb13a5be\" returns successfully"
Feb 13 19:23:59.293244 containerd[1472]: time="2025-02-13T19:23:59.293224382Z" level=info msg="RemovePodSandbox for \"6016e85c998124b83ac646fb6a3dabbbc1920d9fd029722e7215ebdffb13a5be\""
Feb 13 19:23:59.293302 containerd[1472]: time="2025-02-13T19:23:59.293246022Z" level=info msg="Forcibly stopping sandbox \"6016e85c998124b83ac646fb6a3dabbbc1920d9fd029722e7215ebdffb13a5be\""
Feb 13 19:23:59.293329 containerd[1472]: time="2025-02-13T19:23:59.293311302Z" level=info msg="TearDown network for sandbox \"6016e85c998124b83ac646fb6a3dabbbc1920d9fd029722e7215ebdffb13a5be\" successfully"
Feb 13 19:23:59.308396 containerd[1472]: time="2025-02-13T19:23:59.308344104Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6016e85c998124b83ac646fb6a3dabbbc1920d9fd029722e7215ebdffb13a5be\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:23:59.308635 containerd[1472]: time="2025-02-13T19:23:59.308405664Z" level=info msg="RemovePodSandbox \"6016e85c998124b83ac646fb6a3dabbbc1920d9fd029722e7215ebdffb13a5be\" returns successfully"
Feb 13 19:23:59.309221 containerd[1472]: time="2025-02-13T19:23:59.309166024Z" level=info msg="StopPodSandbox for \"7e6feab27645ffd61b92d460846c135a9d5977d67d273cbde0a2439d3c194c38\""
Feb 13 19:23:59.309284 containerd[1472]: time="2025-02-13T19:23:59.309265664Z" level=info msg="TearDown network for sandbox \"7e6feab27645ffd61b92d460846c135a9d5977d67d273cbde0a2439d3c194c38\" successfully"
Feb 13 19:23:59.309284 containerd[1472]: time="2025-02-13T19:23:59.309279944Z" level=info msg="StopPodSandbox for \"7e6feab27645ffd61b92d460846c135a9d5977d67d273cbde0a2439d3c194c38\" returns successfully"
Feb 13 19:23:59.309662 containerd[1472]: time="2025-02-13T19:23:59.309641384Z" level=info msg="RemovePodSandbox for \"7e6feab27645ffd61b92d460846c135a9d5977d67d273cbde0a2439d3c194c38\""
Feb 13 19:23:59.309716 containerd[1472]: time="2025-02-13T19:23:59.309665744Z" level=info msg="Forcibly stopping sandbox \"7e6feab27645ffd61b92d460846c135a9d5977d67d273cbde0a2439d3c194c38\""
Feb 13 19:23:59.309739 containerd[1472]: time="2025-02-13T19:23:59.309723384Z" level=info msg="TearDown network for sandbox \"7e6feab27645ffd61b92d460846c135a9d5977d67d273cbde0a2439d3c194c38\" successfully"
Feb 13 19:23:59.314711 containerd[1472]: time="2025-02-13T19:23:59.314615665Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7e6feab27645ffd61b92d460846c135a9d5977d67d273cbde0a2439d3c194c38\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:23:59.314836 containerd[1472]: time="2025-02-13T19:23:59.314720505Z" level=info msg="RemovePodSandbox \"7e6feab27645ffd61b92d460846c135a9d5977d67d273cbde0a2439d3c194c38\" returns successfully"
Feb 13 19:23:59.315181 containerd[1472]: time="2025-02-13T19:23:59.315112065Z" level=info msg="StopPodSandbox for \"79cc9bd063882a3af3beda81b96a936ae1bf51ac45f2a1def3f15fdb84de30f2\""
Feb 13 19:23:59.315300 containerd[1472]: time="2025-02-13T19:23:59.315208345Z" level=info msg="TearDown network for sandbox \"79cc9bd063882a3af3beda81b96a936ae1bf51ac45f2a1def3f15fdb84de30f2\" successfully"
Feb 13 19:23:59.315300 containerd[1472]: time="2025-02-13T19:23:59.315220145Z" level=info msg="StopPodSandbox for \"79cc9bd063882a3af3beda81b96a936ae1bf51ac45f2a1def3f15fdb84de30f2\" returns successfully"
Feb 13 19:23:59.315655 containerd[1472]: time="2025-02-13T19:23:59.315473945Z" level=info msg="RemovePodSandbox for \"79cc9bd063882a3af3beda81b96a936ae1bf51ac45f2a1def3f15fdb84de30f2\""
Feb 13 19:23:59.315655 containerd[1472]: time="2025-02-13T19:23:59.315502025Z" level=info msg="Forcibly stopping sandbox \"79cc9bd063882a3af3beda81b96a936ae1bf51ac45f2a1def3f15fdb84de30f2\""
Feb 13 19:23:59.315655 containerd[1472]: time="2025-02-13T19:23:59.315569945Z" level=info msg="TearDown network for sandbox \"79cc9bd063882a3af3beda81b96a936ae1bf51ac45f2a1def3f15fdb84de30f2\" successfully"
Feb 13 19:23:59.321953 containerd[1472]: time="2025-02-13T19:23:59.321917826Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"79cc9bd063882a3af3beda81b96a936ae1bf51ac45f2a1def3f15fdb84de30f2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:23:59.322011 containerd[1472]: time="2025-02-13T19:23:59.321971746Z" level=info msg="RemovePodSandbox \"79cc9bd063882a3af3beda81b96a936ae1bf51ac45f2a1def3f15fdb84de30f2\" returns successfully"
Feb 13 19:23:59.322466 containerd[1472]: time="2025-02-13T19:23:59.322431426Z" level=info msg="StopPodSandbox for \"f08c19b9a372aabce29703f12b709d1b46d9e7576abfa63f9ca341a1ee693428\""
Feb 13 19:23:59.322974 containerd[1472]: time="2025-02-13T19:23:59.322761186Z" level=info msg="TearDown network for sandbox \"f08c19b9a372aabce29703f12b709d1b46d9e7576abfa63f9ca341a1ee693428\" successfully"
Feb 13 19:23:59.322974 containerd[1472]: time="2025-02-13T19:23:59.322849866Z" level=info msg="StopPodSandbox for \"f08c19b9a372aabce29703f12b709d1b46d9e7576abfa63f9ca341a1ee693428\" returns successfully"
Feb 13 19:23:59.323222 containerd[1472]: time="2025-02-13T19:23:59.323191667Z" level=info msg="RemovePodSandbox for \"f08c19b9a372aabce29703f12b709d1b46d9e7576abfa63f9ca341a1ee693428\""
Feb 13 19:23:59.323260 containerd[1472]: time="2025-02-13T19:23:59.323229987Z" level=info msg="Forcibly stopping sandbox \"f08c19b9a372aabce29703f12b709d1b46d9e7576abfa63f9ca341a1ee693428\""
Feb 13 19:23:59.323317 containerd[1472]: time="2025-02-13T19:23:59.323303347Z" level=info msg="TearDown network for sandbox \"f08c19b9a372aabce29703f12b709d1b46d9e7576abfa63f9ca341a1ee693428\" successfully"
Feb 13 19:23:59.325502 containerd[1472]: time="2025-02-13T19:23:59.325470787Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f08c19b9a372aabce29703f12b709d1b46d9e7576abfa63f9ca341a1ee693428\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:23:59.325581 containerd[1472]: time="2025-02-13T19:23:59.325520267Z" level=info msg="RemovePodSandbox \"f08c19b9a372aabce29703f12b709d1b46d9e7576abfa63f9ca341a1ee693428\" returns successfully"
Feb 13 19:23:59.326028 containerd[1472]: time="2025-02-13T19:23:59.325878707Z" level=info msg="StopPodSandbox for \"fff6122e0e7af1c4d61552d62a901428e4905f43cb9d5e0cb796801cf8c19caa\""
Feb 13 19:23:59.326028 containerd[1472]: time="2025-02-13T19:23:59.325959907Z" level=info msg="TearDown network for sandbox \"fff6122e0e7af1c4d61552d62a901428e4905f43cb9d5e0cb796801cf8c19caa\" successfully"
Feb 13 19:23:59.326028 containerd[1472]: time="2025-02-13T19:23:59.325969267Z" level=info msg="StopPodSandbox for \"fff6122e0e7af1c4d61552d62a901428e4905f43cb9d5e0cb796801cf8c19caa\" returns successfully"
Feb 13 19:23:59.326473 containerd[1472]: time="2025-02-13T19:23:59.326324947Z" level=info msg="RemovePodSandbox for \"fff6122e0e7af1c4d61552d62a901428e4905f43cb9d5e0cb796801cf8c19caa\""
Feb 13 19:23:59.326473 containerd[1472]: time="2025-02-13T19:23:59.326347107Z" level=info msg="Forcibly stopping sandbox \"fff6122e0e7af1c4d61552d62a901428e4905f43cb9d5e0cb796801cf8c19caa\""
Feb 13 19:23:59.326473 containerd[1472]: time="2025-02-13T19:23:59.326423427Z" level=info msg="TearDown network for sandbox \"fff6122e0e7af1c4d61552d62a901428e4905f43cb9d5e0cb796801cf8c19caa\" successfully"
Feb 13 19:23:59.329406 containerd[1472]: time="2025-02-13T19:23:59.329140788Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fff6122e0e7af1c4d61552d62a901428e4905f43cb9d5e0cb796801cf8c19caa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:23:59.329406 containerd[1472]: time="2025-02-13T19:23:59.329257148Z" level=info msg="RemovePodSandbox \"fff6122e0e7af1c4d61552d62a901428e4905f43cb9d5e0cb796801cf8c19caa\" returns successfully"
Feb 13 19:23:59.329819 containerd[1472]: time="2025-02-13T19:23:59.329659108Z" level=info msg="StopPodSandbox for \"8838278d5d134ca219edb457e7749500a22cc1ccd0695e4995e6fa55293cd05e\""
Feb 13 19:23:59.329819 containerd[1472]: time="2025-02-13T19:23:59.329742508Z" level=info msg="TearDown network for sandbox \"8838278d5d134ca219edb457e7749500a22cc1ccd0695e4995e6fa55293cd05e\" successfully"
Feb 13 19:23:59.329819 containerd[1472]: time="2025-02-13T19:23:59.329762508Z" level=info msg="StopPodSandbox for \"8838278d5d134ca219edb457e7749500a22cc1ccd0695e4995e6fa55293cd05e\" returns successfully"
Feb 13 19:23:59.330278 containerd[1472]: time="2025-02-13T19:23:59.330215508Z" level=info msg="RemovePodSandbox for \"8838278d5d134ca219edb457e7749500a22cc1ccd0695e4995e6fa55293cd05e\""
Feb 13 19:23:59.330278 containerd[1472]: time="2025-02-13T19:23:59.330240108Z" level=info msg="Forcibly stopping sandbox \"8838278d5d134ca219edb457e7749500a22cc1ccd0695e4995e6fa55293cd05e\""
Feb 13 19:23:59.330982 containerd[1472]: time="2025-02-13T19:23:59.330463788Z" level=info msg="TearDown network for sandbox \"8838278d5d134ca219edb457e7749500a22cc1ccd0695e4995e6fa55293cd05e\" successfully"
Feb 13 19:23:59.333113 containerd[1472]: time="2025-02-13T19:23:59.333073908Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8838278d5d134ca219edb457e7749500a22cc1ccd0695e4995e6fa55293cd05e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:23:59.333240 containerd[1472]: time="2025-02-13T19:23:59.333221668Z" level=info msg="RemovePodSandbox \"8838278d5d134ca219edb457e7749500a22cc1ccd0695e4995e6fa55293cd05e\" returns successfully"
Feb 13 19:23:59.333827 containerd[1472]: time="2025-02-13T19:23:59.333752188Z" level=info msg="StopPodSandbox for \"8eac2bdb041882bb940afbf935efc0cca1713f5e69cccf9d0bf977711d5c0c18\""
Feb 13 19:23:59.333966 containerd[1472]: time="2025-02-13T19:23:59.333840508Z" level=info msg="TearDown network for sandbox \"8eac2bdb041882bb940afbf935efc0cca1713f5e69cccf9d0bf977711d5c0c18\" successfully"
Feb 13 19:23:59.333966 containerd[1472]: time="2025-02-13T19:23:59.333853748Z" level=info msg="StopPodSandbox for \"8eac2bdb041882bb940afbf935efc0cca1713f5e69cccf9d0bf977711d5c0c18\" returns successfully"
Feb 13 19:23:59.334282 containerd[1472]: time="2025-02-13T19:23:59.334247228Z" level=info msg="RemovePodSandbox for \"8eac2bdb041882bb940afbf935efc0cca1713f5e69cccf9d0bf977711d5c0c18\""
Feb 13 19:23:59.334487 containerd[1472]: time="2025-02-13T19:23:59.334468388Z" level=info msg="Forcibly stopping sandbox \"8eac2bdb041882bb940afbf935efc0cca1713f5e69cccf9d0bf977711d5c0c18\""
Feb 13 19:23:59.334647 containerd[1472]: time="2025-02-13T19:23:59.334629828Z" level=info msg="TearDown network for sandbox \"8eac2bdb041882bb940afbf935efc0cca1713f5e69cccf9d0bf977711d5c0c18\" successfully"
Feb 13 19:23:59.337550 containerd[1472]: time="2025-02-13T19:23:59.337417549Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8eac2bdb041882bb940afbf935efc0cca1713f5e69cccf9d0bf977711d5c0c18\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:23:59.337550 containerd[1472]: time="2025-02-13T19:23:59.337469069Z" level=info msg="RemovePodSandbox \"8eac2bdb041882bb940afbf935efc0cca1713f5e69cccf9d0bf977711d5c0c18\" returns successfully"
Feb 13 19:23:59.338322 containerd[1472]: time="2025-02-13T19:23:59.338018749Z" level=info msg="StopPodSandbox for \"9238bb01d809702cf71989d83c983b9c52956861a8e39fcabd6dd1ed7ee329dd\""
Feb 13 19:23:59.338322 containerd[1472]: time="2025-02-13T19:23:59.338176029Z" level=info msg="TearDown network for sandbox \"9238bb01d809702cf71989d83c983b9c52956861a8e39fcabd6dd1ed7ee329dd\" successfully"
Feb 13 19:23:59.338322 containerd[1472]: time="2025-02-13T19:23:59.338251949Z" level=info msg="StopPodSandbox for \"9238bb01d809702cf71989d83c983b9c52956861a8e39fcabd6dd1ed7ee329dd\" returns successfully"
Feb 13 19:23:59.338668 containerd[1472]: time="2025-02-13T19:23:59.338627229Z" level=info msg="RemovePodSandbox for \"9238bb01d809702cf71989d83c983b9c52956861a8e39fcabd6dd1ed7ee329dd\""
Feb 13 19:23:59.338843 containerd[1472]: time="2025-02-13T19:23:59.338651549Z" level=info msg="Forcibly stopping sandbox \"9238bb01d809702cf71989d83c983b9c52956861a8e39fcabd6dd1ed7ee329dd\""
Feb 13 19:23:59.338843 containerd[1472]: time="2025-02-13T19:23:59.338802909Z" level=info msg="TearDown network for sandbox \"9238bb01d809702cf71989d83c983b9c52956861a8e39fcabd6dd1ed7ee329dd\" successfully"
Feb 13 19:23:59.341517 containerd[1472]: time="2025-02-13T19:23:59.341399270Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9238bb01d809702cf71989d83c983b9c52956861a8e39fcabd6dd1ed7ee329dd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:23:59.341517 containerd[1472]: time="2025-02-13T19:23:59.341447590Z" level=info msg="RemovePodSandbox \"9238bb01d809702cf71989d83c983b9c52956861a8e39fcabd6dd1ed7ee329dd\" returns successfully"
Feb 13 19:23:59.341963 containerd[1472]: time="2025-02-13T19:23:59.341791270Z" level=info msg="StopPodSandbox for \"1a65ecae1d3e89163ada04189f06c243b3b5a17116154cd6cbede404e3524543\""
Feb 13 19:23:59.341963 containerd[1472]: time="2025-02-13T19:23:59.341879190Z" level=info msg="TearDown network for sandbox \"1a65ecae1d3e89163ada04189f06c243b3b5a17116154cd6cbede404e3524543\" successfully"
Feb 13 19:23:59.341963 containerd[1472]: time="2025-02-13T19:23:59.341889230Z" level=info msg="StopPodSandbox for \"1a65ecae1d3e89163ada04189f06c243b3b5a17116154cd6cbede404e3524543\" returns successfully"
Feb 13 19:23:59.342216 containerd[1472]: time="2025-02-13T19:23:59.342158150Z" level=info msg="RemovePodSandbox for \"1a65ecae1d3e89163ada04189f06c243b3b5a17116154cd6cbede404e3524543\""
Feb 13 19:23:59.342216 containerd[1472]: time="2025-02-13T19:23:59.342185390Z" level=info msg="Forcibly stopping sandbox \"1a65ecae1d3e89163ada04189f06c243b3b5a17116154cd6cbede404e3524543\""
Feb 13 19:23:59.342321 containerd[1472]: time="2025-02-13T19:23:59.342257270Z" level=info msg="TearDown network for sandbox \"1a65ecae1d3e89163ada04189f06c243b3b5a17116154cd6cbede404e3524543\" successfully"
Feb 13 19:23:59.344561 containerd[1472]: time="2025-02-13T19:23:59.344530070Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1a65ecae1d3e89163ada04189f06c243b3b5a17116154cd6cbede404e3524543\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:23:59.344661 containerd[1472]: time="2025-02-13T19:23:59.344581390Z" level=info msg="RemovePodSandbox \"1a65ecae1d3e89163ada04189f06c243b3b5a17116154cd6cbede404e3524543\" returns successfully"
Feb 13 19:23:59.345128 containerd[1472]: time="2025-02-13T19:23:59.344953230Z" level=info msg="StopPodSandbox for \"5c69bd3f5517904d9d167931362b4913dc35adc9a2c26a7101f072289741fcfe\""
Feb 13 19:23:59.345128 containerd[1472]: time="2025-02-13T19:23:59.345044070Z" level=info msg="TearDown network for sandbox \"5c69bd3f5517904d9d167931362b4913dc35adc9a2c26a7101f072289741fcfe\" successfully"
Feb 13 19:23:59.345128 containerd[1472]: time="2025-02-13T19:23:59.345053230Z" level=info msg="StopPodSandbox for \"5c69bd3f5517904d9d167931362b4913dc35adc9a2c26a7101f072289741fcfe\" returns successfully"
Feb 13 19:23:59.345583 containerd[1472]: time="2025-02-13T19:23:59.345410430Z" level=info msg="RemovePodSandbox for \"5c69bd3f5517904d9d167931362b4913dc35adc9a2c26a7101f072289741fcfe\""
Feb 13 19:23:59.345583 containerd[1472]: time="2025-02-13T19:23:59.345436750Z" level=info msg="Forcibly stopping sandbox \"5c69bd3f5517904d9d167931362b4913dc35adc9a2c26a7101f072289741fcfe\""
Feb 13 19:23:59.346630 containerd[1472]: time="2025-02-13T19:23:59.345563670Z" level=info msg="TearDown network for sandbox \"5c69bd3f5517904d9d167931362b4913dc35adc9a2c26a7101f072289741fcfe\" successfully"
Feb 13 19:23:59.348163 containerd[1472]: time="2025-02-13T19:23:59.348098271Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5c69bd3f5517904d9d167931362b4913dc35adc9a2c26a7101f072289741fcfe\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:23:59.348163 containerd[1472]: time="2025-02-13T19:23:59.348157191Z" level=info msg="RemovePodSandbox \"5c69bd3f5517904d9d167931362b4913dc35adc9a2c26a7101f072289741fcfe\" returns successfully"
Feb 13 19:23:59.348499 containerd[1472]: time="2025-02-13T19:23:59.348479751Z" level=info msg="StopPodSandbox for \"48811930cf72c6624d608457afdcfffb68cadf11dbe5cbc1d4081b85d2fea001\""
Feb 13 19:23:59.348570 containerd[1472]: time="2025-02-13T19:23:59.348555591Z" level=info msg="TearDown network for sandbox \"48811930cf72c6624d608457afdcfffb68cadf11dbe5cbc1d4081b85d2fea001\" successfully"
Feb 13 19:23:59.348606 containerd[1472]: time="2025-02-13T19:23:59.348569311Z" level=info msg="StopPodSandbox for \"48811930cf72c6624d608457afdcfffb68cadf11dbe5cbc1d4081b85d2fea001\" returns successfully"
Feb 13 19:23:59.348826 containerd[1472]: time="2025-02-13T19:23:59.348801031Z" level=info msg="RemovePodSandbox for \"48811930cf72c6624d608457afdcfffb68cadf11dbe5cbc1d4081b85d2fea001\""
Feb 13 19:23:59.349610 containerd[1472]: time="2025-02-13T19:23:59.348980511Z" level=info msg="Forcibly stopping sandbox \"48811930cf72c6624d608457afdcfffb68cadf11dbe5cbc1d4081b85d2fea001\""
Feb 13 19:23:59.349610 containerd[1472]: time="2025-02-13T19:23:59.349053751Z" level=info msg="TearDown network for sandbox \"48811930cf72c6624d608457afdcfffb68cadf11dbe5cbc1d4081b85d2fea001\" successfully"
Feb 13 19:23:59.351402 containerd[1472]: time="2025-02-13T19:23:59.351363071Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"48811930cf72c6624d608457afdcfffb68cadf11dbe5cbc1d4081b85d2fea001\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:23:59.351457 containerd[1472]: time="2025-02-13T19:23:59.351421391Z" level=info msg="RemovePodSandbox \"48811930cf72c6624d608457afdcfffb68cadf11dbe5cbc1d4081b85d2fea001\" returns successfully"
Feb 13 19:23:59.351936 containerd[1472]: time="2025-02-13T19:23:59.351780551Z" level=info msg="StopPodSandbox for \"25cadf9b39de722cdb51a441f232474c54a2727e8241363fb7678024b0cc2939\""
Feb 13 19:23:59.351936 containerd[1472]: time="2025-02-13T19:23:59.351871231Z" level=info msg="TearDown network for sandbox \"25cadf9b39de722cdb51a441f232474c54a2727e8241363fb7678024b0cc2939\" successfully"
Feb 13 19:23:59.351936 containerd[1472]: time="2025-02-13T19:23:59.351881311Z" level=info msg="StopPodSandbox for \"25cadf9b39de722cdb51a441f232474c54a2727e8241363fb7678024b0cc2939\" returns successfully"
Feb 13 19:23:59.352206 containerd[1472]: time="2025-02-13T19:23:59.352180471Z" level=info msg="RemovePodSandbox for \"25cadf9b39de722cdb51a441f232474c54a2727e8241363fb7678024b0cc2939\""
Feb 13 19:23:59.352263 containerd[1472]: time="2025-02-13T19:23:59.352209751Z" level=info msg="Forcibly stopping sandbox \"25cadf9b39de722cdb51a441f232474c54a2727e8241363fb7678024b0cc2939\""
Feb 13 19:23:59.352299 containerd[1472]: time="2025-02-13T19:23:59.352283991Z" level=info msg="TearDown network for sandbox \"25cadf9b39de722cdb51a441f232474c54a2727e8241363fb7678024b0cc2939\" successfully"
Feb 13 19:23:59.354524 containerd[1472]: time="2025-02-13T19:23:59.354494352Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"25cadf9b39de722cdb51a441f232474c54a2727e8241363fb7678024b0cc2939\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:23:59.354524 containerd[1472]: time="2025-02-13T19:23:59.354545352Z" level=info msg="RemovePodSandbox \"25cadf9b39de722cdb51a441f232474c54a2727e8241363fb7678024b0cc2939\" returns successfully"
Feb 13 19:23:59.354972 containerd[1472]: time="2025-02-13T19:23:59.354944152Z" level=info msg="StopPodSandbox for \"7a00c42bb521f7bef7a32da102cdf4d4a156bae2448cfacf8ccc9caa13e70634\""
Feb 13 19:23:59.355050 containerd[1472]: time="2025-02-13T19:23:59.355033312Z" level=info msg="TearDown network for sandbox \"7a00c42bb521f7bef7a32da102cdf4d4a156bae2448cfacf8ccc9caa13e70634\" successfully"
Feb 13 19:23:59.355050 containerd[1472]: time="2025-02-13T19:23:59.355047192Z" level=info msg="StopPodSandbox for \"7a00c42bb521f7bef7a32da102cdf4d4a156bae2448cfacf8ccc9caa13e70634\" returns successfully"
Feb 13 19:23:59.355675 containerd[1472]: time="2025-02-13T19:23:59.355305752Z" level=info msg="RemovePodSandbox for \"7a00c42bb521f7bef7a32da102cdf4d4a156bae2448cfacf8ccc9caa13e70634\""
Feb 13 19:23:59.355675 containerd[1472]: time="2025-02-13T19:23:59.355327712Z" level=info msg="Forcibly stopping sandbox \"7a00c42bb521f7bef7a32da102cdf4d4a156bae2448cfacf8ccc9caa13e70634\""
Feb 13 19:23:59.355675 containerd[1472]: time="2025-02-13T19:23:59.355387512Z" level=info msg="TearDown network for sandbox \"7a00c42bb521f7bef7a32da102cdf4d4a156bae2448cfacf8ccc9caa13e70634\" successfully"
Feb 13 19:23:59.358068 containerd[1472]: time="2025-02-13T19:23:59.358012792Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7a00c42bb521f7bef7a32da102cdf4d4a156bae2448cfacf8ccc9caa13e70634\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:23:59.358249 containerd[1472]: time="2025-02-13T19:23:59.358228712Z" level=info msg="RemovePodSandbox \"7a00c42bb521f7bef7a32da102cdf4d4a156bae2448cfacf8ccc9caa13e70634\" returns successfully"
Feb 13 19:23:59.358628 containerd[1472]: time="2025-02-13T19:23:59.358603512Z" level=info msg="StopPodSandbox for \"096da91385a3c8839c09bc5456755139dacce2271f80a0a43253c557ac7e758d\""
Feb 13 19:23:59.358928 containerd[1472]: time="2025-02-13T19:23:59.358826672Z" level=info msg="TearDown network for sandbox \"096da91385a3c8839c09bc5456755139dacce2271f80a0a43253c557ac7e758d\" successfully"
Feb 13 19:23:59.358928 containerd[1472]: time="2025-02-13T19:23:59.358859392Z" level=info msg="StopPodSandbox for \"096da91385a3c8839c09bc5456755139dacce2271f80a0a43253c557ac7e758d\" returns successfully"
Feb 13 19:23:59.359215 containerd[1472]: time="2025-02-13T19:23:59.359188753Z" level=info msg="RemovePodSandbox for \"096da91385a3c8839c09bc5456755139dacce2271f80a0a43253c557ac7e758d\""
Feb 13 19:23:59.359272 containerd[1472]: time="2025-02-13T19:23:59.359220433Z" level=info msg="Forcibly stopping sandbox \"096da91385a3c8839c09bc5456755139dacce2271f80a0a43253c557ac7e758d\""
Feb 13 19:23:59.359343 containerd[1472]: time="2025-02-13T19:23:59.359298953Z" level=info msg="TearDown network for sandbox \"096da91385a3c8839c09bc5456755139dacce2271f80a0a43253c557ac7e758d\" successfully"
Feb 13 19:23:59.361723 containerd[1472]: time="2025-02-13T19:23:59.361685833Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"096da91385a3c8839c09bc5456755139dacce2271f80a0a43253c557ac7e758d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:23:59.361783 containerd[1472]: time="2025-02-13T19:23:59.361745593Z" level=info msg="RemovePodSandbox \"096da91385a3c8839c09bc5456755139dacce2271f80a0a43253c557ac7e758d\" returns successfully"
Feb 13 19:23:59.362203 containerd[1472]: time="2025-02-13T19:23:59.362031273Z" level=info msg="StopPodSandbox for \"d89d3553643149b3994db4a0653fdbd12da912d00bb5ab4315973581b6b660b5\""
Feb 13 19:23:59.362203 containerd[1472]: time="2025-02-13T19:23:59.362124953Z" level=info msg="TearDown network for sandbox \"d89d3553643149b3994db4a0653fdbd12da912d00bb5ab4315973581b6b660b5\" successfully"
Feb 13 19:23:59.362203 containerd[1472]: time="2025-02-13T19:23:59.362136233Z" level=info msg="StopPodSandbox for \"d89d3553643149b3994db4a0653fdbd12da912d00bb5ab4315973581b6b660b5\" returns successfully"
Feb 13 19:23:59.363723 containerd[1472]: time="2025-02-13T19:23:59.362500553Z" level=info msg="RemovePodSandbox for \"d89d3553643149b3994db4a0653fdbd12da912d00bb5ab4315973581b6b660b5\""
Feb 13 19:23:59.363723 containerd[1472]: time="2025-02-13T19:23:59.362523593Z" level=info msg="Forcibly stopping sandbox \"d89d3553643149b3994db4a0653fdbd12da912d00bb5ab4315973581b6b660b5\""
Feb 13 19:23:59.363723 containerd[1472]: time="2025-02-13T19:23:59.362608753Z" level=info msg="TearDown network for sandbox \"d89d3553643149b3994db4a0653fdbd12da912d00bb5ab4315973581b6b660b5\" successfully"
Feb 13 19:23:59.365239 containerd[1472]: time="2025-02-13T19:23:59.365189994Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d89d3553643149b3994db4a0653fdbd12da912d00bb5ab4315973581b6b660b5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:23:59.365405 containerd[1472]: time="2025-02-13T19:23:59.365352794Z" level=info msg="RemovePodSandbox \"d89d3553643149b3994db4a0653fdbd12da912d00bb5ab4315973581b6b660b5\" returns successfully"
Feb 13 19:23:59.365842 containerd[1472]: time="2025-02-13T19:23:59.365814314Z" level=info msg="StopPodSandbox for \"d6250d08a5ceb4ef45dd21cda6947799ccd29ddbdf4b402d69923e57de5f0de1\""
Feb 13 19:23:59.365942 containerd[1472]: time="2025-02-13T19:23:59.365908994Z" level=info msg="TearDown network for sandbox \"d6250d08a5ceb4ef45dd21cda6947799ccd29ddbdf4b402d69923e57de5f0de1\" successfully"
Feb 13 19:23:59.365942 containerd[1472]: time="2025-02-13T19:23:59.365919994Z" level=info msg="StopPodSandbox for \"d6250d08a5ceb4ef45dd21cda6947799ccd29ddbdf4b402d69923e57de5f0de1\" returns successfully"
Feb 13 19:23:59.366143 containerd[1472]: time="2025-02-13T19:23:59.366121354Z" level=info msg="RemovePodSandbox for \"d6250d08a5ceb4ef45dd21cda6947799ccd29ddbdf4b402d69923e57de5f0de1\""
Feb 13 19:23:59.366189 containerd[1472]: time="2025-02-13T19:23:59.366146954Z" level=info msg="Forcibly stopping sandbox \"d6250d08a5ceb4ef45dd21cda6947799ccd29ddbdf4b402d69923e57de5f0de1\""
Feb 13 19:23:59.366224 containerd[1472]: time="2025-02-13T19:23:59.366206114Z" level=info msg="TearDown network for sandbox \"d6250d08a5ceb4ef45dd21cda6947799ccd29ddbdf4b402d69923e57de5f0de1\" successfully"
Feb 13 19:23:59.368645 containerd[1472]: time="2025-02-13T19:23:59.368609274Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d6250d08a5ceb4ef45dd21cda6947799ccd29ddbdf4b402d69923e57de5f0de1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:23:59.368849 containerd[1472]: time="2025-02-13T19:23:59.368662194Z" level=info msg="RemovePodSandbox \"d6250d08a5ceb4ef45dd21cda6947799ccd29ddbdf4b402d69923e57de5f0de1\" returns successfully"
Feb 13 19:23:59.369154 containerd[1472]: time="2025-02-13T19:23:59.369018554Z" level=info msg="StopPodSandbox for \"cc7af871476d637ce9bba215c4499eefedcfe396348ddcced2e24cbaf8c8b503\""
Feb 13 19:23:59.369154 containerd[1472]: time="2025-02-13T19:23:59.369118714Z" level=info msg="TearDown network for sandbox \"cc7af871476d637ce9bba215c4499eefedcfe396348ddcced2e24cbaf8c8b503\" successfully"
Feb 13 19:23:59.369154 containerd[1472]: time="2025-02-13T19:23:59.369129434Z" level=info msg="StopPodSandbox for \"cc7af871476d637ce9bba215c4499eefedcfe396348ddcced2e24cbaf8c8b503\" returns successfully"
Feb 13 19:23:59.369477 containerd[1472]: time="2025-02-13T19:23:59.369417554Z" level=info msg="RemovePodSandbox for \"cc7af871476d637ce9bba215c4499eefedcfe396348ddcced2e24cbaf8c8b503\""
Feb 13 19:23:59.369477 containerd[1472]: time="2025-02-13T19:23:59.369445314Z" level=info msg="Forcibly stopping sandbox \"cc7af871476d637ce9bba215c4499eefedcfe396348ddcced2e24cbaf8c8b503\""
Feb 13 19:23:59.369567 containerd[1472]: time="2025-02-13T19:23:59.369507954Z" level=info msg="TearDown network for sandbox \"cc7af871476d637ce9bba215c4499eefedcfe396348ddcced2e24cbaf8c8b503\" successfully"
Feb 13 19:23:59.376899 containerd[1472]: time="2025-02-13T19:23:59.376756915Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cc7af871476d637ce9bba215c4499eefedcfe396348ddcced2e24cbaf8c8b503\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:23:59.376899 containerd[1472]: time="2025-02-13T19:23:59.376818395Z" level=info msg="RemovePodSandbox \"cc7af871476d637ce9bba215c4499eefedcfe396348ddcced2e24cbaf8c8b503\" returns successfully"
Feb 13 19:23:59.377243 containerd[1472]: time="2025-02-13T19:23:59.377195436Z" level=info msg="StopPodSandbox for \"50fc7b5a1212a7f5c0f58d01289dd6423491fe5b48f97ad072a542ba7bf3a842\""
Feb 13 19:23:59.377321 containerd[1472]: time="2025-02-13T19:23:59.377298636Z" level=info msg="TearDown network for sandbox \"50fc7b5a1212a7f5c0f58d01289dd6423491fe5b48f97ad072a542ba7bf3a842\" successfully"
Feb 13 19:23:59.377321 containerd[1472]: time="2025-02-13T19:23:59.377316436Z" level=info msg="StopPodSandbox for \"50fc7b5a1212a7f5c0f58d01289dd6423491fe5b48f97ad072a542ba7bf3a842\" returns successfully"
Feb 13 19:23:59.378882 containerd[1472]: time="2025-02-13T19:23:59.377611636Z" level=info msg="RemovePodSandbox for \"50fc7b5a1212a7f5c0f58d01289dd6423491fe5b48f97ad072a542ba7bf3a842\""
Feb 13 19:23:59.378882 containerd[1472]: time="2025-02-13T19:23:59.377641516Z" level=info msg="Forcibly stopping sandbox \"50fc7b5a1212a7f5c0f58d01289dd6423491fe5b48f97ad072a542ba7bf3a842\""
Feb 13 19:23:59.378882 containerd[1472]: time="2025-02-13T19:23:59.377708156Z" level=info msg="TearDown network for sandbox \"50fc7b5a1212a7f5c0f58d01289dd6423491fe5b48f97ad072a542ba7bf3a842\" successfully"
Feb 13 19:23:59.380209 containerd[1472]: time="2025-02-13T19:23:59.380170396Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"50fc7b5a1212a7f5c0f58d01289dd6423491fe5b48f97ad072a542ba7bf3a842\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:23:59.380346 containerd[1472]: time="2025-02-13T19:23:59.380329476Z" level=info msg="RemovePodSandbox \"50fc7b5a1212a7f5c0f58d01289dd6423491fe5b48f97ad072a542ba7bf3a842\" returns successfully"
Feb 13 19:23:59.380848 containerd[1472]: time="2025-02-13T19:23:59.380821316Z" level=info msg="StopPodSandbox for \"b9dc128c66258cebc145fb5927188ffde0a7a2ce1077cde74816d4066ba9c705\""
Feb 13 19:23:59.380943 containerd[1472]: time="2025-02-13T19:23:59.380926636Z" level=info msg="TearDown network for sandbox \"b9dc128c66258cebc145fb5927188ffde0a7a2ce1077cde74816d4066ba9c705\" successfully"
Feb 13 19:23:59.380972 containerd[1472]: time="2025-02-13T19:23:59.380942236Z" level=info msg="StopPodSandbox for \"b9dc128c66258cebc145fb5927188ffde0a7a2ce1077cde74816d4066ba9c705\" returns successfully"
Feb 13 19:23:59.381433 containerd[1472]: time="2025-02-13T19:23:59.381410836Z" level=info msg="RemovePodSandbox for \"b9dc128c66258cebc145fb5927188ffde0a7a2ce1077cde74816d4066ba9c705\""
Feb 13 19:23:59.381472 containerd[1472]: time="2025-02-13T19:23:59.381436396Z" level=info msg="Forcibly stopping sandbox \"b9dc128c66258cebc145fb5927188ffde0a7a2ce1077cde74816d4066ba9c705\""
Feb 13 19:23:59.381516 containerd[1472]: time="2025-02-13T19:23:59.381501036Z" level=info msg="TearDown network for sandbox \"b9dc128c66258cebc145fb5927188ffde0a7a2ce1077cde74816d4066ba9c705\" successfully"
Feb 13 19:23:59.387943 containerd[1472]: time="2025-02-13T19:23:59.387899997Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b9dc128c66258cebc145fb5927188ffde0a7a2ce1077cde74816d4066ba9c705\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:23:59.388278 containerd[1472]: time="2025-02-13T19:23:59.388083677Z" level=info msg="RemovePodSandbox \"b9dc128c66258cebc145fb5927188ffde0a7a2ce1077cde74816d4066ba9c705\" returns successfully"
Feb 13 19:23:59.388542 containerd[1472]: time="2025-02-13T19:23:59.388517837Z" level=info msg="StopPodSandbox for \"ba474e5db3af83c6e13348802c48deb52dcfa25885a7fa5ba47c3f19ac08cda7\""
Feb 13 19:23:59.388656 containerd[1472]: time="2025-02-13T19:23:59.388638277Z" level=info msg="TearDown network for sandbox \"ba474e5db3af83c6e13348802c48deb52dcfa25885a7fa5ba47c3f19ac08cda7\" successfully"
Feb 13 19:23:59.388656 containerd[1472]: time="2025-02-13T19:23:59.388654717Z" level=info msg="StopPodSandbox for \"ba474e5db3af83c6e13348802c48deb52dcfa25885a7fa5ba47c3f19ac08cda7\" returns successfully"
Feb 13 19:23:59.389045 containerd[1472]: time="2025-02-13T19:23:59.389023798Z" level=info msg="RemovePodSandbox for \"ba474e5db3af83c6e13348802c48deb52dcfa25885a7fa5ba47c3f19ac08cda7\""
Feb 13 19:23:59.390329 containerd[1472]: time="2025-02-13T19:23:59.389121038Z" level=info msg="Forcibly stopping sandbox \"ba474e5db3af83c6e13348802c48deb52dcfa25885a7fa5ba47c3f19ac08cda7\""
Feb 13 19:23:59.390329 containerd[1472]: time="2025-02-13T19:23:59.389192398Z" level=info msg="TearDown network for sandbox \"ba474e5db3af83c6e13348802c48deb52dcfa25885a7fa5ba47c3f19ac08cda7\" successfully"
Feb 13 19:23:59.391628 containerd[1472]: time="2025-02-13T19:23:59.391570598Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ba474e5db3af83c6e13348802c48deb52dcfa25885a7fa5ba47c3f19ac08cda7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:23:59.391743 containerd[1472]: time="2025-02-13T19:23:59.391725198Z" level=info msg="RemovePodSandbox \"ba474e5db3af83c6e13348802c48deb52dcfa25885a7fa5ba47c3f19ac08cda7\" returns successfully"
Feb 13 19:23:59.392136 containerd[1472]: time="2025-02-13T19:23:59.392108758Z" level=info msg="StopPodSandbox for \"cb7be6a4712632dbe090eebc2eeaede27df77149aef06ceca10bc6d335e26b9b\""
Feb 13 19:23:59.392273 containerd[1472]: time="2025-02-13T19:23:59.392257758Z" level=info msg="TearDown network for sandbox \"cb7be6a4712632dbe090eebc2eeaede27df77149aef06ceca10bc6d335e26b9b\" successfully"
Feb 13 19:23:59.392395 containerd[1472]: time="2025-02-13T19:23:59.392383038Z" level=info msg="StopPodSandbox for \"cb7be6a4712632dbe090eebc2eeaede27df77149aef06ceca10bc6d335e26b9b\" returns successfully"
Feb 13 19:23:59.392929 containerd[1472]: time="2025-02-13T19:23:59.392875598Z" level=info msg="RemovePodSandbox for \"cb7be6a4712632dbe090eebc2eeaede27df77149aef06ceca10bc6d335e26b9b\""
Feb 13 19:23:59.392929 containerd[1472]: time="2025-02-13T19:23:59.392916718Z" level=info msg="Forcibly stopping sandbox \"cb7be6a4712632dbe090eebc2eeaede27df77149aef06ceca10bc6d335e26b9b\""
Feb 13 19:23:59.393023 containerd[1472]: time="2025-02-13T19:23:59.392977718Z" level=info msg="TearDown network for sandbox \"cb7be6a4712632dbe090eebc2eeaede27df77149aef06ceca10bc6d335e26b9b\" successfully"
Feb 13 19:23:59.402632 containerd[1472]: time="2025-02-13T19:23:59.402551560Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cb7be6a4712632dbe090eebc2eeaede27df77149aef06ceca10bc6d335e26b9b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:23:59.402823 containerd[1472]: time="2025-02-13T19:23:59.402803240Z" level=info msg="RemovePodSandbox \"cb7be6a4712632dbe090eebc2eeaede27df77149aef06ceca10bc6d335e26b9b\" returns successfully"
Feb 13 19:23:59.403406 containerd[1472]: time="2025-02-13T19:23:59.403358840Z" level=info msg="StopPodSandbox for \"d47414dc28e478d0c0959c2bb8276a7437c645c4dbfafff1a1efab07a3da7b2f\""
Feb 13 19:23:59.403691 containerd[1472]: time="2025-02-13T19:23:59.403670200Z" level=info msg="TearDown network for sandbox \"d47414dc28e478d0c0959c2bb8276a7437c645c4dbfafff1a1efab07a3da7b2f\" successfully"
Feb 13 19:23:59.403887 containerd[1472]: time="2025-02-13T19:23:59.403860640Z" level=info msg="StopPodSandbox for \"d47414dc28e478d0c0959c2bb8276a7437c645c4dbfafff1a1efab07a3da7b2f\" returns successfully"
Feb 13 19:23:59.404379 containerd[1472]: time="2025-02-13T19:23:59.404330400Z" level=info msg="RemovePodSandbox for \"d47414dc28e478d0c0959c2bb8276a7437c645c4dbfafff1a1efab07a3da7b2f\""
Feb 13 19:23:59.404493 containerd[1472]: time="2025-02-13T19:23:59.404480120Z" level=info msg="Forcibly stopping sandbox \"d47414dc28e478d0c0959c2bb8276a7437c645c4dbfafff1a1efab07a3da7b2f\""
Feb 13 19:23:59.405737 containerd[1472]: time="2025-02-13T19:23:59.404730920Z" level=info msg="TearDown network for sandbox \"d47414dc28e478d0c0959c2bb8276a7437c645c4dbfafff1a1efab07a3da7b2f\" successfully"
Feb 13 19:23:59.417652 containerd[1472]: time="2025-02-13T19:23:59.417608722Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d47414dc28e478d0c0959c2bb8276a7437c645c4dbfafff1a1efab07a3da7b2f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:23:59.417833 containerd[1472]: time="2025-02-13T19:23:59.417814402Z" level=info msg="RemovePodSandbox \"d47414dc28e478d0c0959c2bb8276a7437c645c4dbfafff1a1efab07a3da7b2f\" returns successfully"
Feb 13 19:23:59.418395 containerd[1472]: time="2025-02-13T19:23:59.418370082Z" level=info msg="StopPodSandbox for \"9886a774b5ddfc176ce9f280222642978f1f162c12b106b23d0358bf9b5a6fe4\""
Feb 13 19:23:59.418663 containerd[1472]: time="2025-02-13T19:23:59.418640402Z" level=info msg="TearDown network for sandbox \"9886a774b5ddfc176ce9f280222642978f1f162c12b106b23d0358bf9b5a6fe4\" successfully"
Feb 13 19:23:59.418849 containerd[1472]: time="2025-02-13T19:23:59.418832803Z" level=info msg="StopPodSandbox for \"9886a774b5ddfc176ce9f280222642978f1f162c12b106b23d0358bf9b5a6fe4\" returns successfully"
Feb 13 19:23:59.419233 containerd[1472]: time="2025-02-13T19:23:59.419196003Z" level=info msg="RemovePodSandbox for \"9886a774b5ddfc176ce9f280222642978f1f162c12b106b23d0358bf9b5a6fe4\""
Feb 13 19:23:59.419233 containerd[1472]: time="2025-02-13T19:23:59.419232923Z" level=info msg="Forcibly stopping sandbox \"9886a774b5ddfc176ce9f280222642978f1f162c12b106b23d0358bf9b5a6fe4\""
Feb 13 19:23:59.419316 containerd[1472]: time="2025-02-13T19:23:59.419298363Z" level=info msg="TearDown network for sandbox \"9886a774b5ddfc176ce9f280222642978f1f162c12b106b23d0358bf9b5a6fe4\" successfully"
Feb 13 19:23:59.421731 containerd[1472]: time="2025-02-13T19:23:59.421692443Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9886a774b5ddfc176ce9f280222642978f1f162c12b106b23d0358bf9b5a6fe4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:23:59.421731 containerd[1472]: time="2025-02-13T19:23:59.421756963Z" level=info msg="RemovePodSandbox \"9886a774b5ddfc176ce9f280222642978f1f162c12b106b23d0358bf9b5a6fe4\" returns successfully"
Feb 13 19:23:59.422303 containerd[1472]: time="2025-02-13T19:23:59.422134483Z" level=info msg="StopPodSandbox for \"a4926452b9909d806af29b6189b194ae88d64402549efa6a8aa8157676ffeab6\""
Feb 13 19:23:59.422303 containerd[1472]: time="2025-02-13T19:23:59.422224963Z" level=info msg="TearDown network for sandbox \"a4926452b9909d806af29b6189b194ae88d64402549efa6a8aa8157676ffeab6\" successfully"
Feb 13 19:23:59.422303 containerd[1472]: time="2025-02-13T19:23:59.422235123Z" level=info msg="StopPodSandbox for \"a4926452b9909d806af29b6189b194ae88d64402549efa6a8aa8157676ffeab6\" returns successfully"
Feb 13 19:23:59.422905 containerd[1472]: time="2025-02-13T19:23:59.422756483Z" level=info msg="RemovePodSandbox for \"a4926452b9909d806af29b6189b194ae88d64402549efa6a8aa8157676ffeab6\""
Feb 13 19:23:59.422905 containerd[1472]: time="2025-02-13T19:23:59.422789043Z" level=info msg="Forcibly stopping sandbox \"a4926452b9909d806af29b6189b194ae88d64402549efa6a8aa8157676ffeab6\""
Feb 13 19:23:59.422905 containerd[1472]: time="2025-02-13T19:23:59.422853043Z" level=info msg="TearDown network for sandbox \"a4926452b9909d806af29b6189b194ae88d64402549efa6a8aa8157676ffeab6\" successfully"
Feb 13 19:23:59.425265 containerd[1472]: time="2025-02-13T19:23:59.425230124Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a4926452b9909d806af29b6189b194ae88d64402549efa6a8aa8157676ffeab6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:23:59.425340 containerd[1472]: time="2025-02-13T19:23:59.425291644Z" level=info msg="RemovePodSandbox \"a4926452b9909d806af29b6189b194ae88d64402549efa6a8aa8157676ffeab6\" returns successfully"
Feb 13 19:23:59.426063 containerd[1472]: time="2025-02-13T19:23:59.425631164Z" level=info msg="StopPodSandbox for \"de7676c8254aaae60409033dd601bfd2dfc40b59d6b06333583a5604aef4c02d\""
Feb 13 19:23:59.426063 containerd[1472]: time="2025-02-13T19:23:59.425716884Z" level=info msg="TearDown network for sandbox \"de7676c8254aaae60409033dd601bfd2dfc40b59d6b06333583a5604aef4c02d\" successfully"
Feb 13 19:23:59.426063 containerd[1472]: time="2025-02-13T19:23:59.425730284Z" level=info msg="StopPodSandbox for \"de7676c8254aaae60409033dd601bfd2dfc40b59d6b06333583a5604aef4c02d\" returns successfully"
Feb 13 19:23:59.426202 containerd[1472]: time="2025-02-13T19:23:59.426139484Z" level=info msg="RemovePodSandbox for \"de7676c8254aaae60409033dd601bfd2dfc40b59d6b06333583a5604aef4c02d\""
Feb 13 19:23:59.426202 containerd[1472]: time="2025-02-13T19:23:59.426169284Z" level=info msg="Forcibly stopping sandbox \"de7676c8254aaae60409033dd601bfd2dfc40b59d6b06333583a5604aef4c02d\""
Feb 13 19:23:59.426244 containerd[1472]: time="2025-02-13T19:23:59.426235924Z" level=info msg="TearDown network for sandbox \"de7676c8254aaae60409033dd601bfd2dfc40b59d6b06333583a5604aef4c02d\" successfully"
Feb 13 19:23:59.429207 containerd[1472]: time="2025-02-13T19:23:59.428678684Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"de7676c8254aaae60409033dd601bfd2dfc40b59d6b06333583a5604aef4c02d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:23:59.429207 containerd[1472]: time="2025-02-13T19:23:59.428737284Z" level=info msg="RemovePodSandbox \"de7676c8254aaae60409033dd601bfd2dfc40b59d6b06333583a5604aef4c02d\" returns successfully"
Feb 13 19:23:59.429207 containerd[1472]: time="2025-02-13T19:23:59.429049084Z" level=info msg="StopPodSandbox for \"e9ddff8643dc18c5cc1376aa87d108844c569928eb57728a5442cfa00994edcf\""
Feb 13 19:23:59.429207 containerd[1472]: time="2025-02-13T19:23:59.429153284Z" level=info msg="TearDown network for sandbox \"e9ddff8643dc18c5cc1376aa87d108844c569928eb57728a5442cfa00994edcf\" successfully"
Feb 13 19:23:59.429207 containerd[1472]: time="2025-02-13T19:23:59.429164204Z" level=info msg="StopPodSandbox for \"e9ddff8643dc18c5cc1376aa87d108844c569928eb57728a5442cfa00994edcf\" returns successfully"
Feb 13 19:23:59.429424 containerd[1472]: time="2025-02-13T19:23:59.429406964Z" level=info msg="RemovePodSandbox for \"e9ddff8643dc18c5cc1376aa87d108844c569928eb57728a5442cfa00994edcf\""
Feb 13 19:23:59.429424 containerd[1472]: time="2025-02-13T19:23:59.429425484Z" level=info msg="Forcibly stopping sandbox \"e9ddff8643dc18c5cc1376aa87d108844c569928eb57728a5442cfa00994edcf\""
Feb 13 19:23:59.429541 containerd[1472]: time="2025-02-13T19:23:59.429517324Z" level=info msg="TearDown network for sandbox \"e9ddff8643dc18c5cc1376aa87d108844c569928eb57728a5442cfa00994edcf\" successfully"
Feb 13 19:23:59.433878 containerd[1472]: time="2025-02-13T19:23:59.433678645Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e9ddff8643dc18c5cc1376aa87d108844c569928eb57728a5442cfa00994edcf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:23:59.433878 containerd[1472]: time="2025-02-13T19:23:59.433832605Z" level=info msg="RemovePodSandbox \"e9ddff8643dc18c5cc1376aa87d108844c569928eb57728a5442cfa00994edcf\" returns successfully"
Feb 13 19:23:59.434562 containerd[1472]: time="2025-02-13T19:23:59.434495685Z" level=info msg="StopPodSandbox for \"3f96dcbafa5ad411894d41a029bfeebfc46605011bf1ff99a6c49a5b1cc57236\""
Feb 13 19:23:59.434984 containerd[1472]: time="2025-02-13T19:23:59.434716805Z" level=info msg="TearDown network for sandbox \"3f96dcbafa5ad411894d41a029bfeebfc46605011bf1ff99a6c49a5b1cc57236\" successfully"
Feb 13 19:23:59.434984 containerd[1472]: time="2025-02-13T19:23:59.434732725Z" level=info msg="StopPodSandbox for \"3f96dcbafa5ad411894d41a029bfeebfc46605011bf1ff99a6c49a5b1cc57236\" returns successfully"
Feb 13 19:23:59.435194 containerd[1472]: time="2025-02-13T19:23:59.435170845Z" level=info msg="RemovePodSandbox for \"3f96dcbafa5ad411894d41a029bfeebfc46605011bf1ff99a6c49a5b1cc57236\""
Feb 13 19:23:59.435705 containerd[1472]: time="2025-02-13T19:23:59.435313725Z" level=info msg="Forcibly stopping sandbox \"3f96dcbafa5ad411894d41a029bfeebfc46605011bf1ff99a6c49a5b1cc57236\""
Feb 13 19:23:59.435705 containerd[1472]: time="2025-02-13T19:23:59.435395565Z" level=info msg="TearDown network for sandbox \"3f96dcbafa5ad411894d41a029bfeebfc46605011bf1ff99a6c49a5b1cc57236\" successfully"
Feb 13 19:23:59.438827 containerd[1472]: time="2025-02-13T19:23:59.438755246Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3f96dcbafa5ad411894d41a029bfeebfc46605011bf1ff99a6c49a5b1cc57236\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:23:59.439044 containerd[1472]: time="2025-02-13T19:23:59.439007286Z" level=info msg="RemovePodSandbox \"3f96dcbafa5ad411894d41a029bfeebfc46605011bf1ff99a6c49a5b1cc57236\" returns successfully" Feb 13 19:24:02.606140 systemd[1]: Started sshd@19-10.0.0.135:22-10.0.0.1:58116.service - OpenSSH per-connection server daemon (10.0.0.1:58116). Feb 13 19:24:02.650433 sshd[5821]: Accepted publickey for core from 10.0.0.1 port 58116 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:24:02.651654 sshd-session[5821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:24:02.659817 systemd-logind[1449]: New session 20 of user core. Feb 13 19:24:02.663789 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 19:24:02.825660 sshd[5823]: Connection closed by 10.0.0.1 port 58116 Feb 13 19:24:02.825797 sshd-session[5821]: pam_unix(sshd:session): session closed for user core Feb 13 19:24:02.828398 systemd[1]: sshd@19-10.0.0.135:22-10.0.0.1:58116.service: Deactivated successfully. Feb 13 19:24:02.830838 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 19:24:02.831676 systemd-logind[1449]: Session 20 logged out. Waiting for processes to exit. Feb 13 19:24:02.832534 systemd-logind[1449]: Removed session 20. Feb 13 19:24:07.836300 systemd[1]: Started sshd@20-10.0.0.135:22-10.0.0.1:58132.service - OpenSSH per-connection server daemon (10.0.0.1:58132). Feb 13 19:24:07.897391 sshd[5868]: Accepted publickey for core from 10.0.0.1 port 58132 ssh2: RSA SHA256:QXe3dvBtpmjdNzzmgM+v4loZfINSAkPIhXq9u0qbeYg Feb 13 19:24:07.899004 sshd-session[5868]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:24:07.903636 systemd-logind[1449]: New session 21 of user core. Feb 13 19:24:07.911465 systemd[1]: Started session-21.scope - Session 21 of User core. 
Feb 13 19:24:08.095011 sshd[5870]: Connection closed by 10.0.0.1 port 58132
Feb 13 19:24:08.095284 sshd-session[5868]: pam_unix(sshd:session): session closed for user core
Feb 13 19:24:08.097934 systemd[1]: sshd@20-10.0.0.135:22-10.0.0.1:58132.service: Deactivated successfully.
Feb 13 19:24:08.102317 systemd[1]: session-21.scope: Deactivated successfully.
Feb 13 19:24:08.104864 systemd-logind[1449]: Session 21 logged out. Waiting for processes to exit.
Feb 13 19:24:08.106294 systemd-logind[1449]: Removed session 21.