Feb 13 19:29:31.892836 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 17:41:03 -00 2025 Feb 13 19:29:31.892866 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=015d1d9e5e601f6a4e226c935072d3d0819e7eb2da20e68715973498f21aa3fe Feb 13 19:29:31.892881 kernel: BIOS-provided physical RAM map: Feb 13 19:29:31.892890 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Feb 13 19:29:31.892899 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Feb 13 19:29:31.892907 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Feb 13 19:29:31.892918 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Feb 13 19:29:31.892924 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Feb 13 19:29:31.892931 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable Feb 13 19:29:31.892937 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Feb 13 19:29:31.892944 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable Feb 13 19:29:31.892953 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Feb 13 19:29:31.892960 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable Feb 13 19:29:31.892966 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Feb 13 19:29:31.892974 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Feb 13 19:29:31.892981 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Feb 13 19:29:31.892991 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce91fff] usable Feb 13 19:29:31.892998 kernel: BIOS-e820: [mem 0x000000009ce92000-0x000000009ce95fff] reserved Feb 13 19:29:31.893004 kernel: BIOS-e820: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS Feb 13 19:29:31.893011 kernel: BIOS-e820: [mem 0x000000009ce98000-0x000000009cedbfff] usable Feb 13 19:29:31.893018 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Feb 13 19:29:31.893025 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Feb 13 19:29:31.893032 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Feb 13 19:29:31.893039 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Feb 13 19:29:31.893046 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Feb 13 19:29:31.893053 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Feb 13 19:29:31.893060 kernel: NX (Execute Disable) protection: active Feb 13 19:29:31.893099 kernel: APIC: Static calls initialized Feb 13 19:29:31.893106 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable Feb 13 19:29:31.893123 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable Feb 13 19:29:31.893130 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable Feb 13 19:29:31.893137 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable Feb 13 19:29:31.893144 kernel: extended physical RAM map: Feb 13 19:29:31.893151 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Feb 13 19:29:31.893158 kernel: reserve setup_data: [mem 
0x0000000000100000-0x00000000007fffff] usable Feb 13 19:29:31.893165 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Feb 13 19:29:31.893172 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Feb 13 19:29:31.893179 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Feb 13 19:29:31.893186 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable Feb 13 19:29:31.893196 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Feb 13 19:29:31.893207 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b314017] usable Feb 13 19:29:31.893214 kernel: reserve setup_data: [mem 0x000000009b314018-0x000000009b350e57] usable Feb 13 19:29:31.893222 kernel: reserve setup_data: [mem 0x000000009b350e58-0x000000009b351017] usable Feb 13 19:29:31.893229 kernel: reserve setup_data: [mem 0x000000009b351018-0x000000009b35ac57] usable Feb 13 19:29:31.893236 kernel: reserve setup_data: [mem 0x000000009b35ac58-0x000000009bd3efff] usable Feb 13 19:29:31.893245 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Feb 13 19:29:31.893253 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable Feb 13 19:29:31.893260 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Feb 13 19:29:31.893267 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Feb 13 19:29:31.893274 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Feb 13 19:29:31.893282 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce91fff] usable Feb 13 19:29:31.893289 kernel: reserve setup_data: [mem 0x000000009ce92000-0x000000009ce95fff] reserved Feb 13 19:29:31.893296 kernel: reserve setup_data: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS Feb 13 19:29:31.893303 kernel: reserve setup_data: [mem 0x000000009ce98000-0x000000009cedbfff] usable Feb 13 19:29:31.893313 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Feb 13 19:29:31.893320 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Feb 13 19:29:31.893327 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Feb 13 19:29:31.893334 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Feb 13 19:29:31.893342 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Feb 13 19:29:31.893349 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Feb 13 19:29:31.893356 kernel: efi: EFI v2.7 by EDK II Feb 13 19:29:31.893363 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9ba0d198 RNG=0x9cb73018 Feb 13 19:29:31.893371 kernel: random: crng init done Feb 13 19:29:31.893378 kernel: efi: Remove mem142: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map Feb 13 19:29:31.893385 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved Feb 13 19:29:31.893392 kernel: secureboot: Secure boot disabled Feb 13 19:29:31.893402 kernel: SMBIOS 2.8 present. 
Feb 13 19:29:31.893409 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Feb 13 19:29:31.893416 kernel: Hypervisor detected: KVM Feb 13 19:29:31.893423 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Feb 13 19:29:31.893431 kernel: kvm-clock: using sched offset of 2670431548 cycles Feb 13 19:29:31.893438 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Feb 13 19:29:31.893446 kernel: tsc: Detected 2794.748 MHz processor Feb 13 19:29:31.893454 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 13 19:29:31.893461 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 13 19:29:31.893469 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000 Feb 13 19:29:31.893479 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Feb 13 19:29:31.893486 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 13 19:29:31.893493 kernel: Using GB pages for direct mapping Feb 13 19:29:31.893501 kernel: ACPI: Early table checksum verification disabled Feb 13 19:29:31.893508 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Feb 13 19:29:31.893516 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Feb 13 19:29:31.893523 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:29:31.893531 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:29:31.893538 kernel: ACPI: FACS 0x000000009CBDD000 000040 Feb 13 19:29:31.893548 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:29:31.893556 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:29:31.893563 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:29:31.893570 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:29:31.893578 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Feb 13 19:29:31.893585 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Feb 13 19:29:31.893592 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7] Feb 13 19:29:31.893600 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Feb 13 19:29:31.893607 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Feb 13 19:29:31.893617 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Feb 13 19:29:31.893624 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Feb 13 19:29:31.893631 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Feb 13 19:29:31.893639 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Feb 13 19:29:31.893646 kernel: No NUMA configuration found Feb 13 19:29:31.893653 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff] Feb 13 19:29:31.893661 kernel: NODE_DATA(0) allocated [mem 0x9ce3a000-0x9ce3ffff] Feb 13 19:29:31.893668 kernel: Zone ranges: Feb 13 19:29:31.893675 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 13 19:29:31.893685 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff] Feb 13 19:29:31.893692 kernel: Normal empty Feb 13 19:29:31.893699 kernel: Movable zone start for each node Feb 13 19:29:31.893707 kernel: Early memory node ranges Feb 13 19:29:31.893714 kernel: node 0: [mem 
0x0000000000001000-0x000000000009ffff] Feb 13 19:29:31.893721 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Feb 13 19:29:31.893728 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Feb 13 19:29:31.893736 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff] Feb 13 19:29:31.893743 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff] Feb 13 19:29:31.893750 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff] Feb 13 19:29:31.893760 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce91fff] Feb 13 19:29:31.893767 kernel: node 0: [mem 0x000000009ce98000-0x000000009cedbfff] Feb 13 19:29:31.893774 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff] Feb 13 19:29:31.893781 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 13 19:29:31.893789 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Feb 13 19:29:31.893804 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Feb 13 19:29:31.893814 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 13 19:29:31.893821 kernel: On node 0, zone DMA: 239 pages in unavailable ranges Feb 13 19:29:31.893829 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges Feb 13 19:29:31.893836 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Feb 13 19:29:31.893844 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Feb 13 19:29:31.893852 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges Feb 13 19:29:31.893861 kernel: ACPI: PM-Timer IO Port: 0x608 Feb 13 19:29:31.893869 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Feb 13 19:29:31.893877 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Feb 13 19:29:31.893884 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Feb 13 19:29:31.893892 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Feb 13 19:29:31.893902 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 13 19:29:31.893910 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Feb 13 19:29:31.893917 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Feb 13 19:29:31.893925 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 13 19:29:31.893933 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Feb 13 19:29:31.893940 kernel: TSC deadline timer available Feb 13 19:29:31.893948 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Feb 13 19:29:31.893956 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Feb 13 19:29:31.893964 kernel: kvm-guest: KVM setup pv remote TLB flush Feb 13 19:29:31.893973 kernel: kvm-guest: setup PV sched yield Feb 13 19:29:31.893981 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices Feb 13 19:29:31.893989 kernel: Booting paravirtualized kernel on KVM Feb 13 19:29:31.893997 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 13 19:29:31.894004 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Feb 13 19:29:31.894012 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288 Feb 13 19:29:31.894020 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152 Feb 13 19:29:31.894027 kernel: pcpu-alloc: [0] 0 1 2 3 Feb 13 19:29:31.894035 kernel: kvm-guest: PV spinlocks enabled Feb 13 19:29:31.894045 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Feb 13 19:29:31.894054 kernel: Kernel command line: rootflags=rw 
mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=015d1d9e5e601f6a4e226c935072d3d0819e7eb2da20e68715973498f21aa3fe Feb 13 19:29:31.894062 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 13 19:29:31.894085 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 13 19:29:31.894093 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 19:29:31.894100 kernel: Fallback order for Node 0: 0 Feb 13 19:29:31.894115 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629460 Feb 13 19:29:31.894123 kernel: Policy zone: DMA32 Feb 13 19:29:31.894133 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 13 19:29:31.894141 kernel: Memory: 2387720K/2565800K available (14336K kernel code, 2301K rwdata, 22800K rodata, 43320K init, 1752K bss, 177824K reserved, 0K cma-reserved) Feb 13 19:29:31.894149 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Feb 13 19:29:31.894157 kernel: ftrace: allocating 37893 entries in 149 pages Feb 13 19:29:31.894165 kernel: ftrace: allocated 149 pages with 4 groups Feb 13 19:29:31.894173 kernel: Dynamic Preempt: voluntary Feb 13 19:29:31.894181 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 13 19:29:31.894189 kernel: rcu: RCU event tracing is enabled. Feb 13 19:29:31.894197 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Feb 13 19:29:31.894207 kernel: Trampoline variant of Tasks RCU enabled. Feb 13 19:29:31.894215 kernel: Rude variant of Tasks RCU enabled. Feb 13 19:29:31.894222 kernel: Tracing variant of Tasks RCU enabled. Feb 13 19:29:31.894230 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Feb 13 19:29:31.894238 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Feb 13 19:29:31.894246 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Feb 13 19:29:31.894254 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Feb 13 19:29:31.894261 kernel: Console: colour dummy device 80x25 Feb 13 19:29:31.894269 kernel: printk: console [ttyS0] enabled Feb 13 19:29:31.894279 kernel: ACPI: Core revision 20230628 Feb 13 19:29:31.894287 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Feb 13 19:29:31.894294 kernel: APIC: Switch to symmetric I/O mode setup Feb 13 19:29:31.894302 kernel: x2apic enabled Feb 13 19:29:31.894310 kernel: APIC: Switched APIC routing to: physical x2apic Feb 13 19:29:31.894317 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Feb 13 19:29:31.894325 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Feb 13 19:29:31.894333 kernel: kvm-guest: setup PV IPIs Feb 13 19:29:31.894341 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Feb 13 19:29:31.894350 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Feb 13 19:29:31.894358 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Feb 13 19:29:31.894366 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Feb 13 19:29:31.894373 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Feb 13 19:29:31.894381 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Feb 13 19:29:31.894389 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 13 19:29:31.894396 kernel: Spectre V2 : Mitigation: Retpolines Feb 13 19:29:31.894404 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 13 19:29:31.894412 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Feb 13 19:29:31.894422 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Feb 13 19:29:31.894429 kernel: RETBleed: Mitigation: untrained return thunk Feb 13 19:29:31.894437 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Feb 13 19:29:31.894445 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Feb 13 19:29:31.894453 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Feb 13 19:29:31.894461 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Feb 13 19:29:31.894469 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Feb 13 19:29:31.894477 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 13 19:29:31.894487 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 13 19:29:31.894494 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 13 19:29:31.894502 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 13 19:29:31.894510 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Feb 13 19:29:31.894518 kernel: Freeing SMP alternatives memory: 32K Feb 13 19:29:31.894525 kernel: pid_max: default: 32768 minimum: 301 Feb 13 19:29:31.894533 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Feb 13 19:29:31.894541 kernel: landlock: Up and running. Feb 13 19:29:31.894548 kernel: SELinux: Initializing. Feb 13 19:29:31.894558 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 19:29:31.894566 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 19:29:31.894574 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Feb 13 19:29:31.894581 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Feb 13 19:29:31.894589 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Feb 13 19:29:31.894597 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Feb 13 19:29:31.894605 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Feb 13 19:29:31.894613 kernel: ... version: 0 Feb 13 19:29:31.894620 kernel: ... bit width: 48 Feb 13 19:29:31.894630 kernel: ... generic registers: 6 Feb 13 19:29:31.894638 kernel: ... value mask: 0000ffffffffffff Feb 13 19:29:31.894645 kernel: ... max period: 00007fffffffffff Feb 13 19:29:31.894653 kernel: ... fixed-purpose events: 0 Feb 13 19:29:31.894661 kernel: ... 
event mask: 000000000000003f Feb 13 19:29:31.894668 kernel: signal: max sigframe size: 1776 Feb 13 19:29:31.894676 kernel: rcu: Hierarchical SRCU implementation. Feb 13 19:29:31.894684 kernel: rcu: Max phase no-delay instances is 400. Feb 13 19:29:31.894692 kernel: smp: Bringing up secondary CPUs ... Feb 13 19:29:31.894701 kernel: smpboot: x86: Booting SMP configuration: Feb 13 19:29:31.894709 kernel: .... node #0, CPUs: #1 #2 #3 Feb 13 19:29:31.894717 kernel: smp: Brought up 1 node, 4 CPUs Feb 13 19:29:31.894724 kernel: smpboot: Max logical packages: 1 Feb 13 19:29:31.894732 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Feb 13 19:29:31.894740 kernel: devtmpfs: initialized Feb 13 19:29:31.894747 kernel: x86/mm: Memory block size: 128MB Feb 13 19:29:31.894755 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Feb 13 19:29:31.894763 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Feb 13 19:29:31.894771 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes) Feb 13 19:29:31.894781 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Feb 13 19:29:31.894789 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce96000-0x9ce97fff] (8192 bytes) Feb 13 19:29:31.894797 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Feb 13 19:29:31.894804 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 13 19:29:31.894812 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Feb 13 19:29:31.894820 kernel: pinctrl core: initialized pinctrl subsystem Feb 13 19:29:31.894827 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 13 19:29:31.894835 kernel: audit: initializing netlink subsys (disabled) Feb 13 19:29:31.894845 kernel: audit: type=2000 audit(1739474972.570:1): state=initialized audit_enabled=0 res=1 Feb 13 19:29:31.894852 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 13 19:29:31.894860 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 13 19:29:31.894868 kernel: cpuidle: using governor menu Feb 13 19:29:31.894875 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 13 19:29:31.894883 kernel: dca service started, version 1.12.1 Feb 13 19:29:31.894891 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Feb 13 19:29:31.894899 kernel: PCI: Using configuration type 1 for base access Feb 13 19:29:31.894906 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Feb 13 19:29:31.894916 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Feb 13 19:29:31.894924 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Feb 13 19:29:31.894932 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Feb 13 19:29:31.894939 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Feb 13 19:29:31.894947 kernel: ACPI: Added _OSI(Module Device) Feb 13 19:29:31.894954 kernel: ACPI: Added _OSI(Processor Device) Feb 13 19:29:31.894962 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 13 19:29:31.894970 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 13 19:29:31.894977 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 13 19:29:31.894987 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Feb 13 19:29:31.894995 kernel: ACPI: Interpreter enabled Feb 13 19:29:31.895003 kernel: ACPI: PM: (supports S0 S3 S5) Feb 13 19:29:31.895010 kernel: ACPI: Using IOAPIC for interrupt routing Feb 13 19:29:31.895018 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 13 19:29:31.895026 kernel: PCI: Using E820 reservations for host bridge windows Feb 13 19:29:31.895033 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Feb 13 19:29:31.895041 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 13 19:29:31.895238 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 13 19:29:31.895391 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Feb 13 19:29:31.895513 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Feb 13 19:29:31.895523 kernel: PCI host bridge to bus 0000:00 Feb 13 19:29:31.895648 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Feb 13 19:29:31.895761 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Feb 13 19:29:31.895873 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Feb 13 19:29:31.895990 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] Feb 13 19:29:31.896135 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Feb 13 19:29:31.896250 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] Feb 13 19:29:31.896360 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 13 19:29:31.896502 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Feb 13 19:29:31.896633 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Feb 13 19:29:31.896754 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Feb 13 19:29:31.896880 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Feb 13 19:29:31.897000 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Feb 13 19:29:31.897147 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Feb 13 19:29:31.897270 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 13 19:29:31.897401 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Feb 13 19:29:31.897522 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Feb 13 19:29:31.897647 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Feb 13 19:29:31.897767 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref] Feb 13 19:29:31.897899 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Feb 13 19:29:31.898020 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Feb 
13 19:29:31.898183 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Feb 13 19:29:31.898306 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref] Feb 13 19:29:31.898435 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Feb 13 19:29:31.898563 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Feb 13 19:29:31.898685 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Feb 13 19:29:31.898806 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref] Feb 13 19:29:31.898928 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Feb 13 19:29:31.899057 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Feb 13 19:29:31.899222 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Feb 13 19:29:31.899351 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Feb 13 19:29:31.899476 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Feb 13 19:29:31.899597 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Feb 13 19:29:31.899724 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Feb 13 19:29:31.899844 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Feb 13 19:29:31.899854 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Feb 13 19:29:31.899862 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Feb 13 19:29:31.899870 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Feb 13 19:29:31.899881 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Feb 13 19:29:31.899889 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Feb 13 19:29:31.899897 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Feb 13 19:29:31.899904 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Feb 13 19:29:31.899912 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Feb 13 19:29:31.899920 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Feb 13 19:29:31.899928 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Feb 13 19:29:31.899935 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Feb 13 19:29:31.899943 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Feb 13 19:29:31.899953 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Feb 13 19:29:31.899961 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Feb 13 19:29:31.899969 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Feb 13 19:29:31.899977 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Feb 13 19:29:31.899984 kernel: iommu: Default domain type: Translated Feb 13 19:29:31.899992 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 13 19:29:31.900000 kernel: efivars: Registered efivars operations Feb 13 19:29:31.900007 kernel: PCI: Using ACPI for IRQ routing Feb 13 19:29:31.900015 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 13 19:29:31.900023 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Feb 13 19:29:31.900033 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff] Feb 13 19:29:31.900040 kernel: e820: reserve RAM buffer [mem 0x9b314018-0x9bffffff] Feb 13 19:29:31.900048 kernel: e820: reserve RAM buffer [mem 0x9b351018-0x9bffffff] Feb 13 19:29:31.900056 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff] Feb 13 19:29:31.900076 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff] Feb 13 19:29:31.900084 kernel: e820: reserve RAM buffer [mem 0x9ce92000-0x9fffffff] Feb 13 19:29:31.900091 
kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff] Feb 13 19:29:31.900222 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Feb 13 19:29:31.900346 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Feb 13 19:29:31.900464 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Feb 13 19:29:31.900475 kernel: vgaarb: loaded Feb 13 19:29:31.900483 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Feb 13 19:29:31.900490 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Feb 13 19:29:31.900498 kernel: clocksource: Switched to clocksource kvm-clock Feb 13 19:29:31.900506 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 19:29:31.900514 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 19:29:31.900522 kernel: pnp: PnP ACPI init Feb 13 19:29:31.900657 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved Feb 13 19:29:31.900669 kernel: pnp: PnP ACPI: found 6 devices Feb 13 19:29:31.900677 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 13 19:29:31.900685 kernel: NET: Registered PF_INET protocol family Feb 13 19:29:31.900709 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 13 19:29:31.900720 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Feb 13 19:29:31.900730 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 19:29:31.900738 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 13 19:29:31.900749 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Feb 13 19:29:31.900757 kernel: TCP: Hash tables configured (established 32768 bind 32768) Feb 13 19:29:31.900765 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 19:29:31.900773 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 19:29:31.900781 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 19:29:31.900789 kernel: NET: Registered PF_XDP protocol family Feb 13 19:29:31.900918 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Feb 13 19:29:31.901042 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Feb 13 19:29:31.901222 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 13 19:29:31.901333 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 13 19:29:31.901466 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 13 19:29:31.901622 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window] Feb 13 19:29:31.901735 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Feb 13 19:29:31.901845 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window] Feb 13 19:29:31.901856 kernel: PCI: CLS 0 bytes, default 64 Feb 13 19:29:31.901865 kernel: Initialise system trusted keyrings Feb 13 19:29:31.901877 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Feb 13 19:29:31.901885 kernel: Key type asymmetric registered Feb 13 19:29:31.901894 kernel: Asymmetric key parser 'x509' registered Feb 13 19:29:31.901902 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Feb 13 19:29:31.901910 kernel: io scheduler mq-deadline registered Feb 13 19:29:31.901918 kernel: io scheduler kyber registered Feb 13 19:29:31.901926 kernel: io scheduler bfq registered Feb 13 
19:29:31.901934 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 13 19:29:31.901943 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Feb 13 19:29:31.901954 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Feb 13 19:29:31.901964 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Feb 13 19:29:31.901972 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 19:29:31.901980 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 13 19:29:31.901989 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Feb 13 19:29:31.901997 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Feb 13 19:29:31.902008 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Feb 13 19:29:31.902152 kernel: rtc_cmos 00:04: RTC can wake from S4 Feb 13 19:29:31.902166 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Feb 13 19:29:31.902279 kernel: rtc_cmos 00:04: registered as rtc0 Feb 13 19:29:31.902392 kernel: rtc_cmos 00:04: setting system clock to 2025-02-13T19:29:31 UTC (1739474971) Feb 13 19:29:31.902505 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Feb 13 19:29:31.902516 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Feb 13 19:29:31.902524 kernel: efifb: probing for efifb Feb 13 19:29:31.902536 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Feb 13 19:29:31.902544 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Feb 13 19:29:31.902553 kernel: efifb: scrolling: redraw Feb 13 19:29:31.902561 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Feb 13 19:29:31.902569 kernel: Console: switching to colour frame buffer device 160x50 Feb 13 19:29:31.902577 kernel: fb0: EFI VGA frame buffer device Feb 13 19:29:31.902585 kernel: pstore: Using crash dump compression: deflate Feb 13 19:29:31.902593 kernel: pstore: Registered efi_pstore as persistent store backend Feb 13 19:29:31.902602 kernel: NET: Registered PF_INET6 protocol family Feb 13 19:29:31.902612 kernel: Segment Routing with IPv6 Feb 13 19:29:31.902621 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 19:29:31.902629 kernel: NET: Registered PF_PACKET protocol family Feb 13 19:29:31.902637 kernel: Key type dns_resolver registered Feb 13 19:29:31.902645 kernel: IPI shorthand broadcast: enabled Feb 13 19:29:31.902653 kernel: sched_clock: Marking stable (572003221, 151631506)->(768054596, -44419869) Feb 13 19:29:31.902661 kernel: registered taskstats version 1 Feb 13 19:29:31.902669 kernel: Loading compiled-in X.509 certificates Feb 13 19:29:31.902678 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: b3acedbed401b3cd9632ee9302ddcce254d8924d' Feb 13 19:29:31.902690 kernel: Key type .fscrypt registered Feb 13 19:29:31.902698 kernel: Key type fscrypt-provisioning registered Feb 13 19:29:31.902706 kernel: ima: No TPM chip found, activating TPM-bypass! 
Feb 13 19:29:31.902714 kernel: ima: Allocated hash algorithm: sha1 Feb 13 19:29:31.902722 kernel: ima: No architecture policies found Feb 13 19:29:31.902730 kernel: clk: Disabling unused clocks Feb 13 19:29:31.902738 kernel: Freeing unused kernel image (initmem) memory: 43320K Feb 13 19:29:31.902747 kernel: Write protecting the kernel read-only data: 38912k Feb 13 19:29:31.902755 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K Feb 13 19:29:31.902765 kernel: Run /init as init process Feb 13 19:29:31.902773 kernel: with arguments: Feb 13 19:29:31.902782 kernel: /init Feb 13 19:29:31.902790 kernel: with environment: Feb 13 19:29:31.902798 kernel: HOME=/ Feb 13 19:29:31.902806 kernel: TERM=linux Feb 13 19:29:31.902814 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 19:29:31.902824 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 19:29:31.902837 systemd[1]: Detected virtualization kvm. Feb 13 19:29:31.902846 systemd[1]: Detected architecture x86-64. Feb 13 19:29:31.902854 systemd[1]: Running in initrd. Feb 13 19:29:31.902863 systemd[1]: No hostname configured, using default hostname. Feb 13 19:29:31.902871 systemd[1]: Hostname set to . Feb 13 19:29:31.902880 systemd[1]: Initializing machine ID from VM UUID. Feb 13 19:29:31.902889 systemd[1]: Queued start job for default target initrd.target. Feb 13 19:29:31.902897 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:29:31.902908 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:29:31.902918 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 19:29:31.902926 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 19:29:31.902935 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 19:29:31.902944 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 19:29:31.902955 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 19:29:31.902966 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 19:29:31.902975 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:29:31.902984 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:29:31.902992 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:29:31.903001 systemd[1]: Reached target slices.target - Slice Units. Feb 13 19:29:31.903009 systemd[1]: Reached target swap.target - Swaps. Feb 13 19:29:31.903018 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:29:31.903027 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 19:29:31.903035 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 19:29:31.903046 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 19:29:31.903055 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Feb 13 19:29:31.903075 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:29:31.903084 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 19:29:31.903093 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:29:31.903102 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:29:31.903117 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 19:29:31.903126 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 19:29:31.903135 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 19:29:31.903147 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 19:29:31.903155 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 19:29:31.903164 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 19:29:31.903173 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:29:31.903182 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 19:29:31.903190 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:29:31.903199 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 19:29:31.903228 systemd-journald[192]: Collecting audit messages is disabled. Feb 13 19:29:31.903250 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 19:29:31.903259 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:29:31.903268 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:29:31.903278 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 19:29:31.903286 systemd-journald[192]: Journal started Feb 13 19:29:31.903304 systemd-journald[192]: Runtime Journal (/run/log/journal/620196325ad845298b6e128554d866ac) is 6.0M, max 48.2M, 42.2M free. Feb 13 19:29:31.882007 systemd-modules-load[194]: Inserted module 'overlay' Feb 13 19:29:31.908497 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 19:29:31.911094 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 19:29:31.912026 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 19:29:31.915014 kernel: Bridge firewalling registered Feb 13 19:29:31.915007 systemd-modules-load[194]: Inserted module 'br_netfilter' Feb 13 19:29:31.916523 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 19:29:31.919181 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 19:29:31.921656 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:29:31.924522 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:29:31.926427 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 19:29:31.928960 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:29:31.931675 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:29:31.940900 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Feb 13 19:29:31.943005 dracut-cmdline[223]: dracut-dracut-053 Feb 13 19:29:31.945718 dracut-cmdline[223]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=015d1d9e5e601f6a4e226c935072d3d0819e7eb2da20e68715973498f21aa3fe Feb 13 19:29:31.954258 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 19:29:31.983061 systemd-resolved[239]: Positive Trust Anchors: Feb 13 19:29:31.983085 systemd-resolved[239]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 19:29:31.983122 systemd-resolved[239]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 19:29:31.985570 systemd-resolved[239]: Defaulting to hostname 'linux'. Feb 13 19:29:31.986568 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 19:29:31.993745 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:29:32.031093 kernel: SCSI subsystem initialized Feb 13 19:29:32.040087 kernel: Loading iSCSI transport class v2.0-870. Feb 13 19:29:32.051093 kernel: iscsi: registered transport (tcp) Feb 13 19:29:32.071088 kernel: iscsi: registered transport (qla4xxx) Feb 13 19:29:32.071112 kernel: QLogic iSCSI HBA Driver Feb 13 19:29:32.114176 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 19:29:32.133186 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 19:29:32.158572 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 19:29:32.158604 kernel: device-mapper: uevent: version 1.0.3 Feb 13 19:29:32.159584 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 19:29:32.200092 kernel: raid6: avx2x4 gen() 23028 MB/s Feb 13 19:29:32.217087 kernel: raid6: avx2x2 gen() 22328 MB/s Feb 13 19:29:32.234173 kernel: raid6: avx2x1 gen() 18733 MB/s Feb 13 19:29:32.234192 kernel: raid6: using algorithm avx2x4 gen() 23028 MB/s Feb 13 19:29:32.252161 kernel: raid6: .... xor() 7463 MB/s, rmw enabled Feb 13 19:29:32.252197 kernel: raid6: using avx2x2 recovery algorithm Feb 13 19:29:32.272088 kernel: xor: automatically using best checksumming function avx Feb 13 19:29:32.417105 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 19:29:32.430319 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 19:29:32.447210 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:29:32.458722 systemd-udevd[415]: Using default interface naming scheme 'v255'. Feb 13 19:29:32.462916 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Feb 13 19:29:32.474242 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 19:29:32.487152 dracut-pre-trigger[418]: rd.md=0: removing MD RAID activation Feb 13 19:29:32.517669 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 19:29:32.531223 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 19:29:32.592456 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:29:32.601280 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 19:29:32.613355 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 19:29:32.615670 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 19:29:32.617390 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:29:32.619848 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 19:29:32.626089 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Feb 13 19:29:32.650645 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Feb 13 19:29:32.650810 kernel: cryptd: max_cpu_qlen set to 1000 Feb 13 19:29:32.650822 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 19:29:32.650839 kernel: GPT:9289727 != 19775487 Feb 13 19:29:32.650850 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 19:29:32.650860 kernel: GPT:9289727 != 19775487 Feb 13 19:29:32.650870 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 19:29:32.650880 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 19:29:32.650890 kernel: libata version 3.00 loaded. Feb 13 19:29:32.627260 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 19:29:32.637816 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 19:29:32.658111 kernel: ahci 0000:00:1f.2: version 3.0 Feb 13 19:29:32.684261 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Feb 13 19:29:32.684282 kernel: AVX2 version of gcm_enc/dec engaged. 
Feb 13 19:29:32.684292 kernel: AES CTR mode by8 optimization enabled Feb 13 19:29:32.684303 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Feb 13 19:29:32.684457 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Feb 13 19:29:32.684602 kernel: scsi host0: ahci Feb 13 19:29:32.684756 kernel: scsi host1: ahci Feb 13 19:29:32.684926 kernel: scsi host2: ahci Feb 13 19:29:32.687501 kernel: scsi host3: ahci Feb 13 19:29:32.687649 kernel: scsi host4: ahci Feb 13 19:29:32.687792 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (466) Feb 13 19:29:32.687809 kernel: scsi host5: ahci Feb 13 19:29:32.687952 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Feb 13 19:29:32.687963 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Feb 13 19:29:32.687974 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Feb 13 19:29:32.687984 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Feb 13 19:29:32.687994 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Feb 13 19:29:32.688004 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Feb 13 19:29:32.688014 kernel: BTRFS: device fsid c7adc9b8-df7f-4a5f-93bf-204def2767a9 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (465) Feb 13 19:29:32.670368 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 19:29:32.670536 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:29:32.673841 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:29:32.675210 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:29:32.675423 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:29:32.677161 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:29:32.683627 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:29:32.703574 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Feb 13 19:29:32.709266 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:29:32.715022 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Feb 13 19:29:32.726244 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 19:29:32.731235 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Feb 13 19:29:32.732476 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Feb 13 19:29:32.747201 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 19:29:32.748975 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:29:32.755806 disk-uuid[557]: Primary Header is updated. Feb 13 19:29:32.755806 disk-uuid[557]: Secondary Entries is updated. Feb 13 19:29:32.755806 disk-uuid[557]: Secondary Header is updated. 
Feb 13 19:29:32.760104 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 19:29:32.765111 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 19:29:32.770692 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:29:32.992094 kernel: ata2: SATA link down (SStatus 0 SControl 300) Feb 13 19:29:32.992178 kernel: ata1: SATA link down (SStatus 0 SControl 300) Feb 13 19:29:32.992190 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Feb 13 19:29:32.993891 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Feb 13 19:29:32.993917 kernel: ata3.00: applying bridge limits Feb 13 19:29:32.994089 kernel: ata3.00: configured for UDMA/100 Feb 13 19:29:32.995118 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Feb 13 19:29:33.000093 kernel: ata6: SATA link down (SStatus 0 SControl 300) Feb 13 19:29:33.000109 kernel: ata5: SATA link down (SStatus 0 SControl 300) Feb 13 19:29:33.001100 kernel: ata4: SATA link down (SStatus 0 SControl 300) Feb 13 19:29:33.053590 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Feb 13 19:29:33.075632 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 13 19:29:33.075646 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Feb 13 19:29:33.765764 disk-uuid[558]: The operation has completed successfully. Feb 13 19:29:33.767299 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 19:29:33.793477 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 19:29:33.793604 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 19:29:33.818220 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 19:29:33.821168 sh[594]: Success Feb 13 19:29:33.833111 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Feb 13 19:29:33.866209 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 19:29:33.890995 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 19:29:33.893518 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Feb 13 19:29:33.905344 kernel: BTRFS info (device dm-0): first mount of filesystem c7adc9b8-df7f-4a5f-93bf-204def2767a9 Feb 13 19:29:33.905391 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Feb 13 19:29:33.905402 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 19:29:33.907391 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 19:29:33.907416 kernel: BTRFS info (device dm-0): using free space tree Feb 13 19:29:33.912313 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 19:29:33.913015 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 19:29:33.913894 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 19:29:33.915545 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 19:29:33.930095 kernel: BTRFS info (device vda6): first mount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f Feb 13 19:29:33.930129 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 19:29:33.930141 kernel: BTRFS info (device vda6): using free space tree Feb 13 19:29:33.933098 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 19:29:33.941529 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Feb 13 19:29:33.943214 kernel: BTRFS info (device vda6): last unmount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f Feb 13 19:29:33.952437 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 19:29:33.960195 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 19:29:34.016874 ignition[694]: Ignition 2.20.0 Feb 13 19:29:34.016885 ignition[694]: Stage: fetch-offline Feb 13 19:29:34.016919 ignition[694]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:29:34.016929 ignition[694]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:29:34.017033 ignition[694]: parsed url from cmdline: "" Feb 13 19:29:34.017038 ignition[694]: no config URL provided Feb 13 19:29:34.017045 ignition[694]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 19:29:34.017087 ignition[694]: no config at "/usr/lib/ignition/user.ign" Feb 13 19:29:34.017120 ignition[694]: op(1): [started] loading QEMU firmware config module Feb 13 19:29:34.017127 ignition[694]: op(1): executing: "modprobe" "qemu_fw_cfg" Feb 13 19:29:34.026874 ignition[694]: op(1): [finished] loading QEMU firmware config module Feb 13 19:29:34.035943 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:29:34.046201 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 19:29:34.069594 systemd-networkd[784]: lo: Link UP Feb 13 19:29:34.069604 systemd-networkd[784]: lo: Gained carrier Feb 13 19:29:34.071836 ignition[694]: parsing config with SHA512: 2bcb3f42914b4a639d8da66d4114beb324cefcf21c2adf3ba73846026b1d7410bf5a28c3761157437c07f32756b5800962c9f473a32cbcc545ac93cac49e6df2 Feb 13 19:29:34.072677 systemd-networkd[784]: Enumeration completed Feb 13 19:29:34.072757 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 19:29:34.073907 systemd[1]: Reached target network.target - Network. Feb 13 19:29:34.076094 unknown[694]: fetched base config from "system" Feb 13 19:29:34.076109 unknown[694]: fetched user config from "qemu" Feb 13 19:29:34.077805 ignition[694]: fetch-offline: fetch-offline passed Feb 13 19:29:34.077457 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:29:34.077874 ignition[694]: Ignition finished successfully Feb 13 19:29:34.077461 systemd-networkd[784]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:29:34.080217 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 19:29:34.083165 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 13 19:29:34.088533 systemd-networkd[784]: eth0: Link UP Feb 13 19:29:34.088541 systemd-networkd[784]: eth0: Gained carrier Feb 13 19:29:34.088549 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:29:34.092189 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Feb 13 19:29:34.103120 systemd-networkd[784]: eth0: DHCPv4 address 10.0.0.116/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 19:29:34.104173 ignition[787]: Ignition 2.20.0 Feb 13 19:29:34.104182 ignition[787]: Stage: kargs Feb 13 19:29:34.104349 ignition[787]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:29:34.104364 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:29:34.105326 ignition[787]: kargs: kargs passed Feb 13 19:29:34.105369 ignition[787]: Ignition finished successfully Feb 13 19:29:34.108979 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 19:29:34.121209 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 19:29:34.132141 ignition[797]: Ignition 2.20.0 Feb 13 19:29:34.132153 ignition[797]: Stage: disks Feb 13 19:29:34.132335 ignition[797]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:29:34.132349 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:29:34.133222 ignition[797]: disks: disks passed Feb 13 19:29:34.133275 ignition[797]: Ignition finished successfully Feb 13 19:29:34.138813 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 19:29:34.139082 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 19:29:34.141842 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 19:29:34.142047 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 19:29:34.142391 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:29:34.142717 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:29:34.156202 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 19:29:34.166356 systemd-resolved[239]: Detected conflict on linux IN A 10.0.0.116 Feb 13 19:29:34.166375 systemd-resolved[239]: Hostname conflict, changing published hostname from 'linux' to 'linux10'. Feb 13 19:29:34.168367 systemd-fsck[808]: ROOT: clean, 14/553520 files, 52654/553472 blocks Feb 13 19:29:34.174552 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 19:29:34.181163 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 19:29:34.265093 kernel: EXT4-fs (vda9): mounted filesystem 7d46b70d-4c30-46e6-9935-e1f7fb523560 r/w with ordered data mode. Quota mode: none. Feb 13 19:29:34.265143 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 19:29:34.266586 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 19:29:34.279143 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 19:29:34.280905 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 19:29:34.283633 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Feb 13 19:29:34.283695 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). 
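The entries above show systemd-networkd bringing up eth0 and acquiring a DHCPv4 lease (10.0.0.116/16 via 10.0.0.1) while Ignition runs its kargs and disks stages. As an illustrative aid only, not part of the captured journal, a minimal Python sketch for pulling that lease information out of a saved copy of this log could look like the following; the file name boot.log is an assumption, not something referenced by the log itself.

    import re

    # Matches entries like the one captured above:
    #   systemd-networkd[784]: eth0: DHCPv4 address 10.0.0.116/16, gateway 10.0.0.1 acquired from 10.0.0.1
    LEASE_RE = re.compile(
        r"systemd-networkd\[\d+\]: (?P<iface>\S+): DHCPv4 address (?P<addr>[0-9./]+), "
        r"gateway (?P<gw>[0-9.]+) acquired from (?P<server>[0-9.]+)"
    )

    def dhcp_leases(journal_text):
        """Yield (interface, address/prefix, gateway, server) for each DHCPv4 lease entry."""
        for m in LEASE_RE.finditer(journal_text):
            yield m.group("iface"), m.group("addr"), m.group("gw"), m.group("server")

    # boot.log is a hypothetical saved copy of this journal.
    if __name__ == "__main__":
        with open("boot.log") as f:
            for iface, addr, gw, server in dhcp_leases(f.read()):
                print(f"{iface}: {addr} via {gw} (DHCP server {server})")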
Feb 13 19:29:34.294060 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (816) Feb 13 19:29:34.294097 kernel: BTRFS info (device vda6): first mount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f Feb 13 19:29:34.294109 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 19:29:34.294120 kernel: BTRFS info (device vda6): using free space tree Feb 13 19:29:34.283724 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 19:29:34.297346 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 19:29:34.289376 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 19:29:34.295012 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 19:29:34.299509 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 19:29:34.330120 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 19:29:34.333919 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory Feb 13 19:29:34.337739 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 19:29:34.341347 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 19:29:34.424108 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 19:29:34.436189 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 19:29:34.437922 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 19:29:34.448091 kernel: BTRFS info (device vda6): last unmount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f Feb 13 19:29:34.462814 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 19:29:34.469986 ignition[930]: INFO : Ignition 2.20.0 Feb 13 19:29:34.469986 ignition[930]: INFO : Stage: mount Feb 13 19:29:34.471747 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:29:34.471747 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:29:34.471747 ignition[930]: INFO : mount: mount passed Feb 13 19:29:34.471747 ignition[930]: INFO : Ignition finished successfully Feb 13 19:29:34.473624 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 19:29:34.484151 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 19:29:34.904630 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 19:29:34.917219 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 19:29:34.926376 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (943) Feb 13 19:29:34.926422 kernel: BTRFS info (device vda6): first mount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f Feb 13 19:29:34.926438 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 19:29:34.928078 kernel: BTRFS info (device vda6): using free space tree Feb 13 19:29:34.931092 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 19:29:34.931997 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 19:29:34.956082 ignition[960]: INFO : Ignition 2.20.0 Feb 13 19:29:34.956082 ignition[960]: INFO : Stage: files Feb 13 19:29:34.958019 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:29:34.958019 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:29:34.958019 ignition[960]: DEBUG : files: compiled without relabeling support, skipping Feb 13 19:29:34.962101 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 19:29:34.962101 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 19:29:34.966938 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 19:29:34.968489 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 19:29:34.970417 unknown[960]: wrote ssh authorized keys file for user: core Feb 13 19:29:34.971625 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 19:29:34.974075 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 13 19:29:34.976075 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Feb 13 19:29:35.028634 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 13 19:29:35.317136 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 13 19:29:35.317136 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Feb 13 19:29:35.321321 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 19:29:35.321321 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 19:29:35.321321 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 19:29:35.321321 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 19:29:35.321321 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 19:29:35.321321 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 19:29:35.321321 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 19:29:35.321321 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 19:29:35.321321 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 19:29:35.321321 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Feb 13 19:29:35.321321 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Feb 13 19:29:35.321321 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Feb 13 19:29:35.321321 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Feb 13 19:29:35.624847 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Feb 13 19:29:35.753614 systemd-networkd[784]: eth0: Gained IPv6LL Feb 13 19:29:35.989398 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Feb 13 19:29:35.989398 ignition[960]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Feb 13 19:29:35.993289 ignition[960]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 19:29:35.995521 ignition[960]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 19:29:35.995521 ignition[960]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Feb 13 19:29:35.995521 ignition[960]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Feb 13 19:29:35.999815 ignition[960]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 13 19:29:36.001714 ignition[960]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 13 19:29:36.001714 ignition[960]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Feb 13 19:29:36.001714 ignition[960]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Feb 13 19:29:36.027040 ignition[960]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Feb 13 19:29:36.032424 ignition[960]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Feb 13 19:29:36.034126 ignition[960]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Feb 13 19:29:36.034126 ignition[960]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Feb 13 19:29:36.034126 ignition[960]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 19:29:36.034126 ignition[960]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 19:29:36.034126 ignition[960]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 19:29:36.034126 ignition[960]: INFO : files: files passed Feb 13 19:29:36.034126 ignition[960]: INFO : Ignition finished successfully Feb 13 19:29:36.046919 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 19:29:36.055386 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 19:29:36.059005 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Feb 13 19:29:36.062271 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 19:29:36.063491 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 19:29:36.069238 initrd-setup-root-after-ignition[989]: grep: /sysroot/oem/oem-release: No such file or directory Feb 13 19:29:36.073054 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:29:36.073054 initrd-setup-root-after-ignition[991]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:29:36.076363 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:29:36.078984 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:29:36.079270 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 19:29:36.093293 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 19:29:36.121490 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 19:29:36.121645 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 19:29:36.125084 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 19:29:36.125165 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 19:29:36.127332 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 19:29:36.128249 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 19:29:36.146731 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 19:29:36.155293 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 19:29:36.164327 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:29:36.166657 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:29:36.169051 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 19:29:36.170913 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 19:29:36.171927 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 19:29:36.174450 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 19:29:36.176536 systemd[1]: Stopped target basic.target - Basic System. Feb 13 19:29:36.178398 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 19:29:36.180608 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 19:29:36.182970 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 19:29:36.185230 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 19:29:36.187316 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 19:29:36.189808 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 19:29:36.191895 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 19:29:36.193946 systemd[1]: Stopped target swap.target - Swaps. Feb 13 19:29:36.195595 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 19:29:36.196606 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 19:29:36.198864 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Feb 13 19:29:36.201077 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:29:36.203448 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 19:29:36.204413 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:29:36.207011 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 19:29:36.208002 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 19:29:36.210281 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 19:29:36.211367 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 19:29:36.213726 systemd[1]: Stopped target paths.target - Path Units. Feb 13 19:29:36.215511 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 19:29:36.216608 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:29:36.219321 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 19:29:36.221174 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 19:29:36.223044 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 19:29:36.223918 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 19:29:36.225882 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 19:29:36.226779 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 19:29:36.228849 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 19:29:36.230032 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:29:36.232586 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 19:29:36.233586 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 19:29:36.247202 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 19:29:36.249765 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 19:29:36.251548 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 19:29:36.252651 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:29:36.255302 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 19:29:36.255445 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 19:29:36.259494 ignition[1016]: INFO : Ignition 2.20.0 Feb 13 19:29:36.259494 ignition[1016]: INFO : Stage: umount Feb 13 19:29:36.259494 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:29:36.259494 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:29:36.259494 ignition[1016]: INFO : umount: umount passed Feb 13 19:29:36.259494 ignition[1016]: INFO : Ignition finished successfully Feb 13 19:29:36.266606 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 19:29:36.267610 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 19:29:36.271509 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 19:29:36.272537 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 19:29:36.276106 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 19:29:36.277871 systemd[1]: Stopped target network.target - Network. Feb 13 19:29:36.279718 systemd[1]: ignition-disks.service: Deactivated successfully. 
Feb 13 19:29:36.280653 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 19:29:36.282785 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 19:29:36.282840 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 19:29:36.285946 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 19:29:36.286862 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 19:29:36.288989 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 19:29:36.289050 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 19:29:36.292803 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 19:29:36.295031 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 19:29:36.297280 systemd-networkd[784]: eth0: DHCPv6 lease lost Feb 13 19:29:36.299356 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 19:29:36.300568 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 19:29:36.303104 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 19:29:36.304172 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 19:29:36.308313 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 19:29:36.308371 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:29:36.318139 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 19:29:36.318210 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 19:29:36.318263 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:29:36.321230 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 19:29:36.321278 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:29:36.323324 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 19:29:36.323377 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 19:29:36.325751 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 19:29:36.325802 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:29:36.328039 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:29:36.345569 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 19:29:36.345784 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:29:36.348214 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 19:29:36.348323 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 19:29:36.350751 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 19:29:36.350820 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 19:29:36.352135 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 19:29:36.352173 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:29:36.354119 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 19:29:36.354169 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 19:29:36.355528 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 19:29:36.355574 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. 
Feb 13 19:29:36.356009 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 19:29:36.356055 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:29:36.369258 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 19:29:36.370400 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 19:29:36.370473 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:29:36.371642 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Feb 13 19:29:36.371704 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 19:29:36.373872 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 19:29:36.373930 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:29:36.376261 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:29:36.376310 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:29:36.379976 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 19:29:36.380116 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 19:29:36.427913 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 19:29:36.428105 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 19:29:36.429408 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 19:29:36.430856 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 19:29:36.430909 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 19:29:36.457272 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 19:29:36.466735 systemd[1]: Switching root. Feb 13 19:29:36.500633 systemd-journald[192]: Journal stopped Feb 13 19:29:37.515516 systemd-journald[192]: Received SIGTERM from PID 1 (systemd). Feb 13 19:29:37.515588 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 19:29:37.515608 kernel: SELinux: policy capability open_perms=1 Feb 13 19:29:37.515620 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 19:29:37.515631 kernel: SELinux: policy capability always_check_network=0 Feb 13 19:29:37.515645 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 19:29:37.515659 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 19:29:37.515670 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 19:29:37.515681 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 19:29:37.515693 kernel: audit: type=1403 audit(1739474976.785:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 19:29:37.515705 systemd[1]: Successfully loaded SELinux policy in 41.303ms. Feb 13 19:29:37.515726 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 15.817ms. Feb 13 19:29:37.515741 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 19:29:37.515756 systemd[1]: Detected virtualization kvm. Feb 13 19:29:37.515771 systemd[1]: Detected architecture x86-64. 
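Between the "Switching root" entry and the restarted journal above, the timestamps alone give a rough feel for how long the initrd-to-real-root handover took. A tiny, illustrative Python sketch of that arithmetic follows, assuming the timestamp format shown in these entries (no year field) and that the clock was not stepped in between.

    from datetime import datetime

    TS_FORMAT = "%b %d %H:%M:%S.%f"   # matches "Feb 13 19:29:36.500633"; the year is not part of these entries

    def elapsed_seconds(start, end):
        """Seconds between two journal timestamps; only meaningful if the clock was not stepped in between."""
        return (datetime.strptime(end, TS_FORMAT) - datetime.strptime(start, TS_FORMAT)).total_seconds()

    # From this capture: the initrd journal stops at 19:29:36.500633 and the restarted journal's
    # first entries appear at 19:29:37.515516, i.e. roughly one second for the handover.
    print(elapsed_seconds("Feb 13 19:29:36.500633", "Feb 13 19:29:37.515516"))   # ~1.01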
Feb 13 19:29:37.515783 systemd[1]: Detected first boot. Feb 13 19:29:37.515795 systemd[1]: Initializing machine ID from VM UUID. Feb 13 19:29:37.515807 zram_generator::config[1061]: No configuration found. Feb 13 19:29:37.515820 systemd[1]: Populated /etc with preset unit settings. Feb 13 19:29:37.515832 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 19:29:37.515844 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 19:29:37.515857 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 19:29:37.515872 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 19:29:37.515885 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 19:29:37.515897 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 19:29:37.515909 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 19:29:37.515921 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 19:29:37.515935 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 19:29:37.515948 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 19:29:37.515959 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 19:29:37.515988 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:29:37.516000 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:29:37.516013 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 19:29:37.516025 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 19:29:37.516038 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 19:29:37.516051 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 19:29:37.516074 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Feb 13 19:29:37.516086 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:29:37.516098 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 19:29:37.516113 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 19:29:37.516125 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 19:29:37.516137 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 19:29:37.516149 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:29:37.516161 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 19:29:37.516174 systemd[1]: Reached target slices.target - Slice Units. Feb 13 19:29:37.516185 systemd[1]: Reached target swap.target - Swaps. Feb 13 19:29:37.516198 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 19:29:37.516212 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 19:29:37.516226 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:29:37.516238 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Feb 13 19:29:37.516251 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:29:37.516263 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 19:29:37.516274 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 19:29:37.516286 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 19:29:37.516299 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 19:29:37.516311 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:29:37.516326 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 19:29:37.516338 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 19:29:37.516350 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 19:29:37.516362 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 19:29:37.516374 systemd[1]: Reached target machines.target - Containers. Feb 13 19:29:37.516386 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 19:29:37.516398 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:29:37.516411 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 19:29:37.516427 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 19:29:37.516439 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:29:37.516451 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:29:37.516463 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:29:37.516475 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 19:29:37.516488 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:29:37.516500 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 19:29:37.516512 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 19:29:37.516525 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 19:29:37.516540 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 19:29:37.516563 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 19:29:37.516575 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 19:29:37.516587 kernel: fuse: init (API version 7.39) Feb 13 19:29:37.516599 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 19:29:37.516611 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 19:29:37.516623 kernel: loop: module loaded Feb 13 19:29:37.516634 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 19:29:37.516647 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 19:29:37.516661 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 19:29:37.516673 systemd[1]: Stopped verity-setup.service. 
Feb 13 19:29:37.516686 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:29:37.516698 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 19:29:37.516710 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 19:29:37.516742 systemd-journald[1133]: Collecting audit messages is disabled. Feb 13 19:29:37.516769 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 19:29:37.516781 kernel: ACPI: bus type drm_connector registered Feb 13 19:29:37.516794 systemd-journald[1133]: Journal started Feb 13 19:29:37.516816 systemd-journald[1133]: Runtime Journal (/run/log/journal/620196325ad845298b6e128554d866ac) is 6.0M, max 48.2M, 42.2M free. Feb 13 19:29:37.297721 systemd[1]: Queued start job for default target multi-user.target. Feb 13 19:29:37.518132 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 19:29:37.317802 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Feb 13 19:29:37.318281 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 19:29:37.519307 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 19:29:37.520808 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 19:29:37.522197 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 19:29:37.523586 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:29:37.525309 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 19:29:37.525481 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 19:29:37.527039 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:29:37.527224 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:29:37.528792 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:29:37.528963 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:29:37.530476 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 19:29:37.532004 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:29:37.532183 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:29:37.533818 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 19:29:37.533999 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 19:29:37.535501 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:29:37.535675 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:29:37.537169 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 19:29:37.538678 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 19:29:37.540230 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 19:29:37.553396 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 19:29:37.563178 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 19:29:37.565423 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Feb 13 19:29:37.566575 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 19:29:37.566599 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 19:29:37.568599 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 19:29:37.570902 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 19:29:37.573117 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 19:29:37.574269 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:29:37.577482 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 19:29:37.580880 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 19:29:37.583654 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:29:37.587161 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 19:29:37.588505 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:29:37.589531 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:29:37.592383 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 19:29:37.605570 systemd-journald[1133]: Time spent on flushing to /var/log/journal/620196325ad845298b6e128554d866ac is 14.827ms for 1041 entries. Feb 13 19:29:37.605570 systemd-journald[1133]: System Journal (/var/log/journal/620196325ad845298b6e128554d866ac) is 8.0M, max 195.6M, 187.6M free. Feb 13 19:29:37.630236 systemd-journald[1133]: Received client request to flush runtime journal. Feb 13 19:29:37.630281 kernel: loop0: detected capacity change from 0 to 141000 Feb 13 19:29:37.598159 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 19:29:37.602138 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:29:37.603833 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 19:29:37.605329 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 19:29:37.609605 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 19:29:37.615549 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 19:29:37.619532 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 19:29:37.635273 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 19:29:37.638500 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 19:29:37.640775 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 19:29:37.642562 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:29:37.652087 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 19:29:37.652991 systemd-tmpfiles[1176]: ACLs are not supported, ignoring. Feb 13 19:29:37.653356 systemd-tmpfiles[1176]: ACLs are not supported, ignoring. 
Feb 13 19:29:37.660939 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 19:29:37.665796 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 19:29:37.666554 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 19:29:37.678631 kernel: loop1: detected capacity change from 0 to 205544 Feb 13 19:29:37.676339 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 19:29:37.679150 udevadm[1187]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 13 19:29:37.706750 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 19:29:37.722608 kernel: loop2: detected capacity change from 0 to 138184 Feb 13 19:29:37.720716 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 19:29:37.741426 systemd-tmpfiles[1198]: ACLs are not supported, ignoring. Feb 13 19:29:37.741448 systemd-tmpfiles[1198]: ACLs are not supported, ignoring. Feb 13 19:29:37.747568 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:29:37.764102 kernel: loop3: detected capacity change from 0 to 141000 Feb 13 19:29:37.779097 kernel: loop4: detected capacity change from 0 to 205544 Feb 13 19:29:37.790102 kernel: loop5: detected capacity change from 0 to 138184 Feb 13 19:29:37.801787 (sd-merge)[1205]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Feb 13 19:29:37.802392 (sd-merge)[1205]: Merged extensions into '/usr'. Feb 13 19:29:37.807416 systemd[1]: Reloading requested from client PID 1175 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 19:29:37.807446 systemd[1]: Reloading... Feb 13 19:29:37.873097 zram_generator::config[1231]: No configuration found. Feb 13 19:29:37.927145 ldconfig[1170]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 19:29:37.994834 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:29:38.043285 systemd[1]: Reloading finished in 235 ms. Feb 13 19:29:38.076997 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 19:29:38.078571 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 19:29:38.092224 systemd[1]: Starting ensure-sysext.service... Feb 13 19:29:38.094267 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 19:29:38.103424 systemd[1]: Reloading requested from client PID 1268 ('systemctl') (unit ensure-sysext.service)... Feb 13 19:29:38.103441 systemd[1]: Reloading... Feb 13 19:29:38.118251 systemd-tmpfiles[1269]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 19:29:38.118541 systemd-tmpfiles[1269]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 19:29:38.119692 systemd-tmpfiles[1269]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 19:29:38.119990 systemd-tmpfiles[1269]: ACLs are not supported, ignoring. Feb 13 19:29:38.120079 systemd-tmpfiles[1269]: ACLs are not supported, ignoring. 
Feb 13 19:29:38.124300 systemd-tmpfiles[1269]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:29:38.124312 systemd-tmpfiles[1269]: Skipping /boot Feb 13 19:29:38.136531 systemd-tmpfiles[1269]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:29:38.136544 systemd-tmpfiles[1269]: Skipping /boot Feb 13 19:29:38.174245 zram_generator::config[1302]: No configuration found. Feb 13 19:29:38.268408 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:29:38.317912 systemd[1]: Reloading finished in 214 ms. Feb 13 19:29:38.335544 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 19:29:38.349494 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:29:38.358547 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:29:38.360900 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 19:29:38.363346 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 19:29:38.366531 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 19:29:38.370408 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:29:38.373352 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 19:29:38.379238 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:29:38.379406 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:29:38.380621 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:29:38.384628 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:29:38.388183 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:29:38.389764 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:29:38.392663 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 19:29:38.393692 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:29:38.397987 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 19:29:38.400045 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:29:38.400432 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:29:38.402454 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:29:38.402892 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:29:38.404621 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:29:38.405246 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:29:38.412165 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 19:29:38.417606 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Feb 13 19:29:38.417779 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:29:38.426338 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:29:38.428822 augenrules[1370]: No rules Feb 13 19:29:38.429979 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:29:38.432303 systemd-udevd[1340]: Using default interface naming scheme 'v255'. Feb 13 19:29:38.436265 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:29:38.437522 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:29:38.440394 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 19:29:38.441494 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:29:38.442396 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 19:29:38.444759 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:29:38.444987 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:29:38.446688 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:29:38.446867 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:29:38.449734 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:29:38.450295 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:29:38.452551 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:29:38.452718 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:29:38.456122 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 19:29:38.459559 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 19:29:38.465847 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:29:38.475301 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:29:38.476541 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:29:38.481349 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:29:38.485367 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:29:38.489345 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:29:38.491581 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:29:38.492702 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:29:38.492836 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 19:29:38.492914 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:29:38.493690 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Feb 13 19:29:38.495673 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:29:38.496016 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:29:38.499553 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:29:38.499750 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:29:38.502428 systemd[1]: Finished ensure-sysext.service. Feb 13 19:29:38.509304 augenrules[1390]: /sbin/augenrules: No change Feb 13 19:29:38.517345 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 19:29:38.529269 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 19:29:38.530957 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:29:38.531174 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:29:38.534316 augenrules[1434]: No rules Feb 13 19:29:38.534262 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:29:38.534434 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:29:38.534837 systemd-resolved[1338]: Positive Trust Anchors: Feb 13 19:29:38.534855 systemd-resolved[1338]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 19:29:38.534887 systemd-resolved[1338]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 19:29:38.535884 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:29:38.536122 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:29:38.540463 systemd-resolved[1338]: Defaulting to hostname 'linux'. Feb 13 19:29:38.546329 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 19:29:38.549567 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Feb 13 19:29:38.550331 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:29:38.551578 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:29:38.551645 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:29:38.566151 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1394) Feb 13 19:29:38.594026 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 19:29:38.603358 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 19:29:38.617834 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 19:29:38.619368 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 19:29:38.621718 systemd[1]: Reached target time-set.target - System Time Set. 
Feb 13 19:29:38.623453 systemd-networkd[1430]: lo: Link UP Feb 13 19:29:38.623461 systemd-networkd[1430]: lo: Gained carrier Feb 13 19:29:38.627444 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Feb 13 19:29:38.627042 systemd-networkd[1430]: Enumeration completed Feb 13 19:29:38.627133 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 19:29:38.628584 systemd-networkd[1430]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:29:38.628597 systemd-networkd[1430]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:29:38.629289 systemd-networkd[1430]: eth0: Link UP Feb 13 19:29:38.629301 systemd-networkd[1430]: eth0: Gained carrier Feb 13 19:29:38.629320 systemd-networkd[1430]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:29:38.629675 systemd[1]: Reached target network.target - Network. Feb 13 19:29:38.635220 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Feb 13 19:29:38.637424 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Feb 13 19:29:38.637593 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Feb 13 19:29:38.637772 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Feb 13 19:29:38.639012 kernel: ACPI: button: Power Button [PWRF] Feb 13 19:29:38.639253 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 19:29:38.642186 systemd-networkd[1430]: eth0: DHCPv4 address 10.0.0.116/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 19:29:38.642862 systemd-timesyncd[1431]: Network configuration changed, trying to establish connection. Feb 13 19:29:40.345115 systemd-resolved[1338]: Clock change detected. Flushing caches. Feb 13 19:29:40.346591 systemd-timesyncd[1431]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 13 19:29:40.346658 systemd-timesyncd[1431]: Initial clock synchronization to Thu 2025-02-13 19:29:40.345068 UTC. Feb 13 19:29:40.352339 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Feb 13 19:29:40.378335 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 19:29:40.379643 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:29:40.382753 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:29:40.382961 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:29:40.388625 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:29:40.444494 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:29:40.463621 kernel: kvm_amd: TSC scaling supported Feb 13 19:29:40.463705 kernel: kvm_amd: Nested Virtualization enabled Feb 13 19:29:40.463718 kernel: kvm_amd: Nested Paging enabled Feb 13 19:29:40.463731 kernel: kvm_amd: LBR virtualization supported Feb 13 19:29:40.464673 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Feb 13 19:29:40.464694 kernel: kvm_amd: Virtual GIF supported Feb 13 19:29:40.484473 kernel: EDAC MC: Ver: 3.0.0 Feb 13 19:29:40.511722 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 19:29:40.529451 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... 
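The jump from 19:29:38 to 19:29:40 in the entries above is the step applied when systemd-timesyncd first synchronizes against 10.0.0.1 (it reports the initial synchronization explicitly), which is also why systemd-resolved logs a clock change and flushes its caches. An illustrative calculation of the apparent step from the two adjacent timestamps is sketched below; it is an upper bound, since a little real time also passed between the entries.

    from datetime import datetime

    FMT = "%b %d %H:%M:%S.%f"

    # Adjacent entries from this capture, just before and just after the time step:
    before = datetime.strptime("Feb 13 19:29:38.642862", FMT)  # timesyncd: "trying to establish connection"
    after  = datetime.strptime("Feb 13 19:29:40.345115", FMT)  # resolved: "Clock change detected. Flushing caches."

    # The wall clock appears to advance ~1.70 s between two effectively back-to-back entries;
    # most of that is the NTP step rather than elapsed time.
    print((after - before).total_seconds())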
Feb 13 19:29:40.537271 lvm[1468]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:29:40.584394 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 19:29:40.585895 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:29:40.587001 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:29:40.588151 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 19:29:40.589475 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 19:29:40.590956 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 19:29:40.592125 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 19:29:40.593384 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 19:29:40.594616 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 19:29:40.594640 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:29:40.595537 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:29:40.597531 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 19:29:40.600120 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 19:29:40.610918 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 19:29:40.613386 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 19:29:40.614977 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 19:29:40.616112 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:29:40.617094 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:29:40.618098 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:29:40.618127 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:29:40.619110 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 19:29:40.621365 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 19:29:40.625366 lvm[1472]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:29:40.625454 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 19:29:40.628587 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 19:29:40.629793 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 19:29:40.633453 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 19:29:40.633688 jq[1475]: false Feb 13 19:29:40.640424 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 19:29:40.642684 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 19:29:40.645441 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 19:29:40.648901 dbus-daemon[1474]: [system] SELinux support is enabled Feb 13 19:29:40.651231 systemd[1]: Starting systemd-logind.service - User Login Management... 
Feb 13 19:29:40.652817 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 19:29:40.653276 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 19:29:40.655513 extend-filesystems[1476]: Found loop3 Feb 13 19:29:40.656523 extend-filesystems[1476]: Found loop4 Feb 13 19:29:40.656523 extend-filesystems[1476]: Found loop5 Feb 13 19:29:40.656523 extend-filesystems[1476]: Found sr0 Feb 13 19:29:40.656523 extend-filesystems[1476]: Found vda Feb 13 19:29:40.656523 extend-filesystems[1476]: Found vda1 Feb 13 19:29:40.656523 extend-filesystems[1476]: Found vda2 Feb 13 19:29:40.656523 extend-filesystems[1476]: Found vda3 Feb 13 19:29:40.656523 extend-filesystems[1476]: Found usr Feb 13 19:29:40.656523 extend-filesystems[1476]: Found vda4 Feb 13 19:29:40.656523 extend-filesystems[1476]: Found vda6 Feb 13 19:29:40.656523 extend-filesystems[1476]: Found vda7 Feb 13 19:29:40.656523 extend-filesystems[1476]: Found vda9 Feb 13 19:29:40.656523 extend-filesystems[1476]: Checking size of /dev/vda9 Feb 13 19:29:40.659320 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 19:29:40.669169 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 19:29:40.671220 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 19:29:40.675401 extend-filesystems[1476]: Resized partition /dev/vda9 Feb 13 19:29:40.677058 jq[1493]: true Feb 13 19:29:40.677163 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 19:29:40.680260 extend-filesystems[1497]: resize2fs 1.47.1 (20-May-2024) Feb 13 19:29:40.681963 update_engine[1488]: I20250213 19:29:40.681910 1488 main.cc:92] Flatcar Update Engine starting Feb 13 19:29:40.689462 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 13 19:29:40.689496 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1397) Feb 13 19:29:40.689510 update_engine[1488]: I20250213 19:29:40.683159 1488 update_check_scheduler.cc:74] Next update check in 3m9s Feb 13 19:29:40.687966 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 19:29:40.688207 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 19:29:40.688560 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 19:29:40.688756 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 19:29:40.693625 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 19:29:40.693851 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 19:29:40.711348 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 13 19:29:40.713480 jq[1501]: true Feb 13 19:29:40.725646 (ntainerd)[1502]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 19:29:40.731006 tar[1500]: linux-amd64/helm Feb 13 19:29:40.731879 systemd-logind[1485]: Watching system buttons on /dev/input/event1 (Power Button) Feb 13 19:29:40.731908 systemd-logind[1485]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 13 19:29:40.733012 systemd-logind[1485]: New seat seat0. 
Feb 13 19:29:40.734534 extend-filesystems[1497]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 19:29:40.734534 extend-filesystems[1497]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 19:29:40.734534 extend-filesystems[1497]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 13 19:29:40.739526 extend-filesystems[1476]: Resized filesystem in /dev/vda9 Feb 13 19:29:40.738460 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 19:29:40.738912 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 19:29:40.742880 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 19:29:40.749067 systemd[1]: Started update-engine.service - Update Engine. Feb 13 19:29:40.751554 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 19:29:40.751704 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 19:29:40.753027 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 19:29:40.753132 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 19:29:40.762918 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 19:29:40.776558 bash[1529]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:29:40.795242 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 19:29:40.798544 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 19:29:40.800442 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Feb 13 19:29:40.802690 locksmithd[1530]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 19:29:40.879197 sshd_keygen[1496]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 19:29:40.905220 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 19:29:40.914622 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 19:29:40.919308 systemd[1]: Started sshd@0-10.0.0.116:22-10.0.0.1:59646.service - OpenSSH per-connection server daemon (10.0.0.1:59646). Feb 13 19:29:40.930674 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 19:29:40.930903 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 19:29:40.957348 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 19:29:40.986111 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 19:29:40.992606 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 19:29:41.010675 sshd[1550]: Accepted publickey for core from 10.0.0.1 port 59646 ssh2: RSA SHA256:ENn9hOvI2hLUXcV6iHA8gc9Z4CTEPvGDMkoVtxIuYbg Feb 13 19:29:41.013398 sshd-session[1550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:29:41.205283 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 19:29:41.206694 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 19:29:41.217534 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
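Note: the extend-filesystems entries above show /dev/vda9 being grown on-line from 553472 to 1864699 4k blocks while mounted on /. Assuming the underlying partition has already been enlarged, the same on-line ext4 grow could be done by hand with resize2fs (device name taken from the log):

    # grow the mounted ext4 filesystem on /dev/vda9 to fill its partition, then confirm
    resize2fs /dev/vda9
    df -h /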
Feb 13 19:29:41.224326 containerd[1502]: time="2025-02-13T19:29:41.222747913Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 19:29:41.228574 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 19:29:41.237006 systemd-logind[1485]: New session 1 of user core. Feb 13 19:29:41.246968 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 19:29:41.249643 containerd[1502]: time="2025-02-13T19:29:41.249607274Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:29:41.251695 containerd[1502]: time="2025-02-13T19:29:41.251665514Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:29:41.251724 containerd[1502]: time="2025-02-13T19:29:41.251693487Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 19:29:41.251724 containerd[1502]: time="2025-02-13T19:29:41.251708905Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 19:29:41.251894 containerd[1502]: time="2025-02-13T19:29:41.251876470Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 19:29:41.251923 containerd[1502]: time="2025-02-13T19:29:41.251897950Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 19:29:41.252110 containerd[1502]: time="2025-02-13T19:29:41.251960898Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:29:41.252110 containerd[1502]: time="2025-02-13T19:29:41.251972350Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:29:41.253493 containerd[1502]: time="2025-02-13T19:29:41.253466151Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:29:41.253521 containerd[1502]: time="2025-02-13T19:29:41.253497369Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 19:29:41.253540 containerd[1502]: time="2025-02-13T19:29:41.253517868Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:29:41.253540 containerd[1502]: time="2025-02-13T19:29:41.253529099Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 19:29:41.254323 containerd[1502]: time="2025-02-13T19:29:41.253659794Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:29:41.254323 containerd[1502]: time="2025-02-13T19:29:41.254178777Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Feb 13 19:29:41.254323 containerd[1502]: time="2025-02-13T19:29:41.254299514Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:29:41.254393 containerd[1502]: time="2025-02-13T19:29:41.254323068Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 19:29:41.254438 containerd[1502]: time="2025-02-13T19:29:41.254420691Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 19:29:41.254500 containerd[1502]: time="2025-02-13T19:29:41.254484891Z" level=info msg="metadata content store policy set" policy=shared Feb 13 19:29:41.255556 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 19:29:41.261168 (systemd)[1566]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 19:29:41.262014 containerd[1502]: time="2025-02-13T19:29:41.261766938Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 19:29:41.262014 containerd[1502]: time="2025-02-13T19:29:41.261810099Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 19:29:41.262014 containerd[1502]: time="2025-02-13T19:29:41.261828042Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 19:29:41.262014 containerd[1502]: time="2025-02-13T19:29:41.261845185Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 19:29:41.262014 containerd[1502]: time="2025-02-13T19:29:41.261860093Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 19:29:41.262014 containerd[1502]: time="2025-02-13T19:29:41.261979356Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 19:29:41.262204 containerd[1502]: time="2025-02-13T19:29:41.262196403Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 19:29:41.262354 containerd[1502]: time="2025-02-13T19:29:41.262295920Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 19:29:41.262354 containerd[1502]: time="2025-02-13T19:29:41.262347917Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 19:29:41.262401 containerd[1502]: time="2025-02-13T19:29:41.262363497Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 19:29:41.262401 containerd[1502]: time="2025-02-13T19:29:41.262376852Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 19:29:41.262401 containerd[1502]: time="2025-02-13T19:29:41.262389606Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 19:29:41.262463 containerd[1502]: time="2025-02-13T19:29:41.262401829Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Feb 13 19:29:41.262463 containerd[1502]: time="2025-02-13T19:29:41.262414893Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 19:29:41.262463 containerd[1502]: time="2025-02-13T19:29:41.262428338Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 19:29:41.262463 containerd[1502]: time="2025-02-13T19:29:41.262440020Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 19:29:41.262463 containerd[1502]: time="2025-02-13T19:29:41.262453405Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 19:29:41.262463 containerd[1502]: time="2025-02-13T19:29:41.262463674Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 19:29:41.262570 containerd[1502]: time="2025-02-13T19:29:41.262483502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 19:29:41.262570 containerd[1502]: time="2025-02-13T19:29:41.262496316Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 19:29:41.262570 containerd[1502]: time="2025-02-13T19:29:41.262541270Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 19:29:41.262570 containerd[1502]: time="2025-02-13T19:29:41.262553012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 19:29:41.262570 containerd[1502]: time="2025-02-13T19:29:41.262564494Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 19:29:41.262667 containerd[1502]: time="2025-02-13T19:29:41.262576937Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 19:29:41.262667 containerd[1502]: time="2025-02-13T19:29:41.262588489Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 19:29:41.262667 containerd[1502]: time="2025-02-13T19:29:41.262600311Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 19:29:41.262667 containerd[1502]: time="2025-02-13T19:29:41.262612353Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 19:29:41.262667 containerd[1502]: time="2025-02-13T19:29:41.262626019Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 19:29:41.262667 containerd[1502]: time="2025-02-13T19:29:41.262636579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 19:29:41.262667 containerd[1502]: time="2025-02-13T19:29:41.262647680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 19:29:41.262794 containerd[1502]: time="2025-02-13T19:29:41.262677015Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 19:29:41.262794 containerd[1502]: time="2025-02-13T19:29:41.262691341Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 Feb 13 19:29:41.262794 containerd[1502]: time="2025-02-13T19:29:41.262718903Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 19:29:41.262794 containerd[1502]: time="2025-02-13T19:29:41.262734242Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 19:29:41.262794 containerd[1502]: time="2025-02-13T19:29:41.262744822Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 19:29:41.262889 containerd[1502]: time="2025-02-13T19:29:41.262799905Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 19:29:41.262889 containerd[1502]: time="2025-02-13T19:29:41.262816777Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 19:29:41.262889 containerd[1502]: time="2025-02-13T19:29:41.262825984Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 19:29:41.262949 containerd[1502]: time="2025-02-13T19:29:41.262837766Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 19:29:41.262949 containerd[1502]: time="2025-02-13T19:29:41.262922014Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 19:29:41.262949 containerd[1502]: time="2025-02-13T19:29:41.262935279Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 19:29:41.262949 containerd[1502]: time="2025-02-13T19:29:41.262946450Z" level=info msg="NRI interface is disabled by configuration." Feb 13 19:29:41.263021 containerd[1502]: time="2025-02-13T19:29:41.262955607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 19:29:41.263303 containerd[1502]: time="2025-02-13T19:29:41.263231795Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 19:29:41.263303 containerd[1502]: time="2025-02-13T19:29:41.263275287Z" level=info msg="Connect containerd service" Feb 13 19:29:41.263303 containerd[1502]: time="2025-02-13T19:29:41.263306154Z" level=info msg="using legacy CRI server" Feb 13 19:29:41.263303 containerd[1502]: time="2025-02-13T19:29:41.263330210Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 19:29:41.263654 containerd[1502]: time="2025-02-13T19:29:41.263424667Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 19:29:41.264084 containerd[1502]: time="2025-02-13T19:29:41.264053857Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:29:41.264236 
containerd[1502]: time="2025-02-13T19:29:41.264202345Z" level=info msg="Start subscribing containerd event" Feb 13 19:29:41.264273 containerd[1502]: time="2025-02-13T19:29:41.264242340Z" level=info msg="Start recovering state" Feb 13 19:29:41.264337 containerd[1502]: time="2025-02-13T19:29:41.264303966Z" level=info msg="Start event monitor" Feb 13 19:29:41.264359 containerd[1502]: time="2025-02-13T19:29:41.264336837Z" level=info msg="Start snapshots syncer" Feb 13 19:29:41.264359 containerd[1502]: time="2025-02-13T19:29:41.264346295Z" level=info msg="Start cni network conf syncer for default" Feb 13 19:29:41.264359 containerd[1502]: time="2025-02-13T19:29:41.264353639Z" level=info msg="Start streaming server" Feb 13 19:29:41.265122 containerd[1502]: time="2025-02-13T19:29:41.264715207Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 19:29:41.265122 containerd[1502]: time="2025-02-13T19:29:41.264809975Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 19:29:41.265122 containerd[1502]: time="2025-02-13T19:29:41.264869306Z" level=info msg="containerd successfully booted in 0.044101s" Feb 13 19:29:41.264925 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 19:29:41.313004 tar[1500]: linux-amd64/LICENSE Feb 13 19:29:41.313101 tar[1500]: linux-amd64/README.md Feb 13 19:29:41.330617 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 19:29:41.380441 systemd[1566]: Queued start job for default target default.target. Feb 13 19:29:41.390580 systemd[1566]: Created slice app.slice - User Application Slice. Feb 13 19:29:41.390605 systemd[1566]: Reached target paths.target - Paths. Feb 13 19:29:41.390618 systemd[1566]: Reached target timers.target - Timers. Feb 13 19:29:41.392098 systemd[1566]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 19:29:41.404251 systemd[1566]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 19:29:41.404382 systemd[1566]: Reached target sockets.target - Sockets. Feb 13 19:29:41.404400 systemd[1566]: Reached target basic.target - Basic System. Feb 13 19:29:41.404435 systemd[1566]: Reached target default.target - Main User Target. Feb 13 19:29:41.404466 systemd[1566]: Startup finished in 133ms. Feb 13 19:29:41.404913 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 19:29:41.407364 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 19:29:41.420406 systemd-networkd[1430]: eth0: Gained IPv6LL Feb 13 19:29:41.423727 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 19:29:41.425390 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 19:29:41.436585 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 19:29:41.438818 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:29:41.440923 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 19:29:41.458960 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 19:29:41.459182 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 19:29:41.460843 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 19:29:41.463719 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
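Note: the CRI plugin configuration dumped above includes Options:map[SystemdCgroup:true] for the runc runtime. In a containerd 1.7 /etc/containerd/config.toml that setting corresponds roughly to the fragment below (a sketch, not the node's actual file):

    # enable the systemd cgroup driver for the runc shim, as reflected in the logged CRI config
    version = 2
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = true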
Feb 13 19:29:41.471699 systemd[1]: Started sshd@1-10.0.0.116:22-10.0.0.1:59656.service - OpenSSH per-connection server daemon (10.0.0.1:59656). Feb 13 19:29:41.568735 sshd[1598]: Accepted publickey for core from 10.0.0.1 port 59656 ssh2: RSA SHA256:ENn9hOvI2hLUXcV6iHA8gc9Z4CTEPvGDMkoVtxIuYbg Feb 13 19:29:41.570336 sshd-session[1598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:29:41.574884 systemd-logind[1485]: New session 2 of user core. Feb 13 19:29:41.581497 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 19:29:41.638765 sshd[1600]: Connection closed by 10.0.0.1 port 59656 Feb 13 19:29:41.641098 sshd-session[1598]: pam_unix(sshd:session): session closed for user core Feb 13 19:29:41.649039 systemd[1]: sshd@1-10.0.0.116:22-10.0.0.1:59656.service: Deactivated successfully. Feb 13 19:29:41.650958 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 19:29:41.653546 systemd-logind[1485]: Session 2 logged out. Waiting for processes to exit. Feb 13 19:29:41.662698 systemd[1]: Started sshd@2-10.0.0.116:22-10.0.0.1:59658.service - OpenSSH per-connection server daemon (10.0.0.1:59658). Feb 13 19:29:41.665264 systemd-logind[1485]: Removed session 2. Feb 13 19:29:41.701727 sshd[1605]: Accepted publickey for core from 10.0.0.1 port 59658 ssh2: RSA SHA256:ENn9hOvI2hLUXcV6iHA8gc9Z4CTEPvGDMkoVtxIuYbg Feb 13 19:29:41.703241 sshd-session[1605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:29:41.707849 systemd-logind[1485]: New session 3 of user core. Feb 13 19:29:41.721480 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 19:29:41.778632 sshd[1607]: Connection closed by 10.0.0.1 port 59658 Feb 13 19:29:41.779048 sshd-session[1605]: pam_unix(sshd:session): session closed for user core Feb 13 19:29:41.783139 systemd[1]: sshd@2-10.0.0.116:22-10.0.0.1:59658.service: Deactivated successfully. Feb 13 19:29:41.784970 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 19:29:41.785646 systemd-logind[1485]: Session 3 logged out. Waiting for processes to exit. Feb 13 19:29:41.786498 systemd-logind[1485]: Removed session 3. Feb 13 19:29:42.655249 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:29:42.656989 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 19:29:42.658479 systemd[1]: Startup finished in 701ms (kernel) + 5.085s (initrd) + 4.213s (userspace) = 10.000s. Feb 13 19:29:42.659991 (kubelet)[1616]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:29:42.669517 agetty[1559]: failed to open credentials directory Feb 13 19:29:42.669546 agetty[1560]: failed to open credentials directory Feb 13 19:29:43.047075 kubelet[1616]: E0213 19:29:43.046946 1616 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:29:43.051665 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:29:43.051861 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:29:43.052185 systemd[1]: kubelet.service: Consumed 1.498s CPU time. 
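Note: the kubelet failure above is expected at this stage, since /var/lib/kubelet/config.yaml does not exist yet and kubelet refuses to start without it. That file is normally written by kubeadm during init or join; a minimal KubeletConfiguration of the kind it expects looks roughly like this (illustrative values only, not taken from this node):

    # /var/lib/kubelet/config.yaml (illustrative sketch)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    staticPodPath: /etc/kubernetes/manifests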
Feb 13 19:29:51.794286 systemd[1]: Started sshd@3-10.0.0.116:22-10.0.0.1:36718.service - OpenSSH per-connection server daemon (10.0.0.1:36718). Feb 13 19:29:51.835646 sshd[1630]: Accepted publickey for core from 10.0.0.1 port 36718 ssh2: RSA SHA256:ENn9hOvI2hLUXcV6iHA8gc9Z4CTEPvGDMkoVtxIuYbg Feb 13 19:29:51.837079 sshd-session[1630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:29:51.841134 systemd-logind[1485]: New session 4 of user core. Feb 13 19:29:51.854487 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 19:29:51.907812 sshd[1632]: Connection closed by 10.0.0.1 port 36718 Feb 13 19:29:51.908201 sshd-session[1630]: pam_unix(sshd:session): session closed for user core Feb 13 19:29:51.914689 systemd[1]: sshd@3-10.0.0.116:22-10.0.0.1:36718.service: Deactivated successfully. Feb 13 19:29:51.916452 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 19:29:51.917925 systemd-logind[1485]: Session 4 logged out. Waiting for processes to exit. Feb 13 19:29:51.919098 systemd[1]: Started sshd@4-10.0.0.116:22-10.0.0.1:36720.service - OpenSSH per-connection server daemon (10.0.0.1:36720). Feb 13 19:29:51.919838 systemd-logind[1485]: Removed session 4. Feb 13 19:29:51.961338 sshd[1637]: Accepted publickey for core from 10.0.0.1 port 36720 ssh2: RSA SHA256:ENn9hOvI2hLUXcV6iHA8gc9Z4CTEPvGDMkoVtxIuYbg Feb 13 19:29:51.962852 sshd-session[1637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:29:51.966853 systemd-logind[1485]: New session 5 of user core. Feb 13 19:29:51.976464 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 19:29:52.025583 sshd[1639]: Connection closed by 10.0.0.1 port 36720 Feb 13 19:29:52.026011 sshd-session[1637]: pam_unix(sshd:session): session closed for user core Feb 13 19:29:52.037134 systemd[1]: sshd@4-10.0.0.116:22-10.0.0.1:36720.service: Deactivated successfully. Feb 13 19:29:52.038970 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 19:29:52.040571 systemd-logind[1485]: Session 5 logged out. Waiting for processes to exit. Feb 13 19:29:52.055557 systemd[1]: Started sshd@5-10.0.0.116:22-10.0.0.1:36726.service - OpenSSH per-connection server daemon (10.0.0.1:36726). Feb 13 19:29:52.056652 systemd-logind[1485]: Removed session 5. Feb 13 19:29:52.092542 sshd[1644]: Accepted publickey for core from 10.0.0.1 port 36726 ssh2: RSA SHA256:ENn9hOvI2hLUXcV6iHA8gc9Z4CTEPvGDMkoVtxIuYbg Feb 13 19:29:52.093796 sshd-session[1644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:29:52.097224 systemd-logind[1485]: New session 6 of user core. Feb 13 19:29:52.110437 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 19:29:52.161834 sshd[1646]: Connection closed by 10.0.0.1 port 36726 Feb 13 19:29:52.162175 sshd-session[1644]: pam_unix(sshd:session): session closed for user core Feb 13 19:29:52.176849 systemd[1]: sshd@5-10.0.0.116:22-10.0.0.1:36726.service: Deactivated successfully. Feb 13 19:29:52.178482 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 19:29:52.180194 systemd-logind[1485]: Session 6 logged out. Waiting for processes to exit. Feb 13 19:29:52.190551 systemd[1]: Started sshd@6-10.0.0.116:22-10.0.0.1:36730.service - OpenSSH per-connection server daemon (10.0.0.1:36730). Feb 13 19:29:52.191404 systemd-logind[1485]: Removed session 6. 
Feb 13 19:29:52.227872 sshd[1651]: Accepted publickey for core from 10.0.0.1 port 36730 ssh2: RSA SHA256:ENn9hOvI2hLUXcV6iHA8gc9Z4CTEPvGDMkoVtxIuYbg Feb 13 19:29:52.229264 sshd-session[1651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:29:52.232762 systemd-logind[1485]: New session 7 of user core. Feb 13 19:29:52.251438 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 19:29:52.308079 sudo[1654]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 19:29:52.308430 sudo[1654]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:29:52.329141 sudo[1654]: pam_unix(sudo:session): session closed for user root Feb 13 19:29:52.330604 sshd[1653]: Connection closed by 10.0.0.1 port 36730 Feb 13 19:29:52.331067 sshd-session[1651]: pam_unix(sshd:session): session closed for user core Feb 13 19:29:52.346058 systemd[1]: sshd@6-10.0.0.116:22-10.0.0.1:36730.service: Deactivated successfully. Feb 13 19:29:52.347815 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 19:29:52.349392 systemd-logind[1485]: Session 7 logged out. Waiting for processes to exit. Feb 13 19:29:52.350803 systemd[1]: Started sshd@7-10.0.0.116:22-10.0.0.1:36740.service - OpenSSH per-connection server daemon (10.0.0.1:36740). Feb 13 19:29:52.351593 systemd-logind[1485]: Removed session 7. Feb 13 19:29:52.392610 sshd[1659]: Accepted publickey for core from 10.0.0.1 port 36740 ssh2: RSA SHA256:ENn9hOvI2hLUXcV6iHA8gc9Z4CTEPvGDMkoVtxIuYbg Feb 13 19:29:52.394019 sshd-session[1659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:29:52.397627 systemd-logind[1485]: New session 8 of user core. Feb 13 19:29:52.407418 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 19:29:52.461862 sudo[1663]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 19:29:52.462310 sudo[1663]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:29:52.465761 sudo[1663]: pam_unix(sudo:session): session closed for user root Feb 13 19:29:52.471936 sudo[1662]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 19:29:52.472271 sudo[1662]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:29:52.496719 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:29:52.526101 augenrules[1685]: No rules Feb 13 19:29:52.528006 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:29:52.528262 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:29:52.529767 sudo[1662]: pam_unix(sudo:session): session closed for user root Feb 13 19:29:52.531348 sshd[1661]: Connection closed by 10.0.0.1 port 36740 Feb 13 19:29:52.531714 sshd-session[1659]: pam_unix(sshd:session): session closed for user core Feb 13 19:29:52.545090 systemd[1]: sshd@7-10.0.0.116:22-10.0.0.1:36740.service: Deactivated successfully. Feb 13 19:29:52.546818 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 19:29:52.548426 systemd-logind[1485]: Session 8 logged out. Waiting for processes to exit. Feb 13 19:29:52.549667 systemd[1]: Started sshd@8-10.0.0.116:22-10.0.0.1:36742.service - OpenSSH per-connection server daemon (10.0.0.1:36742). Feb 13 19:29:52.550400 systemd-logind[1485]: Removed session 8. 
Feb 13 19:29:52.592027 sshd[1693]: Accepted publickey for core from 10.0.0.1 port 36742 ssh2: RSA SHA256:ENn9hOvI2hLUXcV6iHA8gc9Z4CTEPvGDMkoVtxIuYbg Feb 13 19:29:52.593577 sshd-session[1693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:29:52.597563 systemd-logind[1485]: New session 9 of user core. Feb 13 19:29:52.607444 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 19:29:52.660436 sudo[1696]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 19:29:52.660767 sudo[1696]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:29:53.207123 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 19:29:53.223514 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:29:53.225751 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 19:29:53.231300 (dockerd)[1717]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 19:29:53.443081 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:29:53.447655 (kubelet)[1725]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:29:53.529380 kubelet[1725]: E0213 19:29:53.529242 1725 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:29:53.535954 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:29:53.536168 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:29:53.706728 dockerd[1717]: time="2025-02-13T19:29:53.706656909Z" level=info msg="Starting up" Feb 13 19:29:54.063558 dockerd[1717]: time="2025-02-13T19:29:54.063498915Z" level=info msg="Loading containers: start." Feb 13 19:29:54.245348 kernel: Initializing XFRM netlink socket Feb 13 19:29:54.349678 systemd-networkd[1430]: docker0: Link UP Feb 13 19:29:54.394189 dockerd[1717]: time="2025-02-13T19:29:54.394120088Z" level=info msg="Loading containers: done." Feb 13 19:29:54.418149 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1136262063-merged.mount: Deactivated successfully. Feb 13 19:29:54.436460 dockerd[1717]: time="2025-02-13T19:29:54.436395730Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 19:29:54.436597 dockerd[1717]: time="2025-02-13T19:29:54.436568013Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Feb 13 19:29:54.436772 dockerd[1717]: time="2025-02-13T19:29:54.436745696Z" level=info msg="Daemon has completed initialization" Feb 13 19:29:54.477513 dockerd[1717]: time="2025-02-13T19:29:54.477435905Z" level=info msg="API listen on /run/docker.sock" Feb 13 19:29:54.477729 systemd[1]: Started docker.service - Docker Application Container Engine. 
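Note: dockerd above reports storage-driver=overlay2 without any configuration file being mentioned. Making that choice explicit would be a one-line /etc/docker/daemon.json; the snippet below is illustrative, since no daemon.json appears in this log:

    {
      "storage-driver": "overlay2"
    }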
Feb 13 19:29:55.258001 containerd[1502]: time="2025-02-13T19:29:55.257958463Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\"" Feb 13 19:29:55.896668 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount552845968.mount: Deactivated successfully. Feb 13 19:29:57.044384 containerd[1502]: time="2025-02-13T19:29:57.044327052Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:29:57.044992 containerd[1502]: time="2025-02-13T19:29:57.044930734Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.6: active requests=0, bytes read=27976588" Feb 13 19:29:57.046169 containerd[1502]: time="2025-02-13T19:29:57.046137757Z" level=info msg="ImageCreate event name:\"sha256:1372127edc9da70a68712c470a11f621ed256e8be0dfec4c4d58ca09109352a3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:29:57.049218 containerd[1502]: time="2025-02-13T19:29:57.049174402Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:29:57.050169 containerd[1502]: time="2025-02-13T19:29:57.050139803Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.6\" with image id \"sha256:1372127edc9da70a68712c470a11f621ed256e8be0dfec4c4d58ca09109352a3\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\", size \"27973388\" in 1.792142867s" Feb 13 19:29:57.050228 containerd[1502]: time="2025-02-13T19:29:57.050171592Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\" returns image reference \"sha256:1372127edc9da70a68712c470a11f621ed256e8be0dfec4c4d58ca09109352a3\"" Feb 13 19:29:57.053642 containerd[1502]: time="2025-02-13T19:29:57.053600753Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\"" Feb 13 19:29:58.680535 containerd[1502]: time="2025-02-13T19:29:58.680486334Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:29:58.681362 containerd[1502]: time="2025-02-13T19:29:58.681325047Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.6: active requests=0, bytes read=24708193" Feb 13 19:29:58.682605 containerd[1502]: time="2025-02-13T19:29:58.682559683Z" level=info msg="ImageCreate event name:\"sha256:5f23cb154eea1f587685082e456e95e5480c1d459849b1c634119d7de897e34e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:29:58.685669 containerd[1502]: time="2025-02-13T19:29:58.685627025Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:29:58.686761 containerd[1502]: time="2025-02-13T19:29:58.686708323Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.6\" with image id \"sha256:5f23cb154eea1f587685082e456e95e5480c1d459849b1c634119d7de897e34e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\", size \"26154739\" in 1.63306494s" Feb 13 
19:29:58.686761 containerd[1502]: time="2025-02-13T19:29:58.686740363Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\" returns image reference \"sha256:5f23cb154eea1f587685082e456e95e5480c1d459849b1c634119d7de897e34e\"" Feb 13 19:29:58.687346 containerd[1502]: time="2025-02-13T19:29:58.687282850Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\"" Feb 13 19:30:00.186088 containerd[1502]: time="2025-02-13T19:30:00.186028093Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:30:00.187054 containerd[1502]: time="2025-02-13T19:30:00.186951434Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.6: active requests=0, bytes read=18652425" Feb 13 19:30:00.188288 containerd[1502]: time="2025-02-13T19:30:00.188256512Z" level=info msg="ImageCreate event name:\"sha256:9195ad415d31e3c2df6dddf4603bc56915b71486f514455bc3b5389b9b0ed9c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:30:00.194056 containerd[1502]: time="2025-02-13T19:30:00.192935196Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:30:00.196277 containerd[1502]: time="2025-02-13T19:30:00.196212091Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.6\" with image id \"sha256:9195ad415d31e3c2df6dddf4603bc56915b71486f514455bc3b5389b9b0ed9c1\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\", size \"20098989\" in 1.508899555s" Feb 13 19:30:00.196354 containerd[1502]: time="2025-02-13T19:30:00.196283976Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\" returns image reference \"sha256:9195ad415d31e3c2df6dddf4603bc56915b71486f514455bc3b5389b9b0ed9c1\"" Feb 13 19:30:00.196975 containerd[1502]: time="2025-02-13T19:30:00.196945366Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\"" Feb 13 19:30:01.279351 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1605231092.mount: Deactivated successfully. 
Feb 13 19:30:02.078941 containerd[1502]: time="2025-02-13T19:30:02.078885155Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:30:02.079604 containerd[1502]: time="2025-02-13T19:30:02.079574177Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.6: active requests=0, bytes read=30229108" Feb 13 19:30:02.080733 containerd[1502]: time="2025-02-13T19:30:02.080658611Z" level=info msg="ImageCreate event name:\"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:30:02.082628 containerd[1502]: time="2025-02-13T19:30:02.082588140Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:30:02.083146 containerd[1502]: time="2025-02-13T19:30:02.083113645Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.6\" with image id \"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\", repo tag \"registry.k8s.io/kube-proxy:v1.31.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\", size \"30228127\" in 1.88613704s" Feb 13 19:30:02.083146 containerd[1502]: time="2025-02-13T19:30:02.083142950Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\" returns image reference \"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\"" Feb 13 19:30:02.083726 containerd[1502]: time="2025-02-13T19:30:02.083690357Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 19:30:02.685845 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount98007677.mount: Deactivated successfully. 
Feb 13 19:30:03.549421 containerd[1502]: time="2025-02-13T19:30:03.549356774Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:30:03.550204 containerd[1502]: time="2025-02-13T19:30:03.550161263Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Feb 13 19:30:03.551397 containerd[1502]: time="2025-02-13T19:30:03.551367836Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:30:03.554267 containerd[1502]: time="2025-02-13T19:30:03.554222059Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:30:03.557898 containerd[1502]: time="2025-02-13T19:30:03.557854901Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.474119951s" Feb 13 19:30:03.557946 containerd[1502]: time="2025-02-13T19:30:03.557897732Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Feb 13 19:30:03.558567 containerd[1502]: time="2025-02-13T19:30:03.558523064Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Feb 13 19:30:03.707172 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 19:30:03.727526 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:30:03.874077 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:30:03.878567 (kubelet)[2054]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:30:04.075537 kubelet[2054]: E0213 19:30:04.075484 2054 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:30:04.079697 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:30:04.079902 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:30:04.261892 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3738740765.mount: Deactivated successfully. 
Feb 13 19:30:04.268049 containerd[1502]: time="2025-02-13T19:30:04.268006118Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:30:04.268887 containerd[1502]: time="2025-02-13T19:30:04.268827218Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Feb 13 19:30:04.269937 containerd[1502]: time="2025-02-13T19:30:04.269897766Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:30:04.272372 containerd[1502]: time="2025-02-13T19:30:04.272340427Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:30:04.273239 containerd[1502]: time="2025-02-13T19:30:04.273192364Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 714.631168ms" Feb 13 19:30:04.273239 containerd[1502]: time="2025-02-13T19:30:04.273235886Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Feb 13 19:30:04.273865 containerd[1502]: time="2025-02-13T19:30:04.273837685Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Feb 13 19:30:04.731534 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount454953538.mount: Deactivated successfully. Feb 13 19:30:06.805051 containerd[1502]: time="2025-02-13T19:30:06.804975391Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:30:06.805805 containerd[1502]: time="2025-02-13T19:30:06.805757428Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779973" Feb 13 19:30:06.807190 containerd[1502]: time="2025-02-13T19:30:06.807140732Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:30:06.810139 containerd[1502]: time="2025-02-13T19:30:06.810091907Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:30:06.811382 containerd[1502]: time="2025-02-13T19:30:06.811311444Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.537439295s" Feb 13 19:30:06.811382 containerd[1502]: time="2025-02-13T19:30:06.811372178Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Feb 13 19:30:09.117672 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
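Note: the control-plane images above (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns, pause and etcd) are pulled through containerd while kubelet itself is still failing. The same images could be pre-pulled by hand with containerd's ctr CLI in the k8s.io namespace used by the CRI plugin, for example:

    # illustrative manual pre-pull of the same images logged above
    ctr --namespace k8s.io images pull registry.k8s.io/kube-apiserver:v1.31.6
    ctr --namespace k8s.io images pull registry.k8s.io/etcd:3.5.15-0
    ctr --namespace k8s.io images pull registry.k8s.io/pause:3.10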
Feb 13 19:30:09.130507 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:30:09.154894 systemd[1]: Reloading requested from client PID 2150 ('systemctl') (unit session-9.scope)... Feb 13 19:30:09.154912 systemd[1]: Reloading... Feb 13 19:30:09.244813 zram_generator::config[2195]: No configuration found. Feb 13 19:30:09.457759 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:30:09.533899 systemd[1]: Reloading finished in 378 ms. Feb 13 19:30:09.587915 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 19:30:09.588009 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 19:30:09.588299 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:30:09.591241 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:30:09.748404 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:30:09.753358 (kubelet)[2238]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:30:09.789186 kubelet[2238]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:30:09.789186 kubelet[2238]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 19:30:09.789186 kubelet[2238]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 19:30:09.789588 kubelet[2238]: I0213 19:30:09.789243 2238 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:30:10.155931 kubelet[2238]: I0213 19:30:10.155815 2238 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 19:30:10.155931 kubelet[2238]: I0213 19:30:10.155850 2238 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:30:10.156168 kubelet[2238]: I0213 19:30:10.156138 2238 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 19:30:10.175448 kubelet[2238]: E0213 19:30:10.175402 2238 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.116:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.116:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:30:10.176551 kubelet[2238]: I0213 19:30:10.176494 2238 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:30:10.182515 kubelet[2238]: E0213 19:30:10.182482 2238 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 19:30:10.182515 kubelet[2238]: I0213 19:30:10.182510 2238 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 19:30:10.188473 kubelet[2238]: I0213 19:30:10.188444 2238 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 19:30:10.189401 kubelet[2238]: I0213 19:30:10.189374 2238 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 19:30:10.189581 kubelet[2238]: I0213 19:30:10.189545 2238 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:30:10.189750 kubelet[2238]: I0213 19:30:10.189575 2238 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 19:30:10.189750 kubelet[2238]: I0213 19:30:10.189744 2238 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:30:10.189750 kubelet[2238]: I0213 19:30:10.189753 2238 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 19:30:10.189900 kubelet[2238]: I0213 19:30:10.189875 2238 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:30:10.191217 kubelet[2238]: I0213 19:30:10.191182 2238 kubelet.go:408] "Attempting to sync node with API server" Feb 13 19:30:10.191217 kubelet[2238]: I0213 19:30:10.191204 2238 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:30:10.191385 kubelet[2238]: I0213 19:30:10.191243 2238 kubelet.go:314] "Adding apiserver pod source" Feb 13 19:30:10.191385 kubelet[2238]: I0213 19:30:10.191259 2238 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:30:10.197026 kubelet[2238]: W0213 19:30:10.196930 2238 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.116:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused Feb 13 19:30:10.197026 kubelet[2238]: W0213 19:30:10.196950 2238 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.116:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: 
connection refused Feb 13 19:30:10.197170 kubelet[2238]: E0213 19:30:10.197042 2238 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.116:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.116:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:30:10.197260 kubelet[2238]: E0213 19:30:10.197220 2238 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.116:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.116:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:30:10.197815 kubelet[2238]: I0213 19:30:10.197793 2238 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:30:10.199663 kubelet[2238]: I0213 19:30:10.199637 2238 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:30:10.201181 kubelet[2238]: W0213 19:30:10.200228 2238 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 19:30:10.201181 kubelet[2238]: I0213 19:30:10.201082 2238 server.go:1269] "Started kubelet" Feb 13 19:30:10.201935 kubelet[2238]: I0213 19:30:10.201403 2238 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:30:10.201935 kubelet[2238]: I0213 19:30:10.201620 2238 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:30:10.202066 kubelet[2238]: I0213 19:30:10.202042 2238 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:30:10.203459 kubelet[2238]: I0213 19:30:10.203440 2238 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:30:10.203848 kubelet[2238]: I0213 19:30:10.203827 2238 server.go:460] "Adding debug handlers to kubelet server" Feb 13 19:30:10.204702 kubelet[2238]: I0213 19:30:10.204387 2238 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 19:30:10.204702 kubelet[2238]: E0213 19:30:10.204497 2238 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:30:10.204702 kubelet[2238]: I0213 19:30:10.203721 2238 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 19:30:10.204702 kubelet[2238]: I0213 19:30:10.204571 2238 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 19:30:10.204702 kubelet[2238]: I0213 19:30:10.204628 2238 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:30:10.207412 kubelet[2238]: W0213 19:30:10.207371 2238 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.116:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused Feb 13 19:30:10.207528 kubelet[2238]: E0213 19:30:10.207509 2238 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://10.0.0.116:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.116:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:30:10.208967 kubelet[2238]: E0213 19:30:10.208938 2238 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.116:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.116:6443: connect: connection refused" interval="200ms" Feb 13 19:30:10.210008 kubelet[2238]: E0213 19:30:10.209980 2238 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:30:10.210865 kubelet[2238]: E0213 19:30:10.208233 2238 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.116:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.116:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823db4e3eb3fa42 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 19:30:10.201057858 +0000 UTC m=+0.443676492,LastTimestamp:2025-02-13 19:30:10.201057858 +0000 UTC m=+0.443676492,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 19:30:10.210970 kubelet[2238]: I0213 19:30:10.210929 2238 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:30:10.210970 kubelet[2238]: I0213 19:30:10.210940 2238 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:30:10.211038 kubelet[2238]: I0213 19:30:10.211019 2238 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:30:10.224166 kubelet[2238]: I0213 19:30:10.224138 2238 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:30:10.224166 kubelet[2238]: I0213 19:30:10.224157 2238 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:30:10.224166 kubelet[2238]: I0213 19:30:10.224158 2238 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:30:10.224610 kubelet[2238]: I0213 19:30:10.224174 2238 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:30:10.225376 kubelet[2238]: I0213 19:30:10.225356 2238 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 19:30:10.226181 kubelet[2238]: I0213 19:30:10.225397 2238 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:30:10.226181 kubelet[2238]: I0213 19:30:10.225418 2238 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 19:30:10.226181 kubelet[2238]: E0213 19:30:10.225462 2238 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:30:10.226181 kubelet[2238]: W0213 19:30:10.226030 2238 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.116:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused Feb 13 19:30:10.226181 kubelet[2238]: E0213 19:30:10.226072 2238 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.116:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.116:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:30:10.304794 kubelet[2238]: E0213 19:30:10.304753 2238 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:30:10.326119 kubelet[2238]: E0213 19:30:10.326079 2238 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 19:30:10.405381 kubelet[2238]: E0213 19:30:10.405302 2238 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:30:10.409966 kubelet[2238]: E0213 19:30:10.409859 2238 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.116:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.116:6443: connect: connection refused" interval="400ms" Feb 13 19:30:10.465770 kubelet[2238]: I0213 19:30:10.465722 2238 policy_none.go:49] "None policy: Start" Feb 13 19:30:10.466564 kubelet[2238]: I0213 19:30:10.466544 2238 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:30:10.466564 kubelet[2238]: I0213 19:30:10.466566 2238 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:30:10.475520 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 19:30:10.490140 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 19:30:10.493102 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Feb 13 19:30:10.506114 kubelet[2238]: E0213 19:30:10.506077 2238 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:30:10.509323 kubelet[2238]: I0213 19:30:10.509279 2238 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:30:10.509616 kubelet[2238]: I0213 19:30:10.509597 2238 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 19:30:10.509697 kubelet[2238]: I0213 19:30:10.509614 2238 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:30:10.509858 kubelet[2238]: I0213 19:30:10.509838 2238 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:30:10.510928 kubelet[2238]: E0213 19:30:10.510902 2238 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 13 19:30:10.534250 systemd[1]: Created slice kubepods-burstable-pod98eb2295280bc6da80e83f7636be329c.slice - libcontainer container kubepods-burstable-pod98eb2295280bc6da80e83f7636be329c.slice. Feb 13 19:30:10.547927 systemd[1]: Created slice kubepods-burstable-pod04cca2c455deeb5da380812dcab224d8.slice - libcontainer container kubepods-burstable-pod04cca2c455deeb5da380812dcab224d8.slice. Feb 13 19:30:10.563201 systemd[1]: Created slice kubepods-burstable-podddef8bd57ffbcdc0c6c4696d6aab5c8f.slice - libcontainer container kubepods-burstable-podddef8bd57ffbcdc0c6c4696d6aab5c8f.slice. Feb 13 19:30:10.607233 kubelet[2238]: I0213 19:30:10.607192 2238 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/04cca2c455deeb5da380812dcab224d8-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"04cca2c455deeb5da380812dcab224d8\") " pod="kube-system/kube-scheduler-localhost" Feb 13 19:30:10.607233 kubelet[2238]: I0213 19:30:10.607225 2238 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ddef8bd57ffbcdc0c6c4696d6aab5c8f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ddef8bd57ffbcdc0c6c4696d6aab5c8f\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:30:10.607233 kubelet[2238]: I0213 19:30:10.607243 2238 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:30:10.607498 kubelet[2238]: I0213 19:30:10.607259 2238 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:30:10.607498 kubelet[2238]: I0213 19:30:10.607277 2238 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 
19:30:10.607498 kubelet[2238]: I0213 19:30:10.607293 2238 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ddef8bd57ffbcdc0c6c4696d6aab5c8f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ddef8bd57ffbcdc0c6c4696d6aab5c8f\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:30:10.607498 kubelet[2238]: I0213 19:30:10.607327 2238 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ddef8bd57ffbcdc0c6c4696d6aab5c8f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ddef8bd57ffbcdc0c6c4696d6aab5c8f\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:30:10.607498 kubelet[2238]: I0213 19:30:10.607345 2238 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:30:10.607672 kubelet[2238]: I0213 19:30:10.607387 2238 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:30:10.612246 kubelet[2238]: I0213 19:30:10.612220 2238 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Feb 13 19:30:10.612640 kubelet[2238]: E0213 19:30:10.612612 2238 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.116:6443/api/v1/nodes\": dial tcp 10.0.0.116:6443: connect: connection refused" node="localhost" Feb 13 19:30:10.811357 kubelet[2238]: E0213 19:30:10.811292 2238 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.116:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.116:6443: connect: connection refused" interval="800ms" Feb 13 19:30:10.814430 kubelet[2238]: I0213 19:30:10.814410 2238 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Feb 13 19:30:10.814684 kubelet[2238]: E0213 19:30:10.814657 2238 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.116:6443/api/v1/nodes\": dial tcp 10.0.0.116:6443: connect: connection refused" node="localhost" Feb 13 19:30:10.847001 kubelet[2238]: E0213 19:30:10.846969 2238 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:30:10.847704 containerd[1502]: time="2025-02-13T19:30:10.847650771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:98eb2295280bc6da80e83f7636be329c,Namespace:kube-system,Attempt:0,}" Feb 13 19:30:10.861846 kubelet[2238]: E0213 19:30:10.861804 2238 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:30:10.862271 containerd[1502]: time="2025-02-13T19:30:10.862229671Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:04cca2c455deeb5da380812dcab224d8,Namespace:kube-system,Attempt:0,}" Feb 13 19:30:10.865539 kubelet[2238]: E0213 19:30:10.865512 2238 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:30:10.865873 containerd[1502]: time="2025-02-13T19:30:10.865839241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ddef8bd57ffbcdc0c6c4696d6aab5c8f,Namespace:kube-system,Attempt:0,}" Feb 13 19:30:11.216678 kubelet[2238]: I0213 19:30:11.216651 2238 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Feb 13 19:30:11.217023 kubelet[2238]: E0213 19:30:11.216983 2238 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.116:6443/api/v1/nodes\": dial tcp 10.0.0.116:6443: connect: connection refused" node="localhost" Feb 13 19:30:11.357805 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1611740470.mount: Deactivated successfully. Feb 13 19:30:11.363591 containerd[1502]: time="2025-02-13T19:30:11.363546509Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:30:11.367580 containerd[1502]: time="2025-02-13T19:30:11.367513910Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Feb 13 19:30:11.368445 containerd[1502]: time="2025-02-13T19:30:11.368411583Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:30:11.369364 containerd[1502]: time="2025-02-13T19:30:11.369331959Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:30:11.370348 containerd[1502]: time="2025-02-13T19:30:11.370294635Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:30:11.371185 containerd[1502]: time="2025-02-13T19:30:11.371110565Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:30:11.371994 containerd[1502]: time="2025-02-13T19:30:11.371964186Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:30:11.373678 containerd[1502]: time="2025-02-13T19:30:11.373639477Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:30:11.375695 containerd[1502]: time="2025-02-13T19:30:11.375660317Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 509.751115ms" Feb 13 19:30:11.376382 
containerd[1502]: time="2025-02-13T19:30:11.376350942Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 528.579205ms" Feb 13 19:30:11.378733 containerd[1502]: time="2025-02-13T19:30:11.378702713Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 516.39251ms" Feb 13 19:30:11.488030 containerd[1502]: time="2025-02-13T19:30:11.487832009Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:30:11.488030 containerd[1502]: time="2025-02-13T19:30:11.487888936Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:30:11.488030 containerd[1502]: time="2025-02-13T19:30:11.487900207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:30:11.488397 containerd[1502]: time="2025-02-13T19:30:11.488143383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:30:11.488845 containerd[1502]: time="2025-02-13T19:30:11.488453735Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:30:11.488932 containerd[1502]: time="2025-02-13T19:30:11.488876418Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:30:11.488932 containerd[1502]: time="2025-02-13T19:30:11.488910371Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:30:11.489036 containerd[1502]: time="2025-02-13T19:30:11.487178293Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:30:11.489036 containerd[1502]: time="2025-02-13T19:30:11.488990632Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:30:11.489036 containerd[1502]: time="2025-02-13T19:30:11.489013595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:30:11.489228 containerd[1502]: time="2025-02-13T19:30:11.489086582Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:30:11.489264 containerd[1502]: time="2025-02-13T19:30:11.489236683Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:30:11.510489 systemd[1]: Started cri-containerd-7847cb11c323ab80cc058c785ffbef07771685a74da2692e3f63ac95d59a3916.scope - libcontainer container 7847cb11c323ab80cc058c785ffbef07771685a74da2692e3f63ac95d59a3916. 
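Editor's note: the RunPodSandbox requests above are driven by static pod manifests the kubelet reads from /etc/kubernetes/manifests. To illustrate their shape only (the image tag and command are assumptions consistent with the v1.31.0 kubelet in this log, not values copied from this host), a scheduler-style manifest can be rendered from the corresponding Go API types:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// Skeleton of a static pod like kube-scheduler-localhost; the kubelet
	// turns files of this shape into the sandboxes and containers started
	// below, and mirrors them to the API server once it is reachable.
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "kube-scheduler", Namespace: "kube-system"},
		Spec: corev1.PodSpec{
			HostNetwork:       true,
			PriorityClassName: "system-node-critical",
			Containers: []corev1.Container{{
				Name:    "kube-scheduler",
				Image:   "registry.k8s.io/kube-scheduler:v1.31.0",
				Command: []string{"kube-scheduler", "--kubeconfig=/etc/kubernetes/scheduler.conf"},
			}},
		},
	}
	out, err := yaml.Marshal(&pod)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}

The priorityClassName shown here is also what the "Failed creating a mirror pod" error further down complains about, since the built-in priority classes do not exist until the API server is up.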
Feb 13 19:30:11.516217 systemd[1]: Started cri-containerd-3baabb4e1f82c76672cf5ebacf0a16bd947b6f79d892a0775b9020193d9e9f66.scope - libcontainer container 3baabb4e1f82c76672cf5ebacf0a16bd947b6f79d892a0775b9020193d9e9f66. Feb 13 19:30:11.518737 systemd[1]: Started cri-containerd-c792d447aed5eb20e6b86116093e5780a1514173b3d9066c08c74508d5d8cc9f.scope - libcontainer container c792d447aed5eb20e6b86116093e5780a1514173b3d9066c08c74508d5d8cc9f. Feb 13 19:30:11.556281 kubelet[2238]: W0213 19:30:11.556225 2238 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.116:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused Feb 13 19:30:11.556794 kubelet[2238]: E0213 19:30:11.556289 2238 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.116:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.116:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:30:11.556828 containerd[1502]: time="2025-02-13T19:30:11.556516056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:98eb2295280bc6da80e83f7636be329c,Namespace:kube-system,Attempt:0,} returns sandbox id \"c792d447aed5eb20e6b86116093e5780a1514173b3d9066c08c74508d5d8cc9f\"" Feb 13 19:30:11.559231 kubelet[2238]: E0213 19:30:11.558023 2238 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:30:11.559331 containerd[1502]: time="2025-02-13T19:30:11.558114464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:04cca2c455deeb5da380812dcab224d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"3baabb4e1f82c76672cf5ebacf0a16bd947b6f79d892a0775b9020193d9e9f66\"" Feb 13 19:30:11.559776 kubelet[2238]: E0213 19:30:11.559740 2238 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:30:11.560295 containerd[1502]: time="2025-02-13T19:30:11.560254136Z" level=info msg="CreateContainer within sandbox \"c792d447aed5eb20e6b86116093e5780a1514173b3d9066c08c74508d5d8cc9f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 19:30:11.560967 containerd[1502]: time="2025-02-13T19:30:11.560933971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ddef8bd57ffbcdc0c6c4696d6aab5c8f,Namespace:kube-system,Attempt:0,} returns sandbox id \"7847cb11c323ab80cc058c785ffbef07771685a74da2692e3f63ac95d59a3916\"" Feb 13 19:30:11.561721 containerd[1502]: time="2025-02-13T19:30:11.561533466Z" level=info msg="CreateContainer within sandbox \"3baabb4e1f82c76672cf5ebacf0a16bd947b6f79d892a0775b9020193d9e9f66\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 19:30:11.561828 kubelet[2238]: E0213 19:30:11.561802 2238 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:30:11.563472 containerd[1502]: time="2025-02-13T19:30:11.563439190Z" level=info msg="CreateContainer within sandbox \"7847cb11c323ab80cc058c785ffbef07771685a74da2692e3f63ac95d59a3916\" for 
container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 19:30:11.579187 kubelet[2238]: W0213 19:30:11.579121 2238 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.116:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused Feb 13 19:30:11.579187 kubelet[2238]: E0213 19:30:11.579175 2238 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.116:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.116:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:30:11.605913 kubelet[2238]: W0213 19:30:11.605842 2238 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.116:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused Feb 13 19:30:11.605913 kubelet[2238]: E0213 19:30:11.605901 2238 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.116:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.116:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:30:11.610497 kubelet[2238]: W0213 19:30:11.610425 2238 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.116:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused Feb 13 19:30:11.610497 kubelet[2238]: E0213 19:30:11.610487 2238 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.116:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.116:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:30:11.611889 kubelet[2238]: E0213 19:30:11.611847 2238 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.116:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.116:6443: connect: connection refused" interval="1.6s" Feb 13 19:30:11.721747 containerd[1502]: time="2025-02-13T19:30:11.721670249Z" level=info msg="CreateContainer within sandbox \"3baabb4e1f82c76672cf5ebacf0a16bd947b6f79d892a0775b9020193d9e9f66\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"cb54413567b1fac88dacba911451cc3ef612ac9ee5053b721a16fc120f5f25e0\"" Feb 13 19:30:11.722519 containerd[1502]: time="2025-02-13T19:30:11.722488504Z" level=info msg="StartContainer for \"cb54413567b1fac88dacba911451cc3ef612ac9ee5053b721a16fc120f5f25e0\"" Feb 13 19:30:11.723349 containerd[1502]: time="2025-02-13T19:30:11.723291510Z" level=info msg="CreateContainer within sandbox \"c792d447aed5eb20e6b86116093e5780a1514173b3d9066c08c74508d5d8cc9f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5150c9a75e7e72474a92e3f207298771aea2af5776f7f3a8c70336d32e427de5\"" Feb 13 19:30:11.723878 containerd[1502]: time="2025-02-13T19:30:11.723838405Z" level=info msg="StartContainer for 
\"5150c9a75e7e72474a92e3f207298771aea2af5776f7f3a8c70336d32e427de5\"" Feb 13 19:30:11.726915 containerd[1502]: time="2025-02-13T19:30:11.726852498Z" level=info msg="CreateContainer within sandbox \"7847cb11c323ab80cc058c785ffbef07771685a74da2692e3f63ac95d59a3916\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"59fa6a825a284d227a6afdecb4196ab5e119bff5d0420cbd2f1fc71ee71665ea\"" Feb 13 19:30:11.727252 containerd[1502]: time="2025-02-13T19:30:11.727225137Z" level=info msg="StartContainer for \"59fa6a825a284d227a6afdecb4196ab5e119bff5d0420cbd2f1fc71ee71665ea\"" Feb 13 19:30:11.750481 systemd[1]: Started cri-containerd-cb54413567b1fac88dacba911451cc3ef612ac9ee5053b721a16fc120f5f25e0.scope - libcontainer container cb54413567b1fac88dacba911451cc3ef612ac9ee5053b721a16fc120f5f25e0. Feb 13 19:30:11.761510 systemd[1]: Started cri-containerd-5150c9a75e7e72474a92e3f207298771aea2af5776f7f3a8c70336d32e427de5.scope - libcontainer container 5150c9a75e7e72474a92e3f207298771aea2af5776f7f3a8c70336d32e427de5. Feb 13 19:30:11.764526 systemd[1]: Started cri-containerd-59fa6a825a284d227a6afdecb4196ab5e119bff5d0420cbd2f1fc71ee71665ea.scope - libcontainer container 59fa6a825a284d227a6afdecb4196ab5e119bff5d0420cbd2f1fc71ee71665ea. Feb 13 19:30:11.800224 containerd[1502]: time="2025-02-13T19:30:11.800172960Z" level=info msg="StartContainer for \"cb54413567b1fac88dacba911451cc3ef612ac9ee5053b721a16fc120f5f25e0\" returns successfully" Feb 13 19:30:11.809178 containerd[1502]: time="2025-02-13T19:30:11.807624855Z" level=info msg="StartContainer for \"5150c9a75e7e72474a92e3f207298771aea2af5776f7f3a8c70336d32e427de5\" returns successfully" Feb 13 19:30:11.814708 containerd[1502]: time="2025-02-13T19:30:11.814641223Z" level=info msg="StartContainer for \"59fa6a825a284d227a6afdecb4196ab5e119bff5d0420cbd2f1fc71ee71665ea\" returns successfully" Feb 13 19:30:12.019671 kubelet[2238]: I0213 19:30:12.019541 2238 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Feb 13 19:30:12.233641 kubelet[2238]: E0213 19:30:12.233601 2238 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:30:12.234651 kubelet[2238]: E0213 19:30:12.234630 2238 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:30:12.236466 kubelet[2238]: E0213 19:30:12.236446 2238 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:30:13.241617 kubelet[2238]: E0213 19:30:13.241578 2238 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:30:13.301170 kubelet[2238]: E0213 19:30:13.301126 2238 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Feb 13 19:30:13.408281 kubelet[2238]: I0213 19:30:13.408220 2238 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Feb 13 19:30:13.408281 kubelet[2238]: E0213 19:30:13.408276 2238 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Feb 13 19:30:13.449686 kubelet[2238]: E0213 
19:30:13.448300 2238 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1823db4e3eb3fa42 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 19:30:10.201057858 +0000 UTC m=+0.443676492,LastTimestamp:2025-02-13 19:30:10.201057858 +0000 UTC m=+0.443676492,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 19:30:13.456144 kubelet[2238]: E0213 19:30:13.456101 2238 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:30:13.501609 kubelet[2238]: E0213 19:30:13.501407 2238 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1823db4e3f3bee29 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 19:30:10.209967657 +0000 UTC m=+0.452586281,LastTimestamp:2025-02-13 19:30:10.209967657 +0000 UTC m=+0.452586281,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 19:30:13.555080 kubelet[2238]: E0213 19:30:13.554979 2238 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1823db4e400868ba default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 19:30:10.223368378 +0000 UTC m=+0.465986992,LastTimestamp:2025-02-13 19:30:10.223368378 +0000 UTC m=+0.465986992,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 19:30:13.557150 kubelet[2238]: E0213 19:30:13.557105 2238 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:30:13.657886 kubelet[2238]: E0213 19:30:13.657744 2238 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:30:13.758188 kubelet[2238]: E0213 19:30:13.758028 2238 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:30:14.199693 kubelet[2238]: I0213 19:30:14.199565 2238 apiserver.go:52] "Watching apiserver" Feb 13 19:30:14.205761 kubelet[2238]: I0213 19:30:14.205716 2238 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 19:30:14.277333 kubelet[2238]: E0213 19:30:14.277282 2238 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name 
system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Feb 13 19:30:14.277752 kubelet[2238]: E0213 19:30:14.277556 2238 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:30:15.188333 systemd[1]: Reloading requested from client PID 2516 ('systemctl') (unit session-9.scope)... Feb 13 19:30:15.188347 systemd[1]: Reloading... Feb 13 19:30:15.276794 zram_generator::config[2558]: No configuration found. Feb 13 19:30:15.385631 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:30:15.477659 systemd[1]: Reloading finished in 288 ms. Feb 13 19:30:15.525645 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:30:15.552651 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:30:15.552922 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:30:15.566521 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:30:15.713091 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:30:15.717682 (kubelet)[2600]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:30:15.766856 kubelet[2600]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:30:15.766856 kubelet[2600]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 19:30:15.766856 kubelet[2600]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:30:15.766856 kubelet[2600]: I0213 19:30:15.766831 2600 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:30:15.772683 kubelet[2600]: I0213 19:30:15.772634 2600 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 19:30:15.772683 kubelet[2600]: I0213 19:30:15.772654 2600 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:30:15.774630 kubelet[2600]: I0213 19:30:15.772825 2600 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 19:30:15.774630 kubelet[2600]: I0213 19:30:15.774606 2600 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
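Editor's note: "Client rotation is on" above means the restarted kubelet found an already-bootstrapped client certificate bundle at /var/lib/kubelet/pki/kubelet-client-current.pem (the path is taken from the log line above). A small standard-library sketch for inspecting that bundle's subject and expiry:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-client-current.pem")
	if err != nil {
		panic(err)
	}
	// The file bundles the client certificate and key; walk the PEM blocks
	// and decode only the certificate ones.
	for block, rest := pem.Decode(data); block != nil; block, rest = pem.Decode(rest) {
		if block.Type != "CERTIFICATE" {
			continue
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		fmt.Printf("subject=%s notAfter=%s\n", cert.Subject, cert.NotAfter)
	}
}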
Feb 13 19:30:15.777877 kubelet[2600]: I0213 19:30:15.777850 2600 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:30:15.781277 kubelet[2600]: E0213 19:30:15.781241 2600 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 19:30:15.781365 kubelet[2600]: I0213 19:30:15.781282 2600 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 19:30:15.786170 kubelet[2600]: I0213 19:30:15.786138 2600 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 19:30:15.786305 kubelet[2600]: I0213 19:30:15.786252 2600 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 19:30:15.786430 kubelet[2600]: I0213 19:30:15.786395 2600 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:30:15.786595 kubelet[2600]: I0213 19:30:15.786424 2600 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 19:30:15.786678 kubelet[2600]: I0213 19:30:15.786595 2600 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:30:15.786678 kubelet[2600]: I0213 19:30:15.786605 2600 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 19:30:15.786678 kubelet[2600]: I0213 19:30:15.786634 2600 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:30:15.786760 kubelet[2600]: I0213 19:30:15.786747 2600 kubelet.go:408] "Attempting to sync node with API server" Feb 13 19:30:15.786784 kubelet[2600]: I0213 19:30:15.786762 2600 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:30:15.786809 kubelet[2600]: I0213 19:30:15.786798 
2600 kubelet.go:314] "Adding apiserver pod source" Feb 13 19:30:15.786830 kubelet[2600]: I0213 19:30:15.786813 2600 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:30:15.789965 kubelet[2600]: I0213 19:30:15.788517 2600 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:30:15.789965 kubelet[2600]: I0213 19:30:15.789207 2600 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:30:15.789965 kubelet[2600]: I0213 19:30:15.789862 2600 server.go:1269] "Started kubelet" Feb 13 19:30:15.794413 kubelet[2600]: I0213 19:30:15.794355 2600 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:30:15.796944 kubelet[2600]: I0213 19:30:15.796864 2600 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:30:15.796944 kubelet[2600]: I0213 19:30:15.794551 2600 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:30:15.799780 kubelet[2600]: I0213 19:30:15.799596 2600 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:30:15.799972 kubelet[2600]: I0213 19:30:15.799936 2600 server.go:460] "Adding debug handlers to kubelet server" Feb 13 19:30:15.800623 kubelet[2600]: I0213 19:30:15.800592 2600 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 19:30:15.801517 kubelet[2600]: I0213 19:30:15.801505 2600 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 19:30:15.802777 kubelet[2600]: I0213 19:30:15.802762 2600 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 19:30:15.803050 kubelet[2600]: I0213 19:30:15.803035 2600 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:30:15.803578 kubelet[2600]: E0213 19:30:15.803552 2600 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:30:15.803694 kubelet[2600]: I0213 19:30:15.803666 2600 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:30:15.804118 kubelet[2600]: I0213 19:30:15.803961 2600 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:30:15.805181 kubelet[2600]: I0213 19:30:15.805138 2600 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:30:15.815189 kubelet[2600]: I0213 19:30:15.815026 2600 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:30:15.816961 kubelet[2600]: I0213 19:30:15.816691 2600 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 19:30:15.816961 kubelet[2600]: I0213 19:30:15.816734 2600 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:30:15.816961 kubelet[2600]: I0213 19:30:15.816753 2600 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 19:30:15.816961 kubelet[2600]: E0213 19:30:15.816803 2600 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:30:15.841498 kubelet[2600]: I0213 19:30:15.841477 2600 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:30:15.841605 kubelet[2600]: I0213 19:30:15.841594 2600 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:30:15.841661 kubelet[2600]: I0213 19:30:15.841653 2600 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:30:15.841844 kubelet[2600]: I0213 19:30:15.841831 2600 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 19:30:15.841911 kubelet[2600]: I0213 19:30:15.841890 2600 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 19:30:15.841953 kubelet[2600]: I0213 19:30:15.841946 2600 policy_none.go:49] "None policy: Start" Feb 13 19:30:15.842476 kubelet[2600]: I0213 19:30:15.842451 2600 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:30:15.842544 kubelet[2600]: I0213 19:30:15.842533 2600 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:30:15.842743 kubelet[2600]: I0213 19:30:15.842732 2600 state_mem.go:75] "Updated machine memory state" Feb 13 19:30:15.847388 kubelet[2600]: I0213 19:30:15.847354 2600 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:30:15.847577 kubelet[2600]: I0213 19:30:15.847552 2600 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 19:30:15.847669 kubelet[2600]: I0213 19:30:15.847576 2600 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:30:15.847804 kubelet[2600]: I0213 19:30:15.847786 2600 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:30:15.954812 kubelet[2600]: I0213 19:30:15.954774 2600 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Feb 13 19:30:15.960272 kubelet[2600]: I0213 19:30:15.960238 2600 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Feb 13 19:30:15.960383 kubelet[2600]: I0213 19:30:15.960363 2600 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Feb 13 19:30:16.104043 kubelet[2600]: I0213 19:30:16.103907 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ddef8bd57ffbcdc0c6c4696d6aab5c8f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ddef8bd57ffbcdc0c6c4696d6aab5c8f\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:30:16.104043 kubelet[2600]: I0213 19:30:16.103942 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ddef8bd57ffbcdc0c6c4696d6aab5c8f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ddef8bd57ffbcdc0c6c4696d6aab5c8f\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:30:16.104043 kubelet[2600]: I0213 19:30:16.103965 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:30:16.104043 kubelet[2600]: I0213 19:30:16.104020 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:30:16.104043 kubelet[2600]: I0213 19:30:16.104046 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/04cca2c455deeb5da380812dcab224d8-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"04cca2c455deeb5da380812dcab224d8\") " pod="kube-system/kube-scheduler-localhost" Feb 13 19:30:16.104293 kubelet[2600]: I0213 19:30:16.104104 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ddef8bd57ffbcdc0c6c4696d6aab5c8f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ddef8bd57ffbcdc0c6c4696d6aab5c8f\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:30:16.104293 kubelet[2600]: I0213 19:30:16.104146 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:30:16.104293 kubelet[2600]: I0213 19:30:16.104163 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:30:16.104293 kubelet[2600]: I0213 19:30:16.104179 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/98eb2295280bc6da80e83f7636be329c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"98eb2295280bc6da80e83f7636be329c\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:30:16.221688 kubelet[2600]: E0213 19:30:16.221641 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:30:16.224079 kubelet[2600]: E0213 19:30:16.224053 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:30:16.225975 kubelet[2600]: E0213 19:30:16.225937 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:30:16.787730 kubelet[2600]: I0213 19:30:16.787664 2600 apiserver.go:52] "Watching apiserver" Feb 13 19:30:16.803849 kubelet[2600]: I0213 19:30:16.803809 2600 
desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 19:30:16.827388 kubelet[2600]: E0213 19:30:16.827350 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:30:16.919002 kubelet[2600]: E0213 19:30:16.918937 2600 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Feb 13 19:30:16.920173 kubelet[2600]: E0213 19:30:16.919212 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:30:16.920281 kubelet[2600]: E0213 19:30:16.920243 2600 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 13 19:30:16.920430 kubelet[2600]: E0213 19:30:16.920404 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:30:16.970073 kubelet[2600]: I0213 19:30:16.969987 2600 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.969962692 podStartE2EDuration="1.969962692s" podCreationTimestamp="2025-02-13 19:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:30:16.919134244 +0000 UTC m=+1.197165561" watchObservedRunningTime="2025-02-13 19:30:16.969962692 +0000 UTC m=+1.247994008" Feb 13 19:30:17.011578 kubelet[2600]: I0213 19:30:17.011525 2600 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.011506415 podStartE2EDuration="2.011506415s" podCreationTimestamp="2025-02-13 19:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:30:17.010877935 +0000 UTC m=+1.288909252" watchObservedRunningTime="2025-02-13 19:30:17.011506415 +0000 UTC m=+1.289537731" Feb 13 19:30:17.011794 kubelet[2600]: I0213 19:30:17.011619 2600 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.011614261 podStartE2EDuration="2.011614261s" podCreationTimestamp="2025-02-13 19:30:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:30:16.970087089 +0000 UTC m=+1.248118405" watchObservedRunningTime="2025-02-13 19:30:17.011614261 +0000 UTC m=+1.289645577" Feb 13 19:30:17.829174 kubelet[2600]: E0213 19:30:17.829125 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:30:17.829918 kubelet[2600]: E0213 19:30:17.829738 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:30:18.972941 kubelet[2600]: E0213 19:30:18.972873 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:30:20.477208 kubelet[2600]: I0213 19:30:20.477171 2600 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 19:30:20.477628 containerd[1502]: time="2025-02-13T19:30:20.477595607Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 19:30:20.477861 kubelet[2600]: I0213 19:30:20.477783 2600 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 19:30:20.677930 kubelet[2600]: E0213 19:30:20.677885 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:30:20.794691 sudo[1696]: pam_unix(sudo:session): session closed for user root Feb 13 19:30:20.796370 sshd[1695]: Connection closed by 10.0.0.1 port 36742 Feb 13 19:30:20.797007 sshd-session[1693]: pam_unix(sshd:session): session closed for user core Feb 13 19:30:20.801233 systemd[1]: sshd@8-10.0.0.116:22-10.0.0.1:36742.service: Deactivated successfully. Feb 13 19:30:20.803488 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 19:30:20.803685 systemd[1]: session-9.scope: Consumed 4.728s CPU time, 153.2M memory peak, 0B memory swap peak. Feb 13 19:30:20.804167 systemd-logind[1485]: Session 9 logged out. Waiting for processes to exit. Feb 13 19:30:20.805088 systemd-logind[1485]: Removed session 9. Feb 13 19:30:21.085881 systemd[1]: Created slice kubepods-besteffort-podd7e890fd_16f2_430d_9efd_1b271fa2d073.slice - libcontainer container kubepods-besteffort-podd7e890fd_16f2_430d_9efd_1b271fa2d073.slice. 
Feb 13 19:30:21.190411 kubelet[2600]: I0213 19:30:21.190341 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mds5v\" (UniqueName: \"kubernetes.io/projected/d7e890fd-16f2-430d-9efd-1b271fa2d073-kube-api-access-mds5v\") pod \"kube-proxy-8zt7x\" (UID: \"d7e890fd-16f2-430d-9efd-1b271fa2d073\") " pod="kube-system/kube-proxy-8zt7x" Feb 13 19:30:21.190411 kubelet[2600]: I0213 19:30:21.190413 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d7e890fd-16f2-430d-9efd-1b271fa2d073-kube-proxy\") pod \"kube-proxy-8zt7x\" (UID: \"d7e890fd-16f2-430d-9efd-1b271fa2d073\") " pod="kube-system/kube-proxy-8zt7x" Feb 13 19:30:21.190581 kubelet[2600]: I0213 19:30:21.190441 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d7e890fd-16f2-430d-9efd-1b271fa2d073-xtables-lock\") pod \"kube-proxy-8zt7x\" (UID: \"d7e890fd-16f2-430d-9efd-1b271fa2d073\") " pod="kube-system/kube-proxy-8zt7x" Feb 13 19:30:21.190581 kubelet[2600]: I0213 19:30:21.190462 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d7e890fd-16f2-430d-9efd-1b271fa2d073-lib-modules\") pod \"kube-proxy-8zt7x\" (UID: \"d7e890fd-16f2-430d-9efd-1b271fa2d073\") " pod="kube-system/kube-proxy-8zt7x" Feb 13 19:30:21.295559 kubelet[2600]: E0213 19:30:21.295514 2600 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Feb 13 19:30:21.295559 kubelet[2600]: E0213 19:30:21.295557 2600 projected.go:194] Error preparing data for projected volume kube-api-access-mds5v for pod kube-system/kube-proxy-8zt7x: configmap "kube-root-ca.crt" not found Feb 13 19:30:21.295702 kubelet[2600]: E0213 19:30:21.295638 2600 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d7e890fd-16f2-430d-9efd-1b271fa2d073-kube-api-access-mds5v podName:d7e890fd-16f2-430d-9efd-1b271fa2d073 nodeName:}" failed. No retries permitted until 2025-02-13 19:30:21.795610947 +0000 UTC m=+6.073642263 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-mds5v" (UniqueName: "kubernetes.io/projected/d7e890fd-16f2-430d-9efd-1b271fa2d073-kube-api-access-mds5v") pod "kube-proxy-8zt7x" (UID: "d7e890fd-16f2-430d-9efd-1b271fa2d073") : configmap "kube-root-ca.crt" not found Feb 13 19:30:21.689990 kubelet[2600]: W0213 19:30:21.689955 2600 reflector.go:561] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'localhost' and this object Feb 13 19:30:21.690457 kubelet[2600]: E0213 19:30:21.690000 2600 reflector.go:158] "Unhandled Error" err="object-\"tigera-operator\"/\"kubernetes-services-endpoint\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kubernetes-services-endpoint\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Feb 13 19:30:21.691208 kubelet[2600]: W0213 19:30:21.690527 2600 reflector.go:561] object-"tigera-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'localhost' and this object Feb 13 19:30:21.691208 kubelet[2600]: E0213 19:30:21.690546 2600 reflector.go:158] "Unhandled Error" err="object-\"tigera-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Feb 13 19:30:21.694946 systemd[1]: Created slice kubepods-besteffort-podf41b2193_a4f2_4676_9b5e_0646d5c80ae0.slice - libcontainer container kubepods-besteffort-podf41b2193_a4f2_4676_9b5e_0646d5c80ae0.slice. 
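The MountVolume.SetUp failure just above is a bootstrap-ordering message rather than a persistent fault: the projected kube-api-access volume needs the kube-root-ca.crt ConfigMap, which kube-controller-manager publishes into each namespace shortly after the namespace exists, and the kubelet schedules the 500ms retry shown in the entry. A minimal way to confirm the ConfigMap has landed, sketched with the official Python kubernetes client under the assumption of a reachable kubeconfig (illustrative only, not something run during this boot):

    from kubernetes import client, config

    config.load_kube_config()  # assumption: local kubeconfig; use load_incluster_config() from inside a pod
    v1 = client.CoreV1Api()
    for ns in ("kube-system", "tigera-operator"):
        cm = v1.read_namespaced_config_map("kube-root-ca.crt", ns)  # raises ApiException while still absent
        print(ns, cm.metadata.creation_timestamp)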
Feb 13 19:30:21.792194 kubelet[2600]: I0213 19:30:21.792129 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f41b2193-a4f2-4676-9b5e-0646d5c80ae0-var-lib-calico\") pod \"tigera-operator-76c4976dd7-rfqr4\" (UID: \"f41b2193-a4f2-4676-9b5e-0646d5c80ae0\") " pod="tigera-operator/tigera-operator-76c4976dd7-rfqr4" Feb 13 19:30:21.792194 kubelet[2600]: I0213 19:30:21.792190 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lbzv\" (UniqueName: \"kubernetes.io/projected/f41b2193-a4f2-4676-9b5e-0646d5c80ae0-kube-api-access-6lbzv\") pod \"tigera-operator-76c4976dd7-rfqr4\" (UID: \"f41b2193-a4f2-4676-9b5e-0646d5c80ae0\") " pod="tigera-operator/tigera-operator-76c4976dd7-rfqr4" Feb 13 19:30:21.999541 kubelet[2600]: E0213 19:30:21.999444 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:30:22.000226 containerd[1502]: time="2025-02-13T19:30:22.000181986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8zt7x,Uid:d7e890fd-16f2-430d-9efd-1b271fa2d073,Namespace:kube-system,Attempt:0,}" Feb 13 19:30:22.023789 containerd[1502]: time="2025-02-13T19:30:22.023674110Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:30:22.024526 containerd[1502]: time="2025-02-13T19:30:22.024305419Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:30:22.024526 containerd[1502]: time="2025-02-13T19:30:22.024403396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:30:22.024642 containerd[1502]: time="2025-02-13T19:30:22.024568530Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:30:22.046451 systemd[1]: Started cri-containerd-1e1d640164bf8ca11f7599c160111b2e6167dc0f228f9187b17c247993da0e32.scope - libcontainer container 1e1d640164bf8ca11f7599c160111b2e6167dc0f228f9187b17c247993da0e32. 
Feb 13 19:30:22.068279 containerd[1502]: time="2025-02-13T19:30:22.068237345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8zt7x,Uid:d7e890fd-16f2-430d-9efd-1b271fa2d073,Namespace:kube-system,Attempt:0,} returns sandbox id \"1e1d640164bf8ca11f7599c160111b2e6167dc0f228f9187b17c247993da0e32\"" Feb 13 19:30:22.069058 kubelet[2600]: E0213 19:30:22.069028 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:30:22.071742 containerd[1502]: time="2025-02-13T19:30:22.071700775Z" level=info msg="CreateContainer within sandbox \"1e1d640164bf8ca11f7599c160111b2e6167dc0f228f9187b17c247993da0e32\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 19:30:22.093015 containerd[1502]: time="2025-02-13T19:30:22.092957490Z" level=info msg="CreateContainer within sandbox \"1e1d640164bf8ca11f7599c160111b2e6167dc0f228f9187b17c247993da0e32\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2c0bf2f53d19d3830514c2d4e5afcb1d615534abdc3343a9bdd74c53ac12e429\"" Feb 13 19:30:22.093528 containerd[1502]: time="2025-02-13T19:30:22.093480194Z" level=info msg="StartContainer for \"2c0bf2f53d19d3830514c2d4e5afcb1d615534abdc3343a9bdd74c53ac12e429\"" Feb 13 19:30:22.124437 systemd[1]: Started cri-containerd-2c0bf2f53d19d3830514c2d4e5afcb1d615534abdc3343a9bdd74c53ac12e429.scope - libcontainer container 2c0bf2f53d19d3830514c2d4e5afcb1d615534abdc3343a9bdd74c53ac12e429. Feb 13 19:30:22.154860 containerd[1502]: time="2025-02-13T19:30:22.154826284Z" level=info msg="StartContainer for \"2c0bf2f53d19d3830514c2d4e5afcb1d615534abdc3343a9bdd74c53ac12e429\" returns successfully" Feb 13 19:30:22.847262 kubelet[2600]: E0213 19:30:22.847214 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:30:22.848251 kubelet[2600]: E0213 19:30:22.848170 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:30:22.859346 kubelet[2600]: I0213 19:30:22.859247 2600 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8zt7x" podStartSLOduration=1.8592245630000002 podStartE2EDuration="1.859224563s" podCreationTimestamp="2025-02-13 19:30:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:30:22.859077514 +0000 UTC m=+7.137108830" watchObservedRunningTime="2025-02-13 19:30:22.859224563 +0000 UTC m=+7.137255879" Feb 13 19:30:22.899290 containerd[1502]: time="2025-02-13T19:30:22.899241761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-rfqr4,Uid:f41b2193-a4f2-4676-9b5e-0646d5c80ae0,Namespace:tigera-operator,Attempt:0,}" Feb 13 19:30:22.927426 containerd[1502]: time="2025-02-13T19:30:22.926716800Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:30:22.927426 containerd[1502]: time="2025-02-13T19:30:22.927400650Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:30:22.927426 containerd[1502]: time="2025-02-13T19:30:22.927417882Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:30:22.927595 containerd[1502]: time="2025-02-13T19:30:22.927541146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:30:22.949450 systemd[1]: Started cri-containerd-86e2858e78491ed21e605d25df3eac1a084d5b8d69003bd8dcd8765a3e2c3554.scope - libcontainer container 86e2858e78491ed21e605d25df3eac1a084d5b8d69003bd8dcd8765a3e2c3554. Feb 13 19:30:22.986912 containerd[1502]: time="2025-02-13T19:30:22.986849865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-rfqr4,Uid:f41b2193-a4f2-4676-9b5e-0646d5c80ae0,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"86e2858e78491ed21e605d25df3eac1a084d5b8d69003bd8dcd8765a3e2c3554\"" Feb 13 19:30:22.988366 containerd[1502]: time="2025-02-13T19:30:22.988254354Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Feb 13 19:30:23.850518 kubelet[2600]: E0213 19:30:23.850462 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:30:24.615257 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1449443205.mount: Deactivated successfully. Feb 13 19:30:24.901216 containerd[1502]: time="2025-02-13T19:30:24.901106352Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:30:24.901956 containerd[1502]: time="2025-02-13T19:30:24.901927110Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497" Feb 13 19:30:24.903372 containerd[1502]: time="2025-02-13T19:30:24.903344748Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:30:24.905374 containerd[1502]: time="2025-02-13T19:30:24.905337929Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:30:24.906440 containerd[1502]: time="2025-02-13T19:30:24.906403761Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 1.918116876s" Feb 13 19:30:24.906479 containerd[1502]: time="2025-02-13T19:30:24.906438757Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Feb 13 19:30:24.908261 containerd[1502]: time="2025-02-13T19:30:24.908236648Z" level=info msg="CreateContainer within sandbox \"86e2858e78491ed21e605d25df3eac1a084d5b8d69003bd8dcd8765a3e2c3554\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Feb 13 19:30:24.919161 containerd[1502]: time="2025-02-13T19:30:24.919112278Z" level=info msg="CreateContainer within sandbox 
\"86e2858e78491ed21e605d25df3eac1a084d5b8d69003bd8dcd8765a3e2c3554\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"2001bca3dbff7f7e6b8c56a86fe121af29ec3b43e71f3817919f434255d323ef\"" Feb 13 19:30:24.919679 containerd[1502]: time="2025-02-13T19:30:24.919622666Z" level=info msg="StartContainer for \"2001bca3dbff7f7e6b8c56a86fe121af29ec3b43e71f3817919f434255d323ef\"" Feb 13 19:30:24.948448 systemd[1]: Started cri-containerd-2001bca3dbff7f7e6b8c56a86fe121af29ec3b43e71f3817919f434255d323ef.scope - libcontainer container 2001bca3dbff7f7e6b8c56a86fe121af29ec3b43e71f3817919f434255d323ef. Feb 13 19:30:24.973792 containerd[1502]: time="2025-02-13T19:30:24.973741885Z" level=info msg="StartContainer for \"2001bca3dbff7f7e6b8c56a86fe121af29ec3b43e71f3817919f434255d323ef\" returns successfully" Feb 13 19:30:25.659864 update_engine[1488]: I20250213 19:30:25.659777 1488 update_attempter.cc:509] Updating boot flags... Feb 13 19:30:25.750345 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2990) Feb 13 19:30:25.792385 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2992) Feb 13 19:30:25.817382 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2992) Feb 13 19:30:27.746356 kubelet[2600]: I0213 19:30:27.746247 2600 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4976dd7-rfqr4" podStartSLOduration=4.827049852 podStartE2EDuration="6.746220958s" podCreationTimestamp="2025-02-13 19:30:21 +0000 UTC" firstStartedPulling="2025-02-13 19:30:22.987922453 +0000 UTC m=+7.265953769" lastFinishedPulling="2025-02-13 19:30:24.907093559 +0000 UTC m=+9.185124875" observedRunningTime="2025-02-13 19:30:25.86443803 +0000 UTC m=+10.142469346" watchObservedRunningTime="2025-02-13 19:30:27.746220958 +0000 UTC m=+12.024252274" Feb 13 19:30:27.762022 systemd[1]: Created slice kubepods-besteffort-pod8e5409b7_b2fc_493d_ab0b_50b67fa8c57d.slice - libcontainer container kubepods-besteffort-pod8e5409b7_b2fc_493d_ab0b_50b67fa8c57d.slice. Feb 13 19:30:27.806660 systemd[1]: Created slice kubepods-besteffort-podca39506d_913e_47fe_b92d_520a7e10fce9.slice - libcontainer container kubepods-besteffort-podca39506d_913e_47fe_b92d_520a7e10fce9.slice. 
Feb 13 19:30:27.826513 kubelet[2600]: I0213 19:30:27.826308 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cd8kf\" (UniqueName: \"kubernetes.io/projected/8e5409b7-b2fc-493d-ab0b-50b67fa8c57d-kube-api-access-cd8kf\") pod \"calico-typha-8458c4cdfd-b8lhh\" (UID: \"8e5409b7-b2fc-493d-ab0b-50b67fa8c57d\") " pod="calico-system/calico-typha-8458c4cdfd-b8lhh" Feb 13 19:30:27.826513 kubelet[2600]: I0213 19:30:27.826384 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e5409b7-b2fc-493d-ab0b-50b67fa8c57d-tigera-ca-bundle\") pod \"calico-typha-8458c4cdfd-b8lhh\" (UID: \"8e5409b7-b2fc-493d-ab0b-50b67fa8c57d\") " pod="calico-system/calico-typha-8458c4cdfd-b8lhh" Feb 13 19:30:27.826513 kubelet[2600]: I0213 19:30:27.826400 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/8e5409b7-b2fc-493d-ab0b-50b67fa8c57d-typha-certs\") pod \"calico-typha-8458c4cdfd-b8lhh\" (UID: \"8e5409b7-b2fc-493d-ab0b-50b67fa8c57d\") " pod="calico-system/calico-typha-8458c4cdfd-b8lhh" Feb 13 19:30:27.920276 kubelet[2600]: E0213 19:30:27.920078 2600 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cfdf6" podUID="61978848-76c7-4692-bab4-3c8c891d5468" Feb 13 19:30:27.926777 kubelet[2600]: I0213 19:30:27.926723 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ca39506d-913e-47fe-b92d-520a7e10fce9-policysync\") pod \"calico-node-twt5n\" (UID: \"ca39506d-913e-47fe-b92d-520a7e10fce9\") " pod="calico-system/calico-node-twt5n" Feb 13 19:30:27.926777 kubelet[2600]: I0213 19:30:27.926761 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ca39506d-913e-47fe-b92d-520a7e10fce9-cni-net-dir\") pod \"calico-node-twt5n\" (UID: \"ca39506d-913e-47fe-b92d-520a7e10fce9\") " pod="calico-system/calico-node-twt5n" Feb 13 19:30:27.926777 kubelet[2600]: I0213 19:30:27.926775 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ca39506d-913e-47fe-b92d-520a7e10fce9-lib-modules\") pod \"calico-node-twt5n\" (UID: \"ca39506d-913e-47fe-b92d-520a7e10fce9\") " pod="calico-system/calico-node-twt5n" Feb 13 19:30:27.926980 kubelet[2600]: I0213 19:30:27.926793 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ca39506d-913e-47fe-b92d-520a7e10fce9-node-certs\") pod \"calico-node-twt5n\" (UID: \"ca39506d-913e-47fe-b92d-520a7e10fce9\") " pod="calico-system/calico-node-twt5n" Feb 13 19:30:27.926980 kubelet[2600]: I0213 19:30:27.926807 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ca39506d-913e-47fe-b92d-520a7e10fce9-var-lib-calico\") pod \"calico-node-twt5n\" (UID: \"ca39506d-913e-47fe-b92d-520a7e10fce9\") " pod="calico-system/calico-node-twt5n" Feb 13 19:30:27.926980 
kubelet[2600]: I0213 19:30:27.926820 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ca39506d-913e-47fe-b92d-520a7e10fce9-cni-log-dir\") pod \"calico-node-twt5n\" (UID: \"ca39506d-913e-47fe-b92d-520a7e10fce9\") " pod="calico-system/calico-node-twt5n" Feb 13 19:30:27.926980 kubelet[2600]: I0213 19:30:27.926834 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ca39506d-913e-47fe-b92d-520a7e10fce9-flexvol-driver-host\") pod \"calico-node-twt5n\" (UID: \"ca39506d-913e-47fe-b92d-520a7e10fce9\") " pod="calico-system/calico-node-twt5n" Feb 13 19:30:27.926980 kubelet[2600]: I0213 19:30:27.926851 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hm2t\" (UniqueName: \"kubernetes.io/projected/ca39506d-913e-47fe-b92d-520a7e10fce9-kube-api-access-5hm2t\") pod \"calico-node-twt5n\" (UID: \"ca39506d-913e-47fe-b92d-520a7e10fce9\") " pod="calico-system/calico-node-twt5n" Feb 13 19:30:27.927157 kubelet[2600]: I0213 19:30:27.926878 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ca39506d-913e-47fe-b92d-520a7e10fce9-cni-bin-dir\") pod \"calico-node-twt5n\" (UID: \"ca39506d-913e-47fe-b92d-520a7e10fce9\") " pod="calico-system/calico-node-twt5n" Feb 13 19:30:27.927157 kubelet[2600]: I0213 19:30:27.926894 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca39506d-913e-47fe-b92d-520a7e10fce9-tigera-ca-bundle\") pod \"calico-node-twt5n\" (UID: \"ca39506d-913e-47fe-b92d-520a7e10fce9\") " pod="calico-system/calico-node-twt5n" Feb 13 19:30:27.927157 kubelet[2600]: I0213 19:30:27.926909 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ca39506d-913e-47fe-b92d-520a7e10fce9-xtables-lock\") pod \"calico-node-twt5n\" (UID: \"ca39506d-913e-47fe-b92d-520a7e10fce9\") " pod="calico-system/calico-node-twt5n" Feb 13 19:30:27.927157 kubelet[2600]: I0213 19:30:27.926935 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ca39506d-913e-47fe-b92d-520a7e10fce9-var-run-calico\") pod \"calico-node-twt5n\" (UID: \"ca39506d-913e-47fe-b92d-520a7e10fce9\") " pod="calico-system/calico-node-twt5n" Feb 13 19:30:28.027774 kubelet[2600]: I0213 19:30:28.027633 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/61978848-76c7-4692-bab4-3c8c891d5468-varrun\") pod \"csi-node-driver-cfdf6\" (UID: \"61978848-76c7-4692-bab4-3c8c891d5468\") " pod="calico-system/csi-node-driver-cfdf6" Feb 13 19:30:28.027774 kubelet[2600]: I0213 19:30:28.027703 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/61978848-76c7-4692-bab4-3c8c891d5468-registration-dir\") pod \"csi-node-driver-cfdf6\" (UID: \"61978848-76c7-4692-bab4-3c8c891d5468\") " pod="calico-system/csi-node-driver-cfdf6" Feb 13 19:30:28.027774 kubelet[2600]: I0213 19:30:28.027741 2600 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/61978848-76c7-4692-bab4-3c8c891d5468-kubelet-dir\") pod \"csi-node-driver-cfdf6\" (UID: \"61978848-76c7-4692-bab4-3c8c891d5468\") " pod="calico-system/csi-node-driver-cfdf6" Feb 13 19:30:28.027774 kubelet[2600]: I0213 19:30:28.027774 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/61978848-76c7-4692-bab4-3c8c891d5468-socket-dir\") pod \"csi-node-driver-cfdf6\" (UID: \"61978848-76c7-4692-bab4-3c8c891d5468\") " pod="calico-system/csi-node-driver-cfdf6" Feb 13 19:30:28.027991 kubelet[2600]: I0213 19:30:28.027892 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zxbj\" (UniqueName: \"kubernetes.io/projected/61978848-76c7-4692-bab4-3c8c891d5468-kube-api-access-5zxbj\") pod \"csi-node-driver-cfdf6\" (UID: \"61978848-76c7-4692-bab4-3c8c891d5468\") " pod="calico-system/csi-node-driver-cfdf6" Feb 13 19:30:28.029729 kubelet[2600]: E0213 19:30:28.029637 2600 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:30:28.029729 kubelet[2600]: W0213 19:30:28.029660 2600 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:30:28.029729 kubelet[2600]: E0213 19:30:28.029685 2600 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:30:28.030117 kubelet[2600]: E0213 19:30:28.030081 2600 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:30:28.030117 kubelet[2600]: W0213 19:30:28.030109 2600 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:30:28.030281 kubelet[2600]: E0213 19:30:28.030135 2600 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:30:28.030706 kubelet[2600]: E0213 19:30:28.030667 2600 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:30:28.030706 kubelet[2600]: W0213 19:30:28.030682 2600 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:30:28.030706 kubelet[2600]: E0213 19:30:28.030694 2600 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:30:28.031729 kubelet[2600]: E0213 19:30:28.031690 2600 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:30:28.031729 kubelet[2600]: W0213 19:30:28.031708 2600 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:30:28.031819 kubelet[2600]: E0213 19:30:28.031758 2600 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:30:28.033912 kubelet[2600]: E0213 19:30:28.033889 2600 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:30:28.034067 kubelet[2600]: W0213 19:30:28.034000 2600 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:30:28.034067 kubelet[2600]: E0213 19:30:28.034027 2600 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:30:28.040179 kubelet[2600]: E0213 19:30:28.039668 2600 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:30:28.040179 kubelet[2600]: W0213 19:30:28.039692 2600 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:30:28.040179 kubelet[2600]: E0213 19:30:28.039713 2600 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:30:28.066144 kubelet[2600]: E0213 19:30:28.066092 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:30:28.067350 containerd[1502]: time="2025-02-13T19:30:28.066793377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8458c4cdfd-b8lhh,Uid:8e5409b7-b2fc-493d-ab0b-50b67fa8c57d,Namespace:calico-system,Attempt:0,}" Feb 13 19:30:28.099216 containerd[1502]: time="2025-02-13T19:30:28.099090833Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:30:28.099541 containerd[1502]: time="2025-02-13T19:30:28.099340064Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:30:28.099541 containerd[1502]: time="2025-02-13T19:30:28.099368148Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:30:28.099846 containerd[1502]: time="2025-02-13T19:30:28.099726105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:30:28.109418 kubelet[2600]: E0213 19:30:28.109368 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:30:28.115016 containerd[1502]: time="2025-02-13T19:30:28.111114111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-twt5n,Uid:ca39506d-913e-47fe-b92d-520a7e10fce9,Namespace:calico-system,Attempt:0,}" Feb 13 19:30:28.123609 systemd[1]: Started cri-containerd-3997a36349072ebcc6d5c3806b507cc40c941fdf010f3158db04e7560bb9812b.scope - libcontainer container 3997a36349072ebcc6d5c3806b507cc40c941fdf010f3158db04e7560bb9812b. Feb 13 19:30:28.128846 kubelet[2600]: E0213 19:30:28.128785 2600 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:30:28.128846 kubelet[2600]: W0213 19:30:28.128831 2600 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:30:28.129055 kubelet[2600]: E0213 19:30:28.128852 2600 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:30:28.129153 kubelet[2600]: E0213 19:30:28.129136 2600 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:30:28.129225 kubelet[2600]: W0213 19:30:28.129157 2600 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:30:28.129225 kubelet[2600]: E0213 19:30:28.129181 2600 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:30:28.129580 kubelet[2600]: E0213 19:30:28.129561 2600 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:30:28.129674 kubelet[2600]: W0213 19:30:28.129575 2600 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:30:28.129674 kubelet[2600]: E0213 19:30:28.129616 2600 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:30:28.129977 kubelet[2600]: E0213 19:30:28.129951 2600 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:30:28.129977 kubelet[2600]: W0213 19:30:28.129961 2600 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:30:28.129977 kubelet[2600]: E0213 19:30:28.129972 2600 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:30:28.130472 kubelet[2600]: E0213 19:30:28.130457 2600 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:30:28.130472 kubelet[2600]: W0213 19:30:28.130473 2600 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:30:28.130730 kubelet[2600]: E0213 19:30:28.130498 2600 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:30:28.130840 kubelet[2600]: E0213 19:30:28.130822 2600 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:30:28.130840 kubelet[2600]: W0213 19:30:28.130836 2600 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:30:28.130932 kubelet[2600]: E0213 19:30:28.130849 2600 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:30:28.131110 kubelet[2600]: E0213 19:30:28.131097 2600 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:30:28.131110 kubelet[2600]: W0213 19:30:28.131107 2600 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:30:28.131210 kubelet[2600]: E0213 19:30:28.131195 2600 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:30:28.131599 kubelet[2600]: E0213 19:30:28.131577 2600 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:30:28.131599 kubelet[2600]: W0213 19:30:28.131588 2600 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:30:28.131714 kubelet[2600]: E0213 19:30:28.131693 2600 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:30:28.131911 kubelet[2600]: E0213 19:30:28.131897 2600 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:30:28.131911 kubelet[2600]: W0213 19:30:28.131906 2600 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:30:28.132032 kubelet[2600]: E0213 19:30:28.131982 2600 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:30:28.132194 kubelet[2600]: E0213 19:30:28.132180 2600 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:30:28.132252 kubelet[2600]: W0213 19:30:28.132190 2600 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:30:28.132847 kubelet[2600]: E0213 19:30:28.132820 2600 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:30:28.133165 kubelet[2600]: E0213 19:30:28.133130 2600 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:30:28.133165 kubelet[2600]: W0213 19:30:28.133149 2600 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:30:28.133261 kubelet[2600]: E0213 19:30:28.133217 2600 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:30:28.133545 kubelet[2600]: E0213 19:30:28.133498 2600 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:30:28.133545 kubelet[2600]: W0213 19:30:28.133539 2600 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:30:28.133630 kubelet[2600]: E0213 19:30:28.133605 2600 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:30:28.133860 kubelet[2600]: E0213 19:30:28.133833 2600 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:30:28.133860 kubelet[2600]: W0213 19:30:28.133853 2600 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:30:28.133950 kubelet[2600]: E0213 19:30:28.133924 2600 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:30:28.134136 kubelet[2600]: E0213 19:30:28.134119 2600 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:30:28.134136 kubelet[2600]: W0213 19:30:28.134130 2600 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:30:28.134209 kubelet[2600]: E0213 19:30:28.134175 2600 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:30:28.134442 kubelet[2600]: E0213 19:30:28.134424 2600 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:30:28.134442 kubelet[2600]: W0213 19:30:28.134436 2600 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:30:28.134534 kubelet[2600]: E0213 19:30:28.134471 2600 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:30:28.134694 kubelet[2600]: E0213 19:30:28.134677 2600 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:30:28.134749 kubelet[2600]: W0213 19:30:28.134712 2600 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:30:28.134785 kubelet[2600]: E0213 19:30:28.134749 2600 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:30:28.135078 kubelet[2600]: E0213 19:30:28.135029 2600 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:30:28.135078 kubelet[2600]: W0213 19:30:28.135061 2600 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:30:28.135162 kubelet[2600]: E0213 19:30:28.135098 2600 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:30:28.135394 kubelet[2600]: E0213 19:30:28.135360 2600 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:30:28.135394 kubelet[2600]: W0213 19:30:28.135391 2600 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:30:28.135495 kubelet[2600]: E0213 19:30:28.135431 2600 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:30:28.135715 kubelet[2600]: E0213 19:30:28.135690 2600 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:30:28.135715 kubelet[2600]: W0213 19:30:28.135707 2600 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:30:28.135789 kubelet[2600]: E0213 19:30:28.135743 2600 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:30:28.136057 kubelet[2600]: E0213 19:30:28.136029 2600 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:30:28.136057 kubelet[2600]: W0213 19:30:28.136044 2600 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:30:28.136157 kubelet[2600]: E0213 19:30:28.136086 2600 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:30:28.136350 kubelet[2600]: E0213 19:30:28.136331 2600 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:30:28.136350 kubelet[2600]: W0213 19:30:28.136345 2600 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:30:28.136432 kubelet[2600]: E0213 19:30:28.136395 2600 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:30:28.136700 kubelet[2600]: E0213 19:30:28.136658 2600 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:30:28.136700 kubelet[2600]: W0213 19:30:28.136670 2600 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:30:28.136842 kubelet[2600]: E0213 19:30:28.136706 2600 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:30:28.137060 kubelet[2600]: E0213 19:30:28.137040 2600 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:30:28.137060 kubelet[2600]: W0213 19:30:28.137053 2600 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:30:28.137163 kubelet[2600]: E0213 19:30:28.137103 2600 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:30:28.137743 kubelet[2600]: E0213 19:30:28.137724 2600 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:30:28.137868 kubelet[2600]: W0213 19:30:28.137736 2600 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:30:28.137937 kubelet[2600]: E0213 19:30:28.137873 2600 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:30:28.138298 kubelet[2600]: E0213 19:30:28.138279 2600 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:30:28.138298 kubelet[2600]: W0213 19:30:28.138291 2600 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:30:28.138298 kubelet[2600]: E0213 19:30:28.138301 2600 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:30:28.143732 kubelet[2600]: E0213 19:30:28.143709 2600 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:30:28.143732 kubelet[2600]: W0213 19:30:28.143721 2600 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:30:28.143732 kubelet[2600]: E0213 19:30:28.143730 2600 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:30:28.166021 containerd[1502]: time="2025-02-13T19:30:28.165976321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8458c4cdfd-b8lhh,Uid:8e5409b7-b2fc-493d-ab0b-50b67fa8c57d,Namespace:calico-system,Attempt:0,} returns sandbox id \"3997a36349072ebcc6d5c3806b507cc40c941fdf010f3158db04e7560bb9812b\"" Feb 13 19:30:28.166700 kubelet[2600]: E0213 19:30:28.166673 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:30:28.168169 containerd[1502]: time="2025-02-13T19:30:28.168140014Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Feb 13 19:30:28.602465 containerd[1502]: time="2025-02-13T19:30:28.602345112Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:30:28.602465 containerd[1502]: time="2025-02-13T19:30:28.602444490Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:30:28.602738 containerd[1502]: time="2025-02-13T19:30:28.602467374Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:30:28.602738 containerd[1502]: time="2025-02-13T19:30:28.602566310Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:30:28.626597 systemd[1]: Started cri-containerd-059b614fd2f892cff198cd86b990eba2edb5278b2c86b8b9dd583b1b74bf0292.scope - libcontainer container 059b614fd2f892cff198cd86b990eba2edb5278b2c86b8b9dd583b1b74bf0292. 
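The repeated driver-call.go / plugins.go messages through this stretch all report one condition: while probing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, the kubelet finds Calico's nodeagent~uds FlexVolume directory, invokes the driver with init, and gets no JSON back because the uds executable is not there yet, so that plugin directory is skipped (and, as the repetition shows, re-probed). A rough sketch of that probe, assuming only the path quoted in the log (illustrative; the real logic is kubelet's driver-call.go, not this script):

    import json
    import subprocess

    # Path exactly as quoted in the kubelet messages above.
    DRIVER = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

    try:
        out = subprocess.run([DRIVER, "init"], capture_output=True, text=True)
        status = json.loads(out.stdout)  # empty stdout is what the kubelet's Go code reports as "unexpected end of JSON input"
    except (FileNotFoundError, json.JSONDecodeError) as err:
        status = {"status": "Failure", "message": str(err)}
    print(status)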
Feb 13 19:30:28.651569 containerd[1502]: time="2025-02-13T19:30:28.651518178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-twt5n,Uid:ca39506d-913e-47fe-b92d-520a7e10fce9,Namespace:calico-system,Attempt:0,} returns sandbox id \"059b614fd2f892cff198cd86b990eba2edb5278b2c86b8b9dd583b1b74bf0292\""
Feb 13 19:30:28.652433 kubelet[2600]: E0213 19:30:28.652386 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:30:28.976826 kubelet[2600]: E0213 19:30:28.976786 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:30:28.991782 kubelet[2600]: E0213 19:30:28.991745 2600 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:30:28.991782 kubelet[2600]: W0213 19:30:28.991767 2600 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:30:28.991782 kubelet[2600]: E0213 19:30:28.991788 2600 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[... the same three FlexVolume records repeat, with only timestamps changing, through Feb 13 19:30:28.996 ...]
Feb 13 19:30:29.818072 kubelet[2600]: E0213 19:30:29.817994 2600 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cfdf6" podUID="61978848-76c7-4692-bab4-3c8c891d5468"
Feb 13 19:30:30.654248 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1668672961.mount: Deactivated successfully.
Feb 13 19:30:30.682090 kubelet[2600]: E0213 19:30:30.682052 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:30:30.707048 kubelet[2600]: E0213 19:30:30.707028 2600 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:30:30.707048 kubelet[2600]: W0213 19:30:30.707044 2600 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:30:30.707189 kubelet[2600]: E0213 19:30:30.707070 2600 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[... the same three FlexVolume records repeat, with only timestamps changing, through Feb 13 19:30:30.710 ...]
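The csi-node-driver pod keeps failing to sync because the runtime still reports NetworkReady=false: no CNI network configuration is installed yet, and it typically appears only once the calico-node pod (started in the sandbox above) has written its conflist. A small diagnostic sketch that checks the conventional CNI config directory; the path is an assumption (the common default), not read from this node's containerd configuration:

// Quick diagnostic sketch: report whether any CNI network config exists in
// the conventional directory the container runtime watches. The path is an
// assumption for illustration.
package main

import (
    "fmt"
    "os"
    "path/filepath"
)

func main() {
    dir := "/etc/cni/net.d"
    entries, err := os.ReadDir(dir)
    if err != nil {
        fmt.Printf("cannot read %s: %v\n", dir, err)
        return
    }
    found := false
    for _, e := range entries {
        switch filepath.Ext(e.Name()) {
        case ".conf", ".conflist", ".json":
            fmt.Printf("CNI config present: %s\n", filepath.Join(dir, e.Name()))
            found = true
        }
    }
    if !found {
        fmt.Println("no CNI config found; runtime will keep reporting NetworkReady=false")
    }
}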
Feb 13 19:30:31.712159 containerd[1502]: time="2025-02-13T19:30:31.712095315Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:30:31.712934 containerd[1502]: time="2025-02-13T19:30:31.712889655Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363"
Feb 13 19:30:31.714011 containerd[1502]: time="2025-02-13T19:30:31.713975687Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:30:31.716185 containerd[1502]: time="2025-02-13T19:30:31.716150746Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:30:31.716841 containerd[1502]: time="2025-02-13T19:30:31.716813938Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 3.548635101s"
Feb 13 19:30:31.716876 containerd[1502]: time="2025-02-13T19:30:31.716840920Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\""
Feb 13 19:30:31.717740 containerd[1502]: time="2025-02-13T19:30:31.717710693Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Feb 13 19:30:31.724626 containerd[1502]: time="2025-02-13T19:30:31.724578927Z" level=info msg="CreateContainer within sandbox \"3997a36349072ebcc6d5c3806b507cc40c941fdf010f3158db04e7560bb9812b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Feb 13 19:30:31.740697 containerd[1502]: time="2025-02-13T19:30:31.740657381Z" level=info msg="CreateContainer within sandbox \"3997a36349072ebcc6d5c3806b507cc40c941fdf010f3158db04e7560bb9812b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ebbdaf34e0d919eca339b442537b8b7e9e4fb43e92b73e29c00519890b25b90a\""
Feb 13 19:30:31.741285 containerd[1502]: time="2025-02-13T19:30:31.741108262Z" level=info msg="StartContainer for \"ebbdaf34e0d919eca339b442537b8b7e9e4fb43e92b73e29c00519890b25b90a\""
Feb 13 19:30:31.817797 kubelet[2600]: E0213 19:30:31.817442 2600 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cfdf6" podUID="61978848-76c7-4692-bab4-3c8c891d5468"
Feb 13 19:30:31.829582 systemd[1]: Started cri-containerd-ebbdaf34e0d919eca339b442537b8b7e9e4fb43e92b73e29c00519890b25b90a.scope - libcontainer container ebbdaf34e0d919eca339b442537b8b7e9e4fb43e92b73e29c00519890b25b90a.
Feb 13 19:30:31.912476 containerd[1502]: time="2025-02-13T19:30:31.912420575Z" level=info msg="StartContainer for \"ebbdaf34e0d919eca339b442537b8b7e9e4fb43e92b73e29c00519890b25b90a\" returns successfully"
Feb 13 19:30:32.918175 kubelet[2600]: E0213 19:30:32.918130 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:30:32.923331 kubelet[2600]: E0213 19:30:32.923279 2600 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:30:32.923331 kubelet[2600]: W0213 19:30:32.923303 2600 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:30:32.923331 kubelet[2600]: E0213 19:30:32.923337 2600 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[... the same three FlexVolume records repeat, with only timestamps changing, through Feb 13 19:30:32.926 ...]
Feb 13 19:30:32.928617 kubelet[2600]: I0213 19:30:32.928466 2600 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-8458c4cdfd-b8lhh" podStartSLOduration=2.378396891 podStartE2EDuration="5.928454723s" podCreationTimestamp="2025-02-13 19:30:27 +0000 UTC" firstStartedPulling="2025-02-13 19:30:28.167512276 +0000 UTC m=+12.445543592" lastFinishedPulling="2025-02-13 19:30:31.717570108 +0000 UTC m=+15.995601424" observedRunningTime="2025-02-13 19:30:32.928232874 +0000 UTC m=+17.206264190" watchObservedRunningTime="2025-02-13 19:30:32.928454723 +0000 UTC m=+17.206486059"
Feb 13 19:30:32.962883 kubelet[2600]: E0213 19:30:32.962819 2600 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:30:32.962883 kubelet[2600]: W0213 19:30:32.962851 2600 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:30:32.962883 kubelet[2600]: E0213 19:30:32.962874 2600 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[... the same three FlexVolume records repeat, with only timestamps changing, through Feb 13 19:30:32.967 ...]
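The pod_startup_latency_tracker record can be cross-checked from its own timestamps: the E2E duration is the watch-observed running time minus the pod creation time, and the SLO duration additionally subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling). A small check in Go using the values from that record:

// Reproduce the two durations reported for calico-typha-8458c4cdfd-b8lhh
// from the timestamps in the same log record.
package main

import (
    "fmt"
    "time"
)

func mustParse(v string) time.Time {
    t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", v)
    if err != nil {
        panic(err)
    }
    return t
}

func main() {
    created := mustParse("2025-02-13 19:30:27 +0000 UTC")
    firstPull := mustParse("2025-02-13 19:30:28.167512276 +0000 UTC")
    lastPull := mustParse("2025-02-13 19:30:31.717570108 +0000 UTC")
    running := mustParse("2025-02-13 19:30:32.928454723 +0000 UTC") // watchObservedRunningTime

    e2e := running.Sub(created)          // podStartE2EDuration
    slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration, pull time excluded
    fmt.Println(e2e, slo)                // prints: 5.928454723s 2.378396891s
}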
Feb 13 19:30:33.817665 kubelet[2600]: E0213 19:30:33.817592 2600 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cfdf6" podUID="61978848-76c7-4692-bab4-3c8c891d5468"
Feb 13 19:30:33.919949 kubelet[2600]: I0213 19:30:33.919910 2600 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 13 19:30:33.920559 kubelet[2600]: E0213 19:30:33.920274 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:30:33.935799 kubelet[2600]: E0213 19:30:33.935757 2600 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:30:33.935799 kubelet[2600]: W0213 19:30:33.935798 2600 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:30:33.936002 kubelet[2600]: E0213 19:30:33.935823 2600 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[... the same three FlexVolume records repeat, with only timestamps changing, through Feb 13 19:30:33.938 ...]
Error: unexpected end of JSON input" Feb 13 19:30:33.936704 kubelet[2600]: E0213 19:30:33.936680 2600 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:30:33.936704 kubelet[2600]: W0213 19:30:33.936691 2600 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:30:33.936704 kubelet[2600]: E0213 19:30:33.936699 2600 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:30:33.936926 kubelet[2600]: E0213 19:30:33.936909 2600 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:30:33.936926 kubelet[2600]: W0213 19:30:33.936919 2600 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:30:33.936926 kubelet[2600]: E0213 19:30:33.936927 2600 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:30:33.937129 kubelet[2600]: E0213 19:30:33.937113 2600 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:30:33.937129 kubelet[2600]: W0213 19:30:33.937122 2600 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:30:33.937199 kubelet[2600]: E0213 19:30:33.937132 2600 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:30:33.937359 kubelet[2600]: E0213 19:30:33.937341 2600 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:30:33.937359 kubelet[2600]: W0213 19:30:33.937355 2600 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:30:33.937456 kubelet[2600]: E0213 19:30:33.937368 2600 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:30:33.937606 kubelet[2600]: E0213 19:30:33.937590 2600 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:30:33.937606 kubelet[2600]: W0213 19:30:33.937599 2600 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:30:33.937606 kubelet[2600]: E0213 19:30:33.937608 2600 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:30:33.937826 kubelet[2600]: E0213 19:30:33.937809 2600 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:30:33.937826 kubelet[2600]: W0213 19:30:33.937820 2600 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:30:33.937826 kubelet[2600]: E0213 19:30:33.937828 2600 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:30:33.938033 kubelet[2600]: E0213 19:30:33.938017 2600 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:30:33.938033 kubelet[2600]: W0213 19:30:33.938027 2600 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:30:33.938103 kubelet[2600]: E0213 19:30:33.938034 2600 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:30:33.938238 kubelet[2600]: E0213 19:30:33.938222 2600 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:30:33.938238 kubelet[2600]: W0213 19:30:33.938233 2600 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:30:33.938238 kubelet[2600]: E0213 19:30:33.938241 2600 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:30:33.938483 kubelet[2600]: E0213 19:30:33.938466 2600 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:30:33.938483 kubelet[2600]: W0213 19:30:33.938479 2600 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:30:33.938555 kubelet[2600]: E0213 19:30:33.938487 2600 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:30:33.938727 kubelet[2600]: E0213 19:30:33.938707 2600 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:30:33.938727 kubelet[2600]: W0213 19:30:33.938719 2600 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:30:33.938727 kubelet[2600]: E0213 19:30:33.938729 2600 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:30:33.938952 kubelet[2600]: E0213 19:30:33.938936 2600 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:30:33.938952 kubelet[2600]: W0213 19:30:33.938946 2600 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:30:33.938952 kubelet[2600]: E0213 19:30:33.938953 2600 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:30:33.939173 kubelet[2600]: E0213 19:30:33.939158 2600 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:30:33.939173 kubelet[2600]: W0213 19:30:33.939167 2600 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:30:33.939173 kubelet[2600]: E0213 19:30:33.939175 2600 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:30:33.949105 containerd[1502]: time="2025-02-13T19:30:33.949058363Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:30:33.949900 containerd[1502]: time="2025-02-13T19:30:33.949849235Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Feb 13 19:30:33.950962 containerd[1502]: time="2025-02-13T19:30:33.950906741Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:30:33.953010 containerd[1502]: time="2025-02-13T19:30:33.952971999Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:30:33.953579 containerd[1502]: time="2025-02-13T19:30:33.953546233Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 2.235812446s" Feb 13 19:30:33.953626 containerd[1502]: time="2025-02-13T19:30:33.953574596Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Feb 13 19:30:33.955698 containerd[1502]: time="2025-02-13T19:30:33.955669491Z" level=info msg="CreateContainer within sandbox \"059b614fd2f892cff198cd86b990eba2edb5278b2c86b8b9dd583b1b74bf0292\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 13 19:30:33.969491 kubelet[2600]: E0213 19:30:33.969450 2600 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of 
JSON input Feb 13 19:30:33.969491 kubelet[2600]: W0213 19:30:33.969471 2600 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:30:33.969669 kubelet[2600]: E0213 19:30:33.969495 2600 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:30:33.969797 kubelet[2600]: E0213 19:30:33.969782 2600 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:30:33.969797 kubelet[2600]: W0213 19:30:33.969793 2600 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:30:33.970181 kubelet[2600]: E0213 19:30:33.969819 2600 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:30:33.970181 kubelet[2600]: E0213 19:30:33.970044 2600 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:30:33.970181 kubelet[2600]: W0213 19:30:33.970053 2600 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:30:33.970181 kubelet[2600]: E0213 19:30:33.970064 2600 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:30:33.970340 containerd[1502]: time="2025-02-13T19:30:33.969792306Z" level=info msg="CreateContainer within sandbox \"059b614fd2f892cff198cd86b990eba2edb5278b2c86b8b9dd583b1b74bf0292\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ef2465f885e4ed0d9d21fa965771eecabaa998aa75e6e5d05af806a1f07937e5\"" Feb 13 19:30:33.970340 containerd[1502]: time="2025-02-13T19:30:33.970227327Z" level=info msg="StartContainer for \"ef2465f885e4ed0d9d21fa965771eecabaa998aa75e6e5d05af806a1f07937e5\"" Feb 13 19:30:33.970475 kubelet[2600]: E0213 19:30:33.970306 2600 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:30:33.970475 kubelet[2600]: W0213 19:30:33.970332 2600 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:30:33.970475 kubelet[2600]: E0213 19:30:33.970344 2600 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:30:33.970641 kubelet[2600]: E0213 19:30:33.970623 2600 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:30:33.970686 kubelet[2600]: W0213 19:30:33.970640 2600 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:30:33.970686 kubelet[2600]: E0213 19:30:33.970664 2600 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:30:33.970879 kubelet[2600]: E0213 19:30:33.970862 2600 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:30:33.970879 kubelet[2600]: W0213 19:30:33.970874 2600 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:30:33.970997 kubelet[2600]: E0213 19:30:33.970977 2600 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:30:33.971167 kubelet[2600]: E0213 19:30:33.971154 2600 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:30:33.971247 kubelet[2600]: W0213 19:30:33.971231 2600 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:30:33.971383 kubelet[2600]: E0213 19:30:33.971302 2600 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:30:33.971623 kubelet[2600]: E0213 19:30:33.971519 2600 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:30:33.971623 kubelet[2600]: W0213 19:30:33.971533 2600 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:30:33.971764 kubelet[2600]: E0213 19:30:33.971669 2600 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:30:33.973370 kubelet[2600]: E0213 19:30:33.971835 2600 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:30:33.973370 kubelet[2600]: W0213 19:30:33.971871 2600 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:30:33.973370 kubelet[2600]: E0213 19:30:33.971884 2600 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:30:33.973370 kubelet[2600]: E0213 19:30:33.972301 2600 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:30:33.973370 kubelet[2600]: W0213 19:30:33.972325 2600 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:30:33.973370 kubelet[2600]: E0213 19:30:33.972337 2600 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:30:33.973370 kubelet[2600]: E0213 19:30:33.972656 2600 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:30:33.973370 kubelet[2600]: W0213 19:30:33.972666 2600 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:30:33.973370 kubelet[2600]: E0213 19:30:33.972678 2600 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:30:33.973370 kubelet[2600]: E0213 19:30:33.972938 2600 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:30:33.974350 kubelet[2600]: W0213 19:30:33.972972 2600 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:30:33.974350 kubelet[2600]: E0213 19:30:33.972984 2600 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:30:33.974350 kubelet[2600]: E0213 19:30:33.973230 2600 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:30:33.974350 kubelet[2600]: W0213 19:30:33.973239 2600 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:30:33.974350 kubelet[2600]: E0213 19:30:33.973249 2600 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:30:33.974350 kubelet[2600]: E0213 19:30:33.973827 2600 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:30:33.974350 kubelet[2600]: W0213 19:30:33.973860 2600 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:30:33.974350 kubelet[2600]: E0213 19:30:33.973874 2600 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:30:33.974350 kubelet[2600]: E0213 19:30:33.974129 2600 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:30:33.974350 kubelet[2600]: W0213 19:30:33.974138 2600 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:30:33.974583 kubelet[2600]: E0213 19:30:33.974148 2600 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:30:33.974583 kubelet[2600]: E0213 19:30:33.974388 2600 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:30:33.974583 kubelet[2600]: W0213 19:30:33.974436 2600 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:30:33.974583 kubelet[2600]: E0213 19:30:33.974447 2600 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:30:33.974938 kubelet[2600]: E0213 19:30:33.974714 2600 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:30:33.974938 kubelet[2600]: W0213 19:30:33.974747 2600 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:30:33.974938 kubelet[2600]: E0213 19:30:33.974765 2600 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:30:33.975383 kubelet[2600]: E0213 19:30:33.975367 2600 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:30:33.975448 kubelet[2600]: W0213 19:30:33.975383 2600 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:30:33.975448 kubelet[2600]: E0213 19:30:33.975394 2600 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:30:34.007526 systemd[1]: Started cri-containerd-ef2465f885e4ed0d9d21fa965771eecabaa998aa75e6e5d05af806a1f07937e5.scope - libcontainer container ef2465f885e4ed0d9d21fa965771eecabaa998aa75e6e5d05af806a1f07937e5. Feb 13 19:30:34.040417 containerd[1502]: time="2025-02-13T19:30:34.040263986Z" level=info msg="StartContainer for \"ef2465f885e4ed0d9d21fa965771eecabaa998aa75e6e5d05af806a1f07937e5\" returns successfully" Feb 13 19:30:34.054161 systemd[1]: cri-containerd-ef2465f885e4ed0d9d21fa965771eecabaa998aa75e6e5d05af806a1f07937e5.scope: Deactivated successfully. Feb 13 19:30:34.078227 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ef2465f885e4ed0d9d21fa965771eecabaa998aa75e6e5d05af806a1f07937e5-rootfs.mount: Deactivated successfully. 
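[editor's note] The wall of repeated driver-call.go:262 / driver-call.go:149 / plugins.go:691 messages above comes from kubelet's FlexVolume prober: on each probe cycle it executes every binary under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/<vendor~driver>/ with the single argument `init` and expects a JSON status object on stdout. Here the nodeagent~uds directory exists but its `uds` executable does not, so the call produces no output and decoding the empty string fails with "unexpected end of JSON input"; the messages are noisy but do not block the Calico flexvol-driver container that starts above. The Go sketch below is illustrative only (not kubelet's actual code); the driverStatus fields and the probeDriver helper are assumptions made to reproduce the same failure mode seen in the log.

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus mirrors the kind of JSON a FlexVolume driver is expected to
// print for "init", e.g. {"status":"Success","capabilities":{"attach":false}}.
// Field names here are illustrative, not kubelet's exact types.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

// probeDriver runs "<driver> init" and decodes its stdout, the same shape of
// call the kubelet log lines above are reporting on.
func probeDriver(path string) (*driverStatus, error) {
	out, err := exec.Command(path, "init").CombinedOutput()
	if err != nil {
		// Comparable to the W driver-call.go:149 line: the executable is missing.
		fmt.Printf("FlexVolume: driver call failed: executable: %s, args: [init], error: %v, output: %q\n",
			path, err, string(out))
	}
	var st driverStatus
	// With empty output json.Unmarshal returns "unexpected end of JSON input",
	// which is what kubelet then logs from driver-call.go:262.
	if jerr := json.Unmarshal(out, &st); jerr != nil {
		return nil, fmt.Errorf("failed to unmarshal output for command: init, output: %q, error: %w", string(out), jerr)
	}
	return &st, nil
}

func main() {
	if _, err := probeDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"); err != nil {
		fmt.Println(err)
	}
}
```

Installing a driver binary that answers `init` with a success JSON object, or removing the stale nodeagent~uds directory, should silence these probe errors; they recur here only because the prober re-scans the plugin directory on every cycle.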
Feb 13 19:30:34.113804 containerd[1502]: time="2025-02-13T19:30:34.113723068Z" level=info msg="shim disconnected" id=ef2465f885e4ed0d9d21fa965771eecabaa998aa75e6e5d05af806a1f07937e5 namespace=k8s.io Feb 13 19:30:34.113804 containerd[1502]: time="2025-02-13T19:30:34.113794914Z" level=warning msg="cleaning up after shim disconnected" id=ef2465f885e4ed0d9d21fa965771eecabaa998aa75e6e5d05af806a1f07937e5 namespace=k8s.io Feb 13 19:30:34.113804 containerd[1502]: time="2025-02-13T19:30:34.113807067Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:30:34.924010 kubelet[2600]: E0213 19:30:34.923977 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:30:34.924834 containerd[1502]: time="2025-02-13T19:30:34.924780728Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Feb 13 19:30:35.818185 kubelet[2600]: E0213 19:30:35.818105 2600 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cfdf6" podUID="61978848-76c7-4692-bab4-3c8c891d5468" Feb 13 19:30:37.817709 kubelet[2600]: E0213 19:30:37.817652 2600 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cfdf6" podUID="61978848-76c7-4692-bab4-3c8c891d5468" Feb 13 19:30:40.145297 kubelet[2600]: E0213 19:30:40.145213 2600 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cfdf6" podUID="61978848-76c7-4692-bab4-3c8c891d5468" Feb 13 19:30:42.135187 kubelet[2600]: E0213 19:30:42.135123 2600 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cfdf6" podUID="61978848-76c7-4692-bab4-3c8c891d5468" Feb 13 19:30:42.725510 containerd[1502]: time="2025-02-13T19:30:42.725444320Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:30:42.811866 containerd[1502]: time="2025-02-13T19:30:42.811795522Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Feb 13 19:30:42.886499 containerd[1502]: time="2025-02-13T19:30:42.886443807Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:30:42.905710 containerd[1502]: time="2025-02-13T19:30:42.905570265Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:30:42.906461 containerd[1502]: time="2025-02-13T19:30:42.906286343Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" 
with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 7.981459148s" Feb 13 19:30:42.906461 containerd[1502]: time="2025-02-13T19:30:42.906357636Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Feb 13 19:30:42.908453 containerd[1502]: time="2025-02-13T19:30:42.908426640Z" level=info msg="CreateContainer within sandbox \"059b614fd2f892cff198cd86b990eba2edb5278b2c86b8b9dd583b1b74bf0292\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 19:30:43.533885 containerd[1502]: time="2025-02-13T19:30:43.533812357Z" level=info msg="CreateContainer within sandbox \"059b614fd2f892cff198cd86b990eba2edb5278b2c86b8b9dd583b1b74bf0292\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d8f2996c31e23a1e061256a37af1fa4d365901c61b53b2b275de586f4b7a88e4\"" Feb 13 19:30:43.534516 containerd[1502]: time="2025-02-13T19:30:43.534457291Z" level=info msg="StartContainer for \"d8f2996c31e23a1e061256a37af1fa4d365901c61b53b2b275de586f4b7a88e4\"" Feb 13 19:30:43.570582 systemd[1]: Started cri-containerd-d8f2996c31e23a1e061256a37af1fa4d365901c61b53b2b275de586f4b7a88e4.scope - libcontainer container d8f2996c31e23a1e061256a37af1fa4d365901c61b53b2b275de586f4b7a88e4. Feb 13 19:30:43.668269 containerd[1502]: time="2025-02-13T19:30:43.668193109Z" level=info msg="StartContainer for \"d8f2996c31e23a1e061256a37af1fa4d365901c61b53b2b275de586f4b7a88e4\" returns successfully" Feb 13 19:30:43.818334 kubelet[2600]: E0213 19:30:43.818114 2600 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cfdf6" podUID="61978848-76c7-4692-bab4-3c8c891d5468" Feb 13 19:30:43.946340 kubelet[2600]: E0213 19:30:43.946286 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:30:44.947739 kubelet[2600]: E0213 19:30:44.947702 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:30:45.817852 kubelet[2600]: E0213 19:30:45.817785 2600 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cfdf6" podUID="61978848-76c7-4692-bab4-3c8c891d5468" Feb 13 19:30:45.853957 systemd[1]: cri-containerd-d8f2996c31e23a1e061256a37af1fa4d365901c61b53b2b275de586f4b7a88e4.scope: Deactivated successfully. Feb 13 19:30:45.875417 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d8f2996c31e23a1e061256a37af1fa4d365901c61b53b2b275de586f4b7a88e4-rootfs.mount: Deactivated successfully. Feb 13 19:30:45.907556 systemd[1]: Started sshd@9-10.0.0.116:22-10.0.0.1:60208.service - OpenSSH per-connection server daemon (10.0.0.1:60208). 
Feb 13 19:30:45.964333 kubelet[2600]: I0213 19:30:45.931598 2600 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Feb 13 19:30:45.967565 sshd[3430]: Accepted publickey for core from 10.0.0.1 port 60208 ssh2: RSA SHA256:ENn9hOvI2hLUXcV6iHA8gc9Z4CTEPvGDMkoVtxIuYbg Feb 13 19:30:45.969447 sshd-session[3430]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:30:46.177511 systemd-logind[1485]: New session 10 of user core. Feb 13 19:30:46.188444 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 19:30:46.353893 kubelet[2600]: I0213 19:30:46.353836 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/402f0fa8-6d39-4a67-b618-1d216e220aea-config-volume\") pod \"coredns-6f6b679f8f-bxg8d\" (UID: \"402f0fa8-6d39-4a67-b618-1d216e220aea\") " pod="kube-system/coredns-6f6b679f8f-bxg8d" Feb 13 19:30:46.353893 kubelet[2600]: I0213 19:30:46.353888 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf4bc\" (UniqueName: \"kubernetes.io/projected/402f0fa8-6d39-4a67-b618-1d216e220aea-kube-api-access-wf4bc\") pod \"coredns-6f6b679f8f-bxg8d\" (UID: \"402f0fa8-6d39-4a67-b618-1d216e220aea\") " pod="kube-system/coredns-6f6b679f8f-bxg8d" Feb 13 19:30:46.453197 systemd[1]: Created slice kubepods-burstable-pod402f0fa8_6d39_4a67_b618_1d216e220aea.slice - libcontainer container kubepods-burstable-pod402f0fa8_6d39_4a67_b618_1d216e220aea.slice. Feb 13 19:30:46.467114 systemd[1]: Created slice kubepods-besteffort-pod9eba37d3_14ec_4521_9302_789cbdb496aa.slice - libcontainer container kubepods-besteffort-pod9eba37d3_14ec_4521_9302_789cbdb496aa.slice. Feb 13 19:30:46.472014 systemd[1]: Created slice kubepods-burstable-podf2d2c41c_272d_4c74_897d_79c94986b647.slice - libcontainer container kubepods-burstable-podf2d2c41c_272d_4c74_897d_79c94986b647.slice. Feb 13 19:30:46.480682 systemd[1]: Created slice kubepods-besteffort-pod9bf3d155_222d_46fc_9867_b55f1df961f7.slice - libcontainer container kubepods-besteffort-pod9bf3d155_222d_46fc_9867_b55f1df961f7.slice. Feb 13 19:30:46.489920 systemd[1]: Created slice kubepods-besteffort-podd38faaa4_494c_4c4e_87ff_0a00aa82bf88.slice - libcontainer container kubepods-besteffort-podd38faaa4_494c_4c4e_87ff_0a00aa82bf88.slice. Feb 13 19:30:46.526150 sshd[3432]: Connection closed by 10.0.0.1 port 60208 Feb 13 19:30:46.526490 sshd-session[3430]: pam_unix(sshd:session): session closed for user core Feb 13 19:30:46.531037 systemd[1]: sshd@9-10.0.0.116:22-10.0.0.1:60208.service: Deactivated successfully. Feb 13 19:30:46.533023 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 19:30:46.533685 systemd-logind[1485]: Session 10 logged out. Waiting for processes to exit. Feb 13 19:30:46.534596 systemd-logind[1485]: Removed session 10. 
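[editor's note] The kubepods-*-pod*.slice units systemd creates above follow a simple naming pattern: the pod's QoS class plus its UID with dashes replaced by underscores (systemd escaping), so UID 402f0fa8-6d39-4a67-b618-1d216e220aea becomes kubepods-burstable-pod402f0fa8_6d39_4a67_b618_1d216e220aea.slice. The helper below is an assumption-level sketch that merely reconstructs the leaf slice names visible in this log; it is not kubelet's cgroup manager.

```go
package main

import (
	"fmt"
	"strings"
)

// sliceNameForPod rebuilds the systemd slice name pattern seen in the log:
// kubepods-<qos>-pod<uid with dashes replaced by underscores>.slice.
func sliceNameForPod(qosClass, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	fmt.Println(sliceNameForPod("burstable", "402f0fa8-6d39-4a67-b618-1d216e220aea"))
	// kubepods-burstable-pod402f0fa8_6d39_4a67_b618_1d216e220aea.slice
	fmt.Println(sliceNameForPod("besteffort", "9eba37d3-14ec-4521-9302-789cbdb496aa"))
	// kubepods-besteffort-pod9eba37d3_14ec_4521_9302_789cbdb496aa.slice
}
```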
Feb 13 19:30:46.555693 kubelet[2600]: I0213 19:30:46.555634 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9eba37d3-14ec-4521-9302-789cbdb496aa-calico-apiserver-certs\") pod \"calico-apiserver-6794f4445b-q9q8b\" (UID: \"9eba37d3-14ec-4521-9302-789cbdb496aa\") " pod="calico-apiserver/calico-apiserver-6794f4445b-q9q8b" Feb 13 19:30:46.555693 kubelet[2600]: I0213 19:30:46.555670 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m95dw\" (UniqueName: \"kubernetes.io/projected/f2d2c41c-272d-4c74-897d-79c94986b647-kube-api-access-m95dw\") pod \"coredns-6f6b679f8f-gvd66\" (UID: \"f2d2c41c-272d-4c74-897d-79c94986b647\") " pod="kube-system/coredns-6f6b679f8f-gvd66" Feb 13 19:30:46.555693 kubelet[2600]: I0213 19:30:46.555686 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6h62\" (UniqueName: \"kubernetes.io/projected/9bf3d155-222d-46fc-9867-b55f1df961f7-kube-api-access-n6h62\") pod \"calico-apiserver-6794f4445b-7cftj\" (UID: \"9bf3d155-222d-46fc-9867-b55f1df961f7\") " pod="calico-apiserver/calico-apiserver-6794f4445b-7cftj" Feb 13 19:30:46.555912 kubelet[2600]: I0213 19:30:46.555714 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgkjk\" (UniqueName: \"kubernetes.io/projected/9eba37d3-14ec-4521-9302-789cbdb496aa-kube-api-access-xgkjk\") pod \"calico-apiserver-6794f4445b-q9q8b\" (UID: \"9eba37d3-14ec-4521-9302-789cbdb496aa\") " pod="calico-apiserver/calico-apiserver-6794f4445b-q9q8b" Feb 13 19:30:46.555912 kubelet[2600]: I0213 19:30:46.555824 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f2d2c41c-272d-4c74-897d-79c94986b647-config-volume\") pod \"coredns-6f6b679f8f-gvd66\" (UID: \"f2d2c41c-272d-4c74-897d-79c94986b647\") " pod="kube-system/coredns-6f6b679f8f-gvd66" Feb 13 19:30:46.555912 kubelet[2600]: I0213 19:30:46.555900 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9bf3d155-222d-46fc-9867-b55f1df961f7-calico-apiserver-certs\") pod \"calico-apiserver-6794f4445b-7cftj\" (UID: \"9bf3d155-222d-46fc-9867-b55f1df961f7\") " pod="calico-apiserver/calico-apiserver-6794f4445b-7cftj" Feb 13 19:30:46.556017 kubelet[2600]: I0213 19:30:46.555922 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqvnf\" (UniqueName: \"kubernetes.io/projected/d38faaa4-494c-4c4e-87ff-0a00aa82bf88-kube-api-access-sqvnf\") pod \"calico-kube-controllers-68b8f8cf95-qdbnh\" (UID: \"d38faaa4-494c-4c4e-87ff-0a00aa82bf88\") " pod="calico-system/calico-kube-controllers-68b8f8cf95-qdbnh" Feb 13 19:30:46.556017 kubelet[2600]: I0213 19:30:46.555963 2600 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d38faaa4-494c-4c4e-87ff-0a00aa82bf88-tigera-ca-bundle\") pod \"calico-kube-controllers-68b8f8cf95-qdbnh\" (UID: \"d38faaa4-494c-4c4e-87ff-0a00aa82bf88\") " pod="calico-system/calico-kube-controllers-68b8f8cf95-qdbnh" Feb 13 19:30:46.741672 containerd[1502]: time="2025-02-13T19:30:46.741613178Z" level=info msg="shim disconnected" 
id=d8f2996c31e23a1e061256a37af1fa4d365901c61b53b2b275de586f4b7a88e4 namespace=k8s.io Feb 13 19:30:46.741672 containerd[1502]: time="2025-02-13T19:30:46.741664684Z" level=warning msg="cleaning up after shim disconnected" id=d8f2996c31e23a1e061256a37af1fa4d365901c61b53b2b275de586f4b7a88e4 namespace=k8s.io Feb 13 19:30:46.741672 containerd[1502]: time="2025-02-13T19:30:46.741672910Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:30:46.765801 kubelet[2600]: E0213 19:30:46.765753 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:30:46.777170 kubelet[2600]: E0213 19:30:46.777145 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:30:46.805996 containerd[1502]: time="2025-02-13T19:30:46.805955247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68b8f8cf95-qdbnh,Uid:d38faaa4-494c-4c4e-87ff-0a00aa82bf88,Namespace:calico-system,Attempt:0,}" Feb 13 19:30:46.805996 containerd[1502]: time="2025-02-13T19:30:46.805987688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6794f4445b-q9q8b,Uid:9eba37d3-14ec-4521-9302-789cbdb496aa,Namespace:calico-apiserver,Attempt:0,}" Feb 13 19:30:46.806145 containerd[1502]: time="2025-02-13T19:30:46.805955257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-gvd66,Uid:f2d2c41c-272d-4c74-897d-79c94986b647,Namespace:kube-system,Attempt:0,}" Feb 13 19:30:46.806246 containerd[1502]: time="2025-02-13T19:30:46.805971728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bxg8d,Uid:402f0fa8-6d39-4a67-b618-1d216e220aea,Namespace:kube-system,Attempt:0,}" Feb 13 19:30:46.806475 containerd[1502]: time="2025-02-13T19:30:46.806454076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6794f4445b-7cftj,Uid:9bf3d155-222d-46fc-9867-b55f1df961f7,Namespace:calico-apiserver,Attempt:0,}" Feb 13 19:30:46.952008 kubelet[2600]: E0213 19:30:46.951974 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:30:46.953475 containerd[1502]: time="2025-02-13T19:30:46.952499483Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Feb 13 19:30:47.485868 containerd[1502]: time="2025-02-13T19:30:47.485809101Z" level=error msg="Failed to destroy network for sandbox \"259a6c2041cb9f5c709110e833b792e9227b4e6e9df73f046bd2d34be5e0fa0a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:47.487521 containerd[1502]: time="2025-02-13T19:30:47.486191089Z" level=error msg="encountered an error cleaning up failed sandbox \"259a6c2041cb9f5c709110e833b792e9227b4e6e9df73f046bd2d34be5e0fa0a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:47.487521 containerd[1502]: time="2025-02-13T19:30:47.486263274Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6794f4445b-q9q8b,Uid:9eba37d3-14ec-4521-9302-789cbdb496aa,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"259a6c2041cb9f5c709110e833b792e9227b4e6e9df73f046bd2d34be5e0fa0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:47.488753 kubelet[2600]: E0213 19:30:47.486511 2600 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"259a6c2041cb9f5c709110e833b792e9227b4e6e9df73f046bd2d34be5e0fa0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:47.488753 kubelet[2600]: E0213 19:30:47.486588 2600 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"259a6c2041cb9f5c709110e833b792e9227b4e6e9df73f046bd2d34be5e0fa0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6794f4445b-q9q8b" Feb 13 19:30:47.488753 kubelet[2600]: E0213 19:30:47.486611 2600 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"259a6c2041cb9f5c709110e833b792e9227b4e6e9df73f046bd2d34be5e0fa0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6794f4445b-q9q8b" Feb 13 19:30:47.489397 kubelet[2600]: E0213 19:30:47.486656 2600 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6794f4445b-q9q8b_calico-apiserver(9eba37d3-14ec-4521-9302-789cbdb496aa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6794f4445b-q9q8b_calico-apiserver(9eba37d3-14ec-4521-9302-789cbdb496aa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"259a6c2041cb9f5c709110e833b792e9227b4e6e9df73f046bd2d34be5e0fa0a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6794f4445b-q9q8b" podUID="9eba37d3-14ec-4521-9302-789cbdb496aa" Feb 13 19:30:47.491502 containerd[1502]: time="2025-02-13T19:30:47.491448083Z" level=error msg="Failed to destroy network for sandbox \"9a576abef7e3f4083c481695c9f493c39417d50066da6a1bc9040a8a2fdf6340\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:47.491868 containerd[1502]: time="2025-02-13T19:30:47.491837114Z" level=error msg="encountered an error cleaning up failed sandbox \"9a576abef7e3f4083c481695c9f493c39417d50066da6a1bc9040a8a2fdf6340\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Feb 13 19:30:47.491921 containerd[1502]: time="2025-02-13T19:30:47.491891557Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-gvd66,Uid:f2d2c41c-272d-4c74-897d-79c94986b647,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9a576abef7e3f4083c481695c9f493c39417d50066da6a1bc9040a8a2fdf6340\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:47.492080 kubelet[2600]: E0213 19:30:47.492056 2600 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a576abef7e3f4083c481695c9f493c39417d50066da6a1bc9040a8a2fdf6340\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:47.492140 kubelet[2600]: E0213 19:30:47.492085 2600 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a576abef7e3f4083c481695c9f493c39417d50066da6a1bc9040a8a2fdf6340\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-gvd66" Feb 13 19:30:47.492140 kubelet[2600]: E0213 19:30:47.492101 2600 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a576abef7e3f4083c481695c9f493c39417d50066da6a1bc9040a8a2fdf6340\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-gvd66" Feb 13 19:30:47.492140 kubelet[2600]: E0213 19:30:47.492128 2600 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-gvd66_kube-system(f2d2c41c-272d-4c74-897d-79c94986b647)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-gvd66_kube-system(f2d2c41c-272d-4c74-897d-79c94986b647)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9a576abef7e3f4083c481695c9f493c39417d50066da6a1bc9040a8a2fdf6340\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-gvd66" podUID="f2d2c41c-272d-4c74-897d-79c94986b647" Feb 13 19:30:47.508887 containerd[1502]: time="2025-02-13T19:30:47.508838712Z" level=error msg="Failed to destroy network for sandbox \"197b50055a8d04a83b5b696198cca280193728180dffd10631ed830366a0c893\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:47.509524 containerd[1502]: time="2025-02-13T19:30:47.509416488Z" level=error msg="encountered an error cleaning up failed sandbox \"197b50055a8d04a83b5b696198cca280193728180dffd10631ed830366a0c893\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:47.509524 containerd[1502]: time="2025-02-13T19:30:47.509480639Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bxg8d,Uid:402f0fa8-6d39-4a67-b618-1d216e220aea,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"197b50055a8d04a83b5b696198cca280193728180dffd10631ed830366a0c893\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:47.509876 kubelet[2600]: E0213 19:30:47.509823 2600 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"197b50055a8d04a83b5b696198cca280193728180dffd10631ed830366a0c893\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:47.509945 kubelet[2600]: E0213 19:30:47.509898 2600 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"197b50055a8d04a83b5b696198cca280193728180dffd10631ed830366a0c893\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-bxg8d" Feb 13 19:30:47.509945 kubelet[2600]: E0213 19:30:47.509920 2600 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"197b50055a8d04a83b5b696198cca280193728180dffd10631ed830366a0c893\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-bxg8d" Feb 13 19:30:47.510002 kubelet[2600]: E0213 19:30:47.509958 2600 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-bxg8d_kube-system(402f0fa8-6d39-4a67-b618-1d216e220aea)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-bxg8d_kube-system(402f0fa8-6d39-4a67-b618-1d216e220aea)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"197b50055a8d04a83b5b696198cca280193728180dffd10631ed830366a0c893\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-bxg8d" podUID="402f0fa8-6d39-4a67-b618-1d216e220aea" Feb 13 19:30:47.520789 containerd[1502]: time="2025-02-13T19:30:47.520430537Z" level=error msg="Failed to destroy network for sandbox \"fd20a9c9c675e34be18747d0be53656eb71ddfce7fadca259852046e986978fd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:47.520917 containerd[1502]: time="2025-02-13T19:30:47.520886284Z" level=error msg="encountered an error cleaning up failed sandbox \"fd20a9c9c675e34be18747d0be53656eb71ddfce7fadca259852046e986978fd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:47.520995 containerd[1502]: time="2025-02-13T19:30:47.520970212Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68b8f8cf95-qdbnh,Uid:d38faaa4-494c-4c4e-87ff-0a00aa82bf88,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fd20a9c9c675e34be18747d0be53656eb71ddfce7fadca259852046e986978fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:47.522646 kubelet[2600]: E0213 19:30:47.522544 2600 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd20a9c9c675e34be18747d0be53656eb71ddfce7fadca259852046e986978fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:47.522646 kubelet[2600]: E0213 19:30:47.522640 2600 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd20a9c9c675e34be18747d0be53656eb71ddfce7fadca259852046e986978fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68b8f8cf95-qdbnh" Feb 13 19:30:47.522751 kubelet[2600]: E0213 19:30:47.522660 2600 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd20a9c9c675e34be18747d0be53656eb71ddfce7fadca259852046e986978fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68b8f8cf95-qdbnh" Feb 13 19:30:47.522751 kubelet[2600]: E0213 19:30:47.522703 2600 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-68b8f8cf95-qdbnh_calico-system(d38faaa4-494c-4c4e-87ff-0a00aa82bf88)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-68b8f8cf95-qdbnh_calico-system(d38faaa4-494c-4c4e-87ff-0a00aa82bf88)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fd20a9c9c675e34be18747d0be53656eb71ddfce7fadca259852046e986978fd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-68b8f8cf95-qdbnh" podUID="d38faaa4-494c-4c4e-87ff-0a00aa82bf88" Feb 13 19:30:47.534358 containerd[1502]: time="2025-02-13T19:30:47.534294515Z" level=error msg="Failed to destroy network for sandbox \"b5ee6d231558bb9e6055e52293f90a21d1ac84e5b91aa9ef485efd32a23f0760\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:47.534743 containerd[1502]: time="2025-02-13T19:30:47.534714875Z" level=error msg="encountered an error cleaning up failed sandbox 
\"b5ee6d231558bb9e6055e52293f90a21d1ac84e5b91aa9ef485efd32a23f0760\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:47.534789 containerd[1502]: time="2025-02-13T19:30:47.534774817Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6794f4445b-7cftj,Uid:9bf3d155-222d-46fc-9867-b55f1df961f7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b5ee6d231558bb9e6055e52293f90a21d1ac84e5b91aa9ef485efd32a23f0760\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:47.535015 kubelet[2600]: E0213 19:30:47.534972 2600 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5ee6d231558bb9e6055e52293f90a21d1ac84e5b91aa9ef485efd32a23f0760\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:47.535073 kubelet[2600]: E0213 19:30:47.535040 2600 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5ee6d231558bb9e6055e52293f90a21d1ac84e5b91aa9ef485efd32a23f0760\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6794f4445b-7cftj" Feb 13 19:30:47.535073 kubelet[2600]: E0213 19:30:47.535063 2600 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5ee6d231558bb9e6055e52293f90a21d1ac84e5b91aa9ef485efd32a23f0760\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6794f4445b-7cftj" Feb 13 19:30:47.535133 kubelet[2600]: E0213 19:30:47.535108 2600 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6794f4445b-7cftj_calico-apiserver(9bf3d155-222d-46fc-9867-b55f1df961f7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6794f4445b-7cftj_calico-apiserver(9bf3d155-222d-46fc-9867-b55f1df961f7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b5ee6d231558bb9e6055e52293f90a21d1ac84e5b91aa9ef485efd32a23f0760\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6794f4445b-7cftj" podUID="9bf3d155-222d-46fc-9867-b55f1df961f7" Feb 13 19:30:47.824517 systemd[1]: Created slice kubepods-besteffort-pod61978848_76c7_4692_bab4_3c8c891d5468.slice - libcontainer container kubepods-besteffort-pod61978848_76c7_4692_bab4_3c8c891d5468.slice. 
Feb 13 19:30:47.827058 containerd[1502]: time="2025-02-13T19:30:47.827015421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cfdf6,Uid:61978848-76c7-4692-bab4-3c8c891d5468,Namespace:calico-system,Attempt:0,}" Feb 13 19:30:47.955656 kubelet[2600]: I0213 19:30:47.955607 2600 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="259a6c2041cb9f5c709110e833b792e9227b4e6e9df73f046bd2d34be5e0fa0a" Feb 13 19:30:47.957188 containerd[1502]: time="2025-02-13T19:30:47.956449779Z" level=info msg="StopPodSandbox for \"259a6c2041cb9f5c709110e833b792e9227b4e6e9df73f046bd2d34be5e0fa0a\"" Feb 13 19:30:47.957188 containerd[1502]: time="2025-02-13T19:30:47.957008319Z" level=info msg="Ensure that sandbox 259a6c2041cb9f5c709110e833b792e9227b4e6e9df73f046bd2d34be5e0fa0a in task-service has been cleanup successfully" Feb 13 19:30:47.957438 containerd[1502]: time="2025-02-13T19:30:47.957395968Z" level=info msg="TearDown network for sandbox \"259a6c2041cb9f5c709110e833b792e9227b4e6e9df73f046bd2d34be5e0fa0a\" successfully" Feb 13 19:30:47.957535 containerd[1502]: time="2025-02-13T19:30:47.957484785Z" level=info msg="StopPodSandbox for \"259a6c2041cb9f5c709110e833b792e9227b4e6e9df73f046bd2d34be5e0fa0a\" returns successfully" Feb 13 19:30:47.958699 kubelet[2600]: I0213 19:30:47.958668 2600 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd20a9c9c675e34be18747d0be53656eb71ddfce7fadca259852046e986978fd" Feb 13 19:30:47.958796 containerd[1502]: time="2025-02-13T19:30:47.958774870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6794f4445b-q9q8b,Uid:9eba37d3-14ec-4521-9302-789cbdb496aa,Namespace:calico-apiserver,Attempt:1,}" Feb 13 19:30:47.959443 containerd[1502]: time="2025-02-13T19:30:47.959396909Z" level=info msg="StopPodSandbox for \"fd20a9c9c675e34be18747d0be53656eb71ddfce7fadca259852046e986978fd\"" Feb 13 19:30:47.959860 containerd[1502]: time="2025-02-13T19:30:47.959832449Z" level=info msg="Ensure that sandbox fd20a9c9c675e34be18747d0be53656eb71ddfce7fadca259852046e986978fd in task-service has been cleanup successfully" Feb 13 19:30:47.960118 containerd[1502]: time="2025-02-13T19:30:47.960071719Z" level=info msg="TearDown network for sandbox \"fd20a9c9c675e34be18747d0be53656eb71ddfce7fadca259852046e986978fd\" successfully" Feb 13 19:30:47.960118 containerd[1502]: time="2025-02-13T19:30:47.960094432Z" level=info msg="StopPodSandbox for \"fd20a9c9c675e34be18747d0be53656eb71ddfce7fadca259852046e986978fd\" returns successfully" Feb 13 19:30:47.961017 kubelet[2600]: I0213 19:30:47.960671 2600 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5ee6d231558bb9e6055e52293f90a21d1ac84e5b91aa9ef485efd32a23f0760" Feb 13 19:30:47.961068 containerd[1502]: time="2025-02-13T19:30:47.960754553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68b8f8cf95-qdbnh,Uid:d38faaa4-494c-4c4e-87ff-0a00aa82bf88,Namespace:calico-system,Attempt:1,}" Feb 13 19:30:47.961281 containerd[1502]: time="2025-02-13T19:30:47.961243622Z" level=info msg="StopPodSandbox for \"b5ee6d231558bb9e6055e52293f90a21d1ac84e5b91aa9ef485efd32a23f0760\"" Feb 13 19:30:47.961667 containerd[1502]: time="2025-02-13T19:30:47.961645978Z" level=info msg="Ensure that sandbox b5ee6d231558bb9e6055e52293f90a21d1ac84e5b91aa9ef485efd32a23f0760 in task-service has been cleanup successfully" Feb 13 19:30:47.962098 containerd[1502]: time="2025-02-13T19:30:47.961993712Z" level=info msg="TearDown 
network for sandbox \"b5ee6d231558bb9e6055e52293f90a21d1ac84e5b91aa9ef485efd32a23f0760\" successfully" Feb 13 19:30:47.962098 containerd[1502]: time="2025-02-13T19:30:47.962010474Z" level=info msg="StopPodSandbox for \"b5ee6d231558bb9e6055e52293f90a21d1ac84e5b91aa9ef485efd32a23f0760\" returns successfully" Feb 13 19:30:47.963832 containerd[1502]: time="2025-02-13T19:30:47.963519652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6794f4445b-7cftj,Uid:9bf3d155-222d-46fc-9867-b55f1df961f7,Namespace:calico-apiserver,Attempt:1,}" Feb 13 19:30:47.966010 kubelet[2600]: I0213 19:30:47.965977 2600 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="197b50055a8d04a83b5b696198cca280193728180dffd10631ed830366a0c893" Feb 13 19:30:47.966495 containerd[1502]: time="2025-02-13T19:30:47.966464869Z" level=info msg="StopPodSandbox for \"197b50055a8d04a83b5b696198cca280193728180dffd10631ed830366a0c893\"" Feb 13 19:30:47.966694 containerd[1502]: time="2025-02-13T19:30:47.966670937Z" level=info msg="Ensure that sandbox 197b50055a8d04a83b5b696198cca280193728180dffd10631ed830366a0c893 in task-service has been cleanup successfully" Feb 13 19:30:47.967137 containerd[1502]: time="2025-02-13T19:30:47.967042145Z" level=info msg="TearDown network for sandbox \"197b50055a8d04a83b5b696198cca280193728180dffd10631ed830366a0c893\" successfully" Feb 13 19:30:47.967137 containerd[1502]: time="2025-02-13T19:30:47.967064737Z" level=info msg="StopPodSandbox for \"197b50055a8d04a83b5b696198cca280193728180dffd10631ed830366a0c893\" returns successfully" Feb 13 19:30:47.967279 kubelet[2600]: E0213 19:30:47.967235 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:30:47.967521 kubelet[2600]: I0213 19:30:47.967494 2600 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a576abef7e3f4083c481695c9f493c39417d50066da6a1bc9040a8a2fdf6340" Feb 13 19:30:47.967600 containerd[1502]: time="2025-02-13T19:30:47.967490157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bxg8d,Uid:402f0fa8-6d39-4a67-b618-1d216e220aea,Namespace:kube-system,Attempt:1,}" Feb 13 19:30:47.968246 containerd[1502]: time="2025-02-13T19:30:47.967864511Z" level=info msg="StopPodSandbox for \"9a576abef7e3f4083c481695c9f493c39417d50066da6a1bc9040a8a2fdf6340\"" Feb 13 19:30:47.968246 containerd[1502]: time="2025-02-13T19:30:47.968072632Z" level=info msg="Ensure that sandbox 9a576abef7e3f4083c481695c9f493c39417d50066da6a1bc9040a8a2fdf6340 in task-service has been cleanup successfully" Feb 13 19:30:47.968498 containerd[1502]: time="2025-02-13T19:30:47.968480519Z" level=info msg="TearDown network for sandbox \"9a576abef7e3f4083c481695c9f493c39417d50066da6a1bc9040a8a2fdf6340\" successfully" Feb 13 19:30:47.968624 containerd[1502]: time="2025-02-13T19:30:47.968598921Z" level=info msg="StopPodSandbox for \"9a576abef7e3f4083c481695c9f493c39417d50066da6a1bc9040a8a2fdf6340\" returns successfully" Feb 13 19:30:47.968870 kubelet[2600]: E0213 19:30:47.968745 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:30:47.969521 containerd[1502]: time="2025-02-13T19:30:47.969496500Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-6f6b679f8f-gvd66,Uid:f2d2c41c-272d-4c74-897d-79c94986b647,Namespace:kube-system,Attempt:1,}" Feb 13 19:30:47.992721 containerd[1502]: time="2025-02-13T19:30:47.992662928Z" level=error msg="Failed to destroy network for sandbox \"4d57794a877b0b6953752ccedec2bd85d68373c09318c76ac2c537449d27f46f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:47.993071 containerd[1502]: time="2025-02-13T19:30:47.993041600Z" level=error msg="encountered an error cleaning up failed sandbox \"4d57794a877b0b6953752ccedec2bd85d68373c09318c76ac2c537449d27f46f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:47.993125 containerd[1502]: time="2025-02-13T19:30:47.993103997Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cfdf6,Uid:61978848-76c7-4692-bab4-3c8c891d5468,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4d57794a877b0b6953752ccedec2bd85d68373c09318c76ac2c537449d27f46f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:47.993374 kubelet[2600]: E0213 19:30:47.993328 2600 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d57794a877b0b6953752ccedec2bd85d68373c09318c76ac2c537449d27f46f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:47.993500 kubelet[2600]: E0213 19:30:47.993390 2600 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d57794a877b0b6953752ccedec2bd85d68373c09318c76ac2c537449d27f46f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cfdf6" Feb 13 19:30:47.993500 kubelet[2600]: E0213 19:30:47.993419 2600 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d57794a877b0b6953752ccedec2bd85d68373c09318c76ac2c537449d27f46f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cfdf6" Feb 13 19:30:47.993500 kubelet[2600]: E0213 19:30:47.993464 2600 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-cfdf6_calico-system(61978848-76c7-4692-bab4-3c8c891d5468)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-cfdf6_calico-system(61978848-76c7-4692-bab4-3c8c891d5468)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4d57794a877b0b6953752ccedec2bd85d68373c09318c76ac2c537449d27f46f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cfdf6" podUID="61978848-76c7-4692-bab4-3c8c891d5468" Feb 13 19:30:48.342171 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-197b50055a8d04a83b5b696198cca280193728180dffd10631ed830366a0c893-shm.mount: Deactivated successfully. Feb 13 19:30:48.342286 systemd[1]: run-netns-cni\x2d341713fa\x2df5a4\x2d6109\x2db05d\x2d393dd0cb96a2.mount: Deactivated successfully. Feb 13 19:30:48.342385 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9a576abef7e3f4083c481695c9f493c39417d50066da6a1bc9040a8a2fdf6340-shm.mount: Deactivated successfully. Feb 13 19:30:48.342462 systemd[1]: run-netns-cni\x2d4f4859d9\x2df40d\x2dcfce\x2d4577\x2d0ad7e2eed673.mount: Deactivated successfully. Feb 13 19:30:48.342531 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-259a6c2041cb9f5c709110e833b792e9227b4e6e9df73f046bd2d34be5e0fa0a-shm.mount: Deactivated successfully. Feb 13 19:30:48.755566 containerd[1502]: time="2025-02-13T19:30:48.755511567Z" level=error msg="Failed to destroy network for sandbox \"029de7693a9f8fae3fc5f15fc92d9b4380854aa9a9856579c8e3ed5055510b90\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:48.756013 containerd[1502]: time="2025-02-13T19:30:48.755979798Z" level=error msg="encountered an error cleaning up failed sandbox \"029de7693a9f8fae3fc5f15fc92d9b4380854aa9a9856579c8e3ed5055510b90\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:48.756088 containerd[1502]: time="2025-02-13T19:30:48.756057984Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6794f4445b-q9q8b,Uid:9eba37d3-14ec-4521-9302-789cbdb496aa,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"029de7693a9f8fae3fc5f15fc92d9b4380854aa9a9856579c8e3ed5055510b90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:48.756401 kubelet[2600]: E0213 19:30:48.756353 2600 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"029de7693a9f8fae3fc5f15fc92d9b4380854aa9a9856579c8e3ed5055510b90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:48.756750 kubelet[2600]: E0213 19:30:48.756424 2600 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"029de7693a9f8fae3fc5f15fc92d9b4380854aa9a9856579c8e3ed5055510b90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6794f4445b-q9q8b" Feb 13 19:30:48.756750 kubelet[2600]: E0213 19:30:48.756449 2600 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"029de7693a9f8fae3fc5f15fc92d9b4380854aa9a9856579c8e3ed5055510b90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6794f4445b-q9q8b" Feb 13 19:30:48.756750 kubelet[2600]: E0213 19:30:48.756494 2600 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6794f4445b-q9q8b_calico-apiserver(9eba37d3-14ec-4521-9302-789cbdb496aa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6794f4445b-q9q8b_calico-apiserver(9eba37d3-14ec-4521-9302-789cbdb496aa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"029de7693a9f8fae3fc5f15fc92d9b4380854aa9a9856579c8e3ed5055510b90\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6794f4445b-q9q8b" podUID="9eba37d3-14ec-4521-9302-789cbdb496aa" Feb 13 19:30:48.758010 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-029de7693a9f8fae3fc5f15fc92d9b4380854aa9a9856579c8e3ed5055510b90-shm.mount: Deactivated successfully. Feb 13 19:30:48.944012 containerd[1502]: time="2025-02-13T19:30:48.943953656Z" level=error msg="Failed to destroy network for sandbox \"1deac3e8f1315485d945ed4d7ff726c16660ae79ed6279b69ac45afcd92e7fed\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:48.944412 containerd[1502]: time="2025-02-13T19:30:48.944387021Z" level=error msg="encountered an error cleaning up failed sandbox \"1deac3e8f1315485d945ed4d7ff726c16660ae79ed6279b69ac45afcd92e7fed\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:48.944520 containerd[1502]: time="2025-02-13T19:30:48.944447184Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bxg8d,Uid:402f0fa8-6d39-4a67-b618-1d216e220aea,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"1deac3e8f1315485d945ed4d7ff726c16660ae79ed6279b69ac45afcd92e7fed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:48.944763 kubelet[2600]: E0213 19:30:48.944691 2600 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1deac3e8f1315485d945ed4d7ff726c16660ae79ed6279b69ac45afcd92e7fed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:48.944815 kubelet[2600]: E0213 19:30:48.944782 2600 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1deac3e8f1315485d945ed4d7ff726c16660ae79ed6279b69ac45afcd92e7fed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-bxg8d" Feb 13 19:30:48.944815 kubelet[2600]: E0213 19:30:48.944803 2600 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1deac3e8f1315485d945ed4d7ff726c16660ae79ed6279b69ac45afcd92e7fed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-bxg8d" Feb 13 19:30:48.944868 kubelet[2600]: E0213 19:30:48.944845 2600 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-bxg8d_kube-system(402f0fa8-6d39-4a67-b618-1d216e220aea)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-bxg8d_kube-system(402f0fa8-6d39-4a67-b618-1d216e220aea)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1deac3e8f1315485d945ed4d7ff726c16660ae79ed6279b69ac45afcd92e7fed\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-bxg8d" podUID="402f0fa8-6d39-4a67-b618-1d216e220aea" Feb 13 19:30:48.970222 kubelet[2600]: I0213 19:30:48.970184 2600 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d57794a877b0b6953752ccedec2bd85d68373c09318c76ac2c537449d27f46f" Feb 13 19:30:48.971258 containerd[1502]: time="2025-02-13T19:30:48.970758568Z" level=info msg="StopPodSandbox for \"4d57794a877b0b6953752ccedec2bd85d68373c09318c76ac2c537449d27f46f\"" Feb 13 19:30:48.971258 containerd[1502]: time="2025-02-13T19:30:48.970982128Z" level=info msg="Ensure that sandbox 4d57794a877b0b6953752ccedec2bd85d68373c09318c76ac2c537449d27f46f in task-service has been cleanup successfully" Feb 13 19:30:48.971577 containerd[1502]: time="2025-02-13T19:30:48.971553282Z" level=info msg="TearDown network for sandbox \"4d57794a877b0b6953752ccedec2bd85d68373c09318c76ac2c537449d27f46f\" successfully" Feb 13 19:30:48.971662 containerd[1502]: time="2025-02-13T19:30:48.971648310Z" level=info msg="StopPodSandbox for \"4d57794a877b0b6953752ccedec2bd85d68373c09318c76ac2c537449d27f46f\" returns successfully" Feb 13 19:30:48.972828 containerd[1502]: time="2025-02-13T19:30:48.972497016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cfdf6,Uid:61978848-76c7-4692-bab4-3c8c891d5468,Namespace:calico-system,Attempt:1,}" Feb 13 19:30:48.974145 kubelet[2600]: I0213 19:30:48.973042 2600 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="029de7693a9f8fae3fc5f15fc92d9b4380854aa9a9856579c8e3ed5055510b90" Feb 13 19:30:48.974790 containerd[1502]: time="2025-02-13T19:30:48.974746314Z" level=info msg="StopPodSandbox for \"029de7693a9f8fae3fc5f15fc92d9b4380854aa9a9856579c8e3ed5055510b90\"" Feb 13 19:30:48.975015 containerd[1502]: time="2025-02-13T19:30:48.974995483Z" level=info msg="Ensure that sandbox 029de7693a9f8fae3fc5f15fc92d9b4380854aa9a9856579c8e3ed5055510b90 in task-service has been cleanup successfully" Feb 13 19:30:48.975997 containerd[1502]: time="2025-02-13T19:30:48.975973842Z" level=info msg="TearDown network for sandbox \"029de7693a9f8fae3fc5f15fc92d9b4380854aa9a9856579c8e3ed5055510b90\" successfully" Feb 13 19:30:48.976105 containerd[1502]: 
time="2025-02-13T19:30:48.976056517Z" level=info msg="StopPodSandbox for \"029de7693a9f8fae3fc5f15fc92d9b4380854aa9a9856579c8e3ed5055510b90\" returns successfully" Feb 13 19:30:48.977049 kubelet[2600]: I0213 19:30:48.976720 2600 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1deac3e8f1315485d945ed4d7ff726c16660ae79ed6279b69ac45afcd92e7fed" Feb 13 19:30:48.977258 containerd[1502]: time="2025-02-13T19:30:48.977233491Z" level=info msg="StopPodSandbox for \"259a6c2041cb9f5c709110e833b792e9227b4e6e9df73f046bd2d34be5e0fa0a\"" Feb 13 19:30:48.977383 containerd[1502]: time="2025-02-13T19:30:48.977358866Z" level=info msg="StopPodSandbox for \"1deac3e8f1315485d945ed4d7ff726c16660ae79ed6279b69ac45afcd92e7fed\"" Feb 13 19:30:48.977476 containerd[1502]: time="2025-02-13T19:30:48.977460948Z" level=info msg="TearDown network for sandbox \"259a6c2041cb9f5c709110e833b792e9227b4e6e9df73f046bd2d34be5e0fa0a\" successfully" Feb 13 19:30:48.977529 containerd[1502]: time="2025-02-13T19:30:48.977518105Z" level=info msg="StopPodSandbox for \"259a6c2041cb9f5c709110e833b792e9227b4e6e9df73f046bd2d34be5e0fa0a\" returns successfully" Feb 13 19:30:48.977590 containerd[1502]: time="2025-02-13T19:30:48.977566486Z" level=info msg="Ensure that sandbox 1deac3e8f1315485d945ed4d7ff726c16660ae79ed6279b69ac45afcd92e7fed in task-service has been cleanup successfully" Feb 13 19:30:48.977871 containerd[1502]: time="2025-02-13T19:30:48.977782813Z" level=info msg="TearDown network for sandbox \"1deac3e8f1315485d945ed4d7ff726c16660ae79ed6279b69ac45afcd92e7fed\" successfully" Feb 13 19:30:48.977871 containerd[1502]: time="2025-02-13T19:30:48.977835111Z" level=info msg="StopPodSandbox for \"1deac3e8f1315485d945ed4d7ff726c16660ae79ed6279b69ac45afcd92e7fed\" returns successfully" Feb 13 19:30:48.978102 containerd[1502]: time="2025-02-13T19:30:48.978082677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6794f4445b-q9q8b,Uid:9eba37d3-14ec-4521-9302-789cbdb496aa,Namespace:calico-apiserver,Attempt:2,}" Feb 13 19:30:48.978620 containerd[1502]: time="2025-02-13T19:30:48.978503378Z" level=info msg="StopPodSandbox for \"197b50055a8d04a83b5b696198cca280193728180dffd10631ed830366a0c893\"" Feb 13 19:30:48.978693 containerd[1502]: time="2025-02-13T19:30:48.978678256Z" level=info msg="TearDown network for sandbox \"197b50055a8d04a83b5b696198cca280193728180dffd10631ed830366a0c893\" successfully" Feb 13 19:30:48.978693 containerd[1502]: time="2025-02-13T19:30:48.978688255Z" level=info msg="StopPodSandbox for \"197b50055a8d04a83b5b696198cca280193728180dffd10631ed830366a0c893\" returns successfully" Feb 13 19:30:48.978873 kubelet[2600]: E0213 19:30:48.978853 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:30:48.979135 containerd[1502]: time="2025-02-13T19:30:48.979087485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bxg8d,Uid:402f0fa8-6d39-4a67-b618-1d216e220aea,Namespace:kube-system,Attempt:2,}" Feb 13 19:30:48.998510 containerd[1502]: time="2025-02-13T19:30:48.998456418Z" level=error msg="Failed to destroy network for sandbox \"d128712c7ee019c611fbc612ab74775c9fe61e0291a6f84232248274b76f860a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:48.998904 containerd[1502]: 
time="2025-02-13T19:30:48.998870366Z" level=error msg="encountered an error cleaning up failed sandbox \"d128712c7ee019c611fbc612ab74775c9fe61e0291a6f84232248274b76f860a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:48.998969 containerd[1502]: time="2025-02-13T19:30:48.998941810Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6794f4445b-7cftj,Uid:9bf3d155-222d-46fc-9867-b55f1df961f7,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"d128712c7ee019c611fbc612ab74775c9fe61e0291a6f84232248274b76f860a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:48.999345 kubelet[2600]: E0213 19:30:48.999247 2600 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d128712c7ee019c611fbc612ab74775c9fe61e0291a6f84232248274b76f860a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:48.999611 kubelet[2600]: E0213 19:30:48.999460 2600 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d128712c7ee019c611fbc612ab74775c9fe61e0291a6f84232248274b76f860a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6794f4445b-7cftj" Feb 13 19:30:48.999611 kubelet[2600]: E0213 19:30:48.999495 2600 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d128712c7ee019c611fbc612ab74775c9fe61e0291a6f84232248274b76f860a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6794f4445b-7cftj" Feb 13 19:30:48.999611 kubelet[2600]: E0213 19:30:48.999554 2600 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6794f4445b-7cftj_calico-apiserver(9bf3d155-222d-46fc-9867-b55f1df961f7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6794f4445b-7cftj_calico-apiserver(9bf3d155-222d-46fc-9867-b55f1df961f7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d128712c7ee019c611fbc612ab74775c9fe61e0291a6f84232248274b76f860a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6794f4445b-7cftj" podUID="9bf3d155-222d-46fc-9867-b55f1df961f7" Feb 13 19:30:49.027130 containerd[1502]: time="2025-02-13T19:30:49.026911865Z" level=error msg="Failed to destroy network for sandbox \"9a99dde00b5a0c5753dc22e79a23fa5cb756241ff3d7fe4eed74bd21fcf14d6f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:49.027778 containerd[1502]: time="2025-02-13T19:30:49.027557178Z" level=error msg="encountered an error cleaning up failed sandbox \"9a99dde00b5a0c5753dc22e79a23fa5cb756241ff3d7fe4eed74bd21fcf14d6f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:49.028499 containerd[1502]: time="2025-02-13T19:30:49.028459795Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68b8f8cf95-qdbnh,Uid:d38faaa4-494c-4c4e-87ff-0a00aa82bf88,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"9a99dde00b5a0c5753dc22e79a23fa5cb756241ff3d7fe4eed74bd21fcf14d6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:49.028835 kubelet[2600]: E0213 19:30:49.028801 2600 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a99dde00b5a0c5753dc22e79a23fa5cb756241ff3d7fe4eed74bd21fcf14d6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:49.028906 kubelet[2600]: E0213 19:30:49.028864 2600 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a99dde00b5a0c5753dc22e79a23fa5cb756241ff3d7fe4eed74bd21fcf14d6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68b8f8cf95-qdbnh" Feb 13 19:30:49.028906 kubelet[2600]: E0213 19:30:49.028887 2600 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a99dde00b5a0c5753dc22e79a23fa5cb756241ff3d7fe4eed74bd21fcf14d6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68b8f8cf95-qdbnh" Feb 13 19:30:49.028988 kubelet[2600]: E0213 19:30:49.028938 2600 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-68b8f8cf95-qdbnh_calico-system(d38faaa4-494c-4c4e-87ff-0a00aa82bf88)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-68b8f8cf95-qdbnh_calico-system(d38faaa4-494c-4c4e-87ff-0a00aa82bf88)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9a99dde00b5a0c5753dc22e79a23fa5cb756241ff3d7fe4eed74bd21fcf14d6f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-68b8f8cf95-qdbnh" podUID="d38faaa4-494c-4c4e-87ff-0a00aa82bf88" Feb 13 19:30:49.052764 containerd[1502]: time="2025-02-13T19:30:49.052708383Z" level=error msg="Failed to destroy network for sandbox 
\"919def795fcabe325c259e12ac19e82f4306388eba272371855df017ef8e2d5d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:49.053209 containerd[1502]: time="2025-02-13T19:30:49.053159321Z" level=error msg="encountered an error cleaning up failed sandbox \"919def795fcabe325c259e12ac19e82f4306388eba272371855df017ef8e2d5d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:49.053278 containerd[1502]: time="2025-02-13T19:30:49.053242418Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-gvd66,Uid:f2d2c41c-272d-4c74-897d-79c94986b647,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"919def795fcabe325c259e12ac19e82f4306388eba272371855df017ef8e2d5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:49.053552 kubelet[2600]: E0213 19:30:49.053509 2600 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"919def795fcabe325c259e12ac19e82f4306388eba272371855df017ef8e2d5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:49.053599 kubelet[2600]: E0213 19:30:49.053569 2600 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"919def795fcabe325c259e12ac19e82f4306388eba272371855df017ef8e2d5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-gvd66" Feb 13 19:30:49.053599 kubelet[2600]: E0213 19:30:49.053594 2600 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"919def795fcabe325c259e12ac19e82f4306388eba272371855df017ef8e2d5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-gvd66" Feb 13 19:30:49.053661 kubelet[2600]: E0213 19:30:49.053634 2600 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-gvd66_kube-system(f2d2c41c-272d-4c74-897d-79c94986b647)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-gvd66_kube-system(f2d2c41c-272d-4c74-897d-79c94986b647)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"919def795fcabe325c259e12ac19e82f4306388eba272371855df017ef8e2d5d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-gvd66" podUID="f2d2c41c-272d-4c74-897d-79c94986b647" Feb 13 19:30:49.342936 systemd[1]: 
run-netns-cni\x2d421d35ed\x2d5a05\x2d779e\x2d7bdb\x2d11e513ad8981.mount: Deactivated successfully. Feb 13 19:30:49.343048 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1deac3e8f1315485d945ed4d7ff726c16660ae79ed6279b69ac45afcd92e7fed-shm.mount: Deactivated successfully. Feb 13 19:30:49.343134 systemd[1]: run-netns-cni\x2d02483d21\x2d02d2\x2da65b\x2dbe64\x2d43803a01658a.mount: Deactivated successfully. Feb 13 19:30:49.343222 systemd[1]: run-netns-cni\x2dfb9bba81\x2d8d15\x2d1656\x2d08ef\x2d2cb258f9fb55.mount: Deactivated successfully. Feb 13 19:30:49.642351 containerd[1502]: time="2025-02-13T19:30:49.642198638Z" level=error msg="Failed to destroy network for sandbox \"a615d24e84e74129058bc9c7cb028777bb16a481796829c1a98ceaf03287d6c5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:49.642949 containerd[1502]: time="2025-02-13T19:30:49.642859961Z" level=error msg="encountered an error cleaning up failed sandbox \"a615d24e84e74129058bc9c7cb028777bb16a481796829c1a98ceaf03287d6c5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:49.642998 containerd[1502]: time="2025-02-13T19:30:49.642980839Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bxg8d,Uid:402f0fa8-6d39-4a67-b618-1d216e220aea,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"a615d24e84e74129058bc9c7cb028777bb16a481796829c1a98ceaf03287d6c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:49.643403 kubelet[2600]: E0213 19:30:49.643283 2600 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a615d24e84e74129058bc9c7cb028777bb16a481796829c1a98ceaf03287d6c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:49.643472 kubelet[2600]: E0213 19:30:49.643437 2600 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a615d24e84e74129058bc9c7cb028777bb16a481796829c1a98ceaf03287d6c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-bxg8d" Feb 13 19:30:49.643523 kubelet[2600]: E0213 19:30:49.643467 2600 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a615d24e84e74129058bc9c7cb028777bb16a481796829c1a98ceaf03287d6c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-bxg8d" Feb 13 19:30:49.643610 kubelet[2600]: E0213 19:30:49.643552 2600 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-6f6b679f8f-bxg8d_kube-system(402f0fa8-6d39-4a67-b618-1d216e220aea)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-bxg8d_kube-system(402f0fa8-6d39-4a67-b618-1d216e220aea)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a615d24e84e74129058bc9c7cb028777bb16a481796829c1a98ceaf03287d6c5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-bxg8d" podUID="402f0fa8-6d39-4a67-b618-1d216e220aea" Feb 13 19:30:49.682605 containerd[1502]: time="2025-02-13T19:30:49.682555746Z" level=error msg="Failed to destroy network for sandbox \"06b76013e050990e084073338277aecc1ce3e9a19060a462af91ada291c27c21\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:49.683068 containerd[1502]: time="2025-02-13T19:30:49.683038714Z" level=error msg="encountered an error cleaning up failed sandbox \"06b76013e050990e084073338277aecc1ce3e9a19060a462af91ada291c27c21\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:49.683131 containerd[1502]: time="2025-02-13T19:30:49.683114046Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cfdf6,Uid:61978848-76c7-4692-bab4-3c8c891d5468,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"06b76013e050990e084073338277aecc1ce3e9a19060a462af91ada291c27c21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:49.683442 kubelet[2600]: E0213 19:30:49.683392 2600 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06b76013e050990e084073338277aecc1ce3e9a19060a462af91ada291c27c21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:49.683510 kubelet[2600]: E0213 19:30:49.683479 2600 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06b76013e050990e084073338277aecc1ce3e9a19060a462af91ada291c27c21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cfdf6" Feb 13 19:30:49.683510 kubelet[2600]: E0213 19:30:49.683500 2600 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06b76013e050990e084073338277aecc1ce3e9a19060a462af91ada291c27c21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cfdf6" Feb 13 19:30:49.683568 kubelet[2600]: E0213 19:30:49.683542 2600 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-cfdf6_calico-system(61978848-76c7-4692-bab4-3c8c891d5468)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-cfdf6_calico-system(61978848-76c7-4692-bab4-3c8c891d5468)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"06b76013e050990e084073338277aecc1ce3e9a19060a462af91ada291c27c21\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cfdf6" podUID="61978848-76c7-4692-bab4-3c8c891d5468" Feb 13 19:30:49.778666 containerd[1502]: time="2025-02-13T19:30:49.778565952Z" level=error msg="Failed to destroy network for sandbox \"92aa5e65cd130c3a58ac37b4adf370bc37fd5b279afabfde37dbc8bf70e38b6a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:49.779016 containerd[1502]: time="2025-02-13T19:30:49.778977565Z" level=error msg="encountered an error cleaning up failed sandbox \"92aa5e65cd130c3a58ac37b4adf370bc37fd5b279afabfde37dbc8bf70e38b6a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:49.779070 containerd[1502]: time="2025-02-13T19:30:49.779039051Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6794f4445b-q9q8b,Uid:9eba37d3-14ec-4521-9302-789cbdb496aa,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"92aa5e65cd130c3a58ac37b4adf370bc37fd5b279afabfde37dbc8bf70e38b6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:49.779367 kubelet[2600]: E0213 19:30:49.779298 2600 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92aa5e65cd130c3a58ac37b4adf370bc37fd5b279afabfde37dbc8bf70e38b6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:49.779910 kubelet[2600]: E0213 19:30:49.779385 2600 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92aa5e65cd130c3a58ac37b4adf370bc37fd5b279afabfde37dbc8bf70e38b6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6794f4445b-q9q8b" Feb 13 19:30:49.779910 kubelet[2600]: E0213 19:30:49.779408 2600 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92aa5e65cd130c3a58ac37b4adf370bc37fd5b279afabfde37dbc8bf70e38b6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6794f4445b-q9q8b" Feb 13 19:30:49.779910 
kubelet[2600]: E0213 19:30:49.779460 2600 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6794f4445b-q9q8b_calico-apiserver(9eba37d3-14ec-4521-9302-789cbdb496aa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6794f4445b-q9q8b_calico-apiserver(9eba37d3-14ec-4521-9302-789cbdb496aa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"92aa5e65cd130c3a58ac37b4adf370bc37fd5b279afabfde37dbc8bf70e38b6a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6794f4445b-q9q8b" podUID="9eba37d3-14ec-4521-9302-789cbdb496aa" Feb 13 19:30:49.979597 kubelet[2600]: I0213 19:30:49.979566 2600 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a615d24e84e74129058bc9c7cb028777bb16a481796829c1a98ceaf03287d6c5" Feb 13 19:30:49.980104 containerd[1502]: time="2025-02-13T19:30:49.980068276Z" level=info msg="StopPodSandbox for \"a615d24e84e74129058bc9c7cb028777bb16a481796829c1a98ceaf03287d6c5\"" Feb 13 19:30:49.980713 containerd[1502]: time="2025-02-13T19:30:49.980691067Z" level=info msg="Ensure that sandbox a615d24e84e74129058bc9c7cb028777bb16a481796829c1a98ceaf03287d6c5 in task-service has been cleanup successfully" Feb 13 19:30:49.980987 containerd[1502]: time="2025-02-13T19:30:49.980957207Z" level=info msg="TearDown network for sandbox \"a615d24e84e74129058bc9c7cb028777bb16a481796829c1a98ceaf03287d6c5\" successfully" Feb 13 19:30:49.980987 containerd[1502]: time="2025-02-13T19:30:49.980973197Z" level=info msg="StopPodSandbox for \"a615d24e84e74129058bc9c7cb028777bb16a481796829c1a98ceaf03287d6c5\" returns successfully" Feb 13 19:30:49.981138 kubelet[2600]: I0213 19:30:49.981047 2600 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="919def795fcabe325c259e12ac19e82f4306388eba272371855df017ef8e2d5d" Feb 13 19:30:49.981287 containerd[1502]: time="2025-02-13T19:30:49.981247482Z" level=info msg="StopPodSandbox for \"1deac3e8f1315485d945ed4d7ff726c16660ae79ed6279b69ac45afcd92e7fed\"" Feb 13 19:30:49.981549 containerd[1502]: time="2025-02-13T19:30:49.981427180Z" level=info msg="TearDown network for sandbox \"1deac3e8f1315485d945ed4d7ff726c16660ae79ed6279b69ac45afcd92e7fed\" successfully" Feb 13 19:30:49.981549 containerd[1502]: time="2025-02-13T19:30:49.981449342Z" level=info msg="StopPodSandbox for \"1deac3e8f1315485d945ed4d7ff726c16660ae79ed6279b69ac45afcd92e7fed\" returns successfully" Feb 13 19:30:49.981914 containerd[1502]: time="2025-02-13T19:30:49.981772629Z" level=info msg="StopPodSandbox for \"197b50055a8d04a83b5b696198cca280193728180dffd10631ed830366a0c893\"" Feb 13 19:30:49.981914 containerd[1502]: time="2025-02-13T19:30:49.981813937Z" level=info msg="StopPodSandbox for \"919def795fcabe325c259e12ac19e82f4306388eba272371855df017ef8e2d5d\"" Feb 13 19:30:49.981914 containerd[1502]: time="2025-02-13T19:30:49.981857018Z" level=info msg="TearDown network for sandbox \"197b50055a8d04a83b5b696198cca280193728180dffd10631ed830366a0c893\" successfully" Feb 13 19:30:49.981914 containerd[1502]: time="2025-02-13T19:30:49.981867378Z" level=info msg="StopPodSandbox for \"197b50055a8d04a83b5b696198cca280193728180dffd10631ed830366a0c893\" returns successfully" Feb 13 19:30:49.982138 containerd[1502]: time="2025-02-13T19:30:49.982011548Z" level=info msg="Ensure that 
sandbox 919def795fcabe325c259e12ac19e82f4306388eba272371855df017ef8e2d5d in task-service has been cleanup successfully" Feb 13 19:30:49.982465 kubelet[2600]: E0213 19:30:49.982306 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:30:49.982592 containerd[1502]: time="2025-02-13T19:30:49.982569998Z" level=info msg="TearDown network for sandbox \"919def795fcabe325c259e12ac19e82f4306388eba272371855df017ef8e2d5d\" successfully" Feb 13 19:30:49.982592 containerd[1502]: time="2025-02-13T19:30:49.982589264Z" level=info msg="StopPodSandbox for \"919def795fcabe325c259e12ac19e82f4306388eba272371855df017ef8e2d5d\" returns successfully" Feb 13 19:30:49.982699 containerd[1502]: time="2025-02-13T19:30:49.982684713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bxg8d,Uid:402f0fa8-6d39-4a67-b618-1d216e220aea,Namespace:kube-system,Attempt:3,}" Feb 13 19:30:49.982790 containerd[1502]: time="2025-02-13T19:30:49.982763693Z" level=info msg="StopPodSandbox for \"9a576abef7e3f4083c481695c9f493c39417d50066da6a1bc9040a8a2fdf6340\"" Feb 13 19:30:49.982824 kubelet[2600]: I0213 19:30:49.982771 2600 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d128712c7ee019c611fbc612ab74775c9fe61e0291a6f84232248274b76f860a" Feb 13 19:30:49.982885 containerd[1502]: time="2025-02-13T19:30:49.982865574Z" level=info msg="TearDown network for sandbox \"9a576abef7e3f4083c481695c9f493c39417d50066da6a1bc9040a8a2fdf6340\" successfully" Feb 13 19:30:49.982885 containerd[1502]: time="2025-02-13T19:30:49.982882545Z" level=info msg="StopPodSandbox for \"9a576abef7e3f4083c481695c9f493c39417d50066da6a1bc9040a8a2fdf6340\" returns successfully" Feb 13 19:30:49.983083 kubelet[2600]: E0213 19:30:49.983065 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:30:49.983340 containerd[1502]: time="2025-02-13T19:30:49.983281726Z" level=info msg="StopPodSandbox for \"d128712c7ee019c611fbc612ab74775c9fe61e0291a6f84232248274b76f860a\"" Feb 13 19:30:49.983340 containerd[1502]: time="2025-02-13T19:30:49.983305581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-gvd66,Uid:f2d2c41c-272d-4c74-897d-79c94986b647,Namespace:kube-system,Attempt:2,}" Feb 13 19:30:49.983582 containerd[1502]: time="2025-02-13T19:30:49.983545412Z" level=info msg="Ensure that sandbox d128712c7ee019c611fbc612ab74775c9fe61e0291a6f84232248274b76f860a in task-service has been cleanup successfully" Feb 13 19:30:49.983716 containerd[1502]: time="2025-02-13T19:30:49.983697478Z" level=info msg="TearDown network for sandbox \"d128712c7ee019c611fbc612ab74775c9fe61e0291a6f84232248274b76f860a\" successfully" Feb 13 19:30:49.983716 containerd[1502]: time="2025-02-13T19:30:49.983712987Z" level=info msg="StopPodSandbox for \"d128712c7ee019c611fbc612ab74775c9fe61e0291a6f84232248274b76f860a\" returns successfully" Feb 13 19:30:49.984045 containerd[1502]: time="2025-02-13T19:30:49.983910418Z" level=info msg="StopPodSandbox for \"b5ee6d231558bb9e6055e52293f90a21d1ac84e5b91aa9ef485efd32a23f0760\"" Feb 13 19:30:49.984045 containerd[1502]: time="2025-02-13T19:30:49.983998904Z" level=info msg="TearDown network for sandbox \"b5ee6d231558bb9e6055e52293f90a21d1ac84e5b91aa9ef485efd32a23f0760\" successfully" Feb 13 19:30:49.984045 containerd[1502]: 
time="2025-02-13T19:30:49.984010606Z" level=info msg="StopPodSandbox for \"b5ee6d231558bb9e6055e52293f90a21d1ac84e5b91aa9ef485efd32a23f0760\" returns successfully" Feb 13 19:30:49.984404 containerd[1502]: time="2025-02-13T19:30:49.984359611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6794f4445b-7cftj,Uid:9bf3d155-222d-46fc-9867-b55f1df961f7,Namespace:calico-apiserver,Attempt:2,}" Feb 13 19:30:49.984776 kubelet[2600]: I0213 19:30:49.984755 2600 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92aa5e65cd130c3a58ac37b4adf370bc37fd5b279afabfde37dbc8bf70e38b6a" Feb 13 19:30:49.985127 containerd[1502]: time="2025-02-13T19:30:49.985103621Z" level=info msg="StopPodSandbox for \"92aa5e65cd130c3a58ac37b4adf370bc37fd5b279afabfde37dbc8bf70e38b6a\"" Feb 13 19:30:49.985476 containerd[1502]: time="2025-02-13T19:30:49.985355384Z" level=info msg="Ensure that sandbox 92aa5e65cd130c3a58ac37b4adf370bc37fd5b279afabfde37dbc8bf70e38b6a in task-service has been cleanup successfully" Feb 13 19:30:49.985578 containerd[1502]: time="2025-02-13T19:30:49.985552254Z" level=info msg="TearDown network for sandbox \"92aa5e65cd130c3a58ac37b4adf370bc37fd5b279afabfde37dbc8bf70e38b6a\" successfully" Feb 13 19:30:49.985578 containerd[1502]: time="2025-02-13T19:30:49.985566711Z" level=info msg="StopPodSandbox for \"92aa5e65cd130c3a58ac37b4adf370bc37fd5b279afabfde37dbc8bf70e38b6a\" returns successfully" Feb 13 19:30:49.985767 containerd[1502]: time="2025-02-13T19:30:49.985744725Z" level=info msg="StopPodSandbox for \"029de7693a9f8fae3fc5f15fc92d9b4380854aa9a9856579c8e3ed5055510b90\"" Feb 13 19:30:49.985841 containerd[1502]: time="2025-02-13T19:30:49.985827652Z" level=info msg="TearDown network for sandbox \"029de7693a9f8fae3fc5f15fc92d9b4380854aa9a9856579c8e3ed5055510b90\" successfully" Feb 13 19:30:49.985864 containerd[1502]: time="2025-02-13T19:30:49.985839594Z" level=info msg="StopPodSandbox for \"029de7693a9f8fae3fc5f15fc92d9b4380854aa9a9856579c8e3ed5055510b90\" returns successfully" Feb 13 19:30:49.986038 kubelet[2600]: I0213 19:30:49.986012 2600 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a99dde00b5a0c5753dc22e79a23fa5cb756241ff3d7fe4eed74bd21fcf14d6f" Feb 13 19:30:49.986265 containerd[1502]: time="2025-02-13T19:30:49.986097058Z" level=info msg="StopPodSandbox for \"259a6c2041cb9f5c709110e833b792e9227b4e6e9df73f046bd2d34be5e0fa0a\"" Feb 13 19:30:49.986265 containerd[1502]: time="2025-02-13T19:30:49.986204159Z" level=info msg="TearDown network for sandbox \"259a6c2041cb9f5c709110e833b792e9227b4e6e9df73f046bd2d34be5e0fa0a\" successfully" Feb 13 19:30:49.986265 containerd[1502]: time="2025-02-13T19:30:49.986214919Z" level=info msg="StopPodSandbox for \"259a6c2041cb9f5c709110e833b792e9227b4e6e9df73f046bd2d34be5e0fa0a\" returns successfully" Feb 13 19:30:49.986427 containerd[1502]: time="2025-02-13T19:30:49.986408433Z" level=info msg="StopPodSandbox for \"9a99dde00b5a0c5753dc22e79a23fa5cb756241ff3d7fe4eed74bd21fcf14d6f\"" Feb 13 19:30:49.986559 containerd[1502]: time="2025-02-13T19:30:49.986538337Z" level=info msg="Ensure that sandbox 9a99dde00b5a0c5753dc22e79a23fa5cb756241ff3d7fe4eed74bd21fcf14d6f in task-service has been cleanup successfully" Feb 13 19:30:49.986788 containerd[1502]: time="2025-02-13T19:30:49.986695382Z" level=info msg="TearDown network for sandbox \"9a99dde00b5a0c5753dc22e79a23fa5cb756241ff3d7fe4eed74bd21fcf14d6f\" successfully" Feb 13 19:30:49.986788 containerd[1502]: 
time="2025-02-13T19:30:49.986717875Z" level=info msg="StopPodSandbox for \"9a99dde00b5a0c5753dc22e79a23fa5cb756241ff3d7fe4eed74bd21fcf14d6f\" returns successfully" Feb 13 19:30:49.987003 containerd[1502]: time="2025-02-13T19:30:49.986970520Z" level=info msg="StopPodSandbox for \"fd20a9c9c675e34be18747d0be53656eb71ddfce7fadca259852046e986978fd\"" Feb 13 19:30:49.987073 containerd[1502]: time="2025-02-13T19:30:49.987051983Z" level=info msg="TearDown network for sandbox \"fd20a9c9c675e34be18747d0be53656eb71ddfce7fadca259852046e986978fd\" successfully" Feb 13 19:30:49.987073 containerd[1502]: time="2025-02-13T19:30:49.987067301Z" level=info msg="StopPodSandbox for \"fd20a9c9c675e34be18747d0be53656eb71ddfce7fadca259852046e986978fd\" returns successfully" Feb 13 19:30:49.987119 containerd[1502]: time="2025-02-13T19:30:49.987068293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6794f4445b-q9q8b,Uid:9eba37d3-14ec-4521-9302-789cbdb496aa,Namespace:calico-apiserver,Attempt:3,}" Feb 13 19:30:49.987623 containerd[1502]: time="2025-02-13T19:30:49.987515374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68b8f8cf95-qdbnh,Uid:d38faaa4-494c-4c4e-87ff-0a00aa82bf88,Namespace:calico-system,Attempt:2,}" Feb 13 19:30:49.987686 kubelet[2600]: I0213 19:30:49.987665 2600 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06b76013e050990e084073338277aecc1ce3e9a19060a462af91ada291c27c21" Feb 13 19:30:49.988071 containerd[1502]: time="2025-02-13T19:30:49.988044578Z" level=info msg="StopPodSandbox for \"06b76013e050990e084073338277aecc1ce3e9a19060a462af91ada291c27c21\"" Feb 13 19:30:49.988226 containerd[1502]: time="2025-02-13T19:30:49.988202175Z" level=info msg="Ensure that sandbox 06b76013e050990e084073338277aecc1ce3e9a19060a462af91ada291c27c21 in task-service has been cleanup successfully" Feb 13 19:30:49.988373 containerd[1502]: time="2025-02-13T19:30:49.988356134Z" level=info msg="TearDown network for sandbox \"06b76013e050990e084073338277aecc1ce3e9a19060a462af91ada291c27c21\" successfully" Feb 13 19:30:49.988373 containerd[1502]: time="2025-02-13T19:30:49.988370722Z" level=info msg="StopPodSandbox for \"06b76013e050990e084073338277aecc1ce3e9a19060a462af91ada291c27c21\" returns successfully" Feb 13 19:30:49.988601 containerd[1502]: time="2025-02-13T19:30:49.988573002Z" level=info msg="StopPodSandbox for \"4d57794a877b0b6953752ccedec2bd85d68373c09318c76ac2c537449d27f46f\"" Feb 13 19:30:49.988732 containerd[1502]: time="2025-02-13T19:30:49.988644356Z" level=info msg="TearDown network for sandbox \"4d57794a877b0b6953752ccedec2bd85d68373c09318c76ac2c537449d27f46f\" successfully" Feb 13 19:30:49.988732 containerd[1502]: time="2025-02-13T19:30:49.988660947Z" level=info msg="StopPodSandbox for \"4d57794a877b0b6953752ccedec2bd85d68373c09318c76ac2c537449d27f46f\" returns successfully" Feb 13 19:30:49.989008 containerd[1502]: time="2025-02-13T19:30:49.988984284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cfdf6,Uid:61978848-76c7-4692-bab4-3c8c891d5468,Namespace:calico-system,Attempt:2,}" Feb 13 19:30:50.342265 kubelet[2600]: I0213 19:30:50.341597 2600 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 19:30:50.342265 kubelet[2600]: E0213 19:30:50.341910 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:30:50.342031 systemd[1]: 
run-netns-cni\x2d07e91f79\x2dcb17\x2d6187\x2dcdb0\x2d12dcfc9eb54a.mount: Deactivated successfully. Feb 13 19:30:50.342143 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-06b76013e050990e084073338277aecc1ce3e9a19060a462af91ada291c27c21-shm.mount: Deactivated successfully. Feb 13 19:30:50.342250 systemd[1]: run-netns-cni\x2d311176bf\x2d0bb2\x2d267c\x2d45d7\x2d697742d35065.mount: Deactivated successfully. Feb 13 19:30:50.342895 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a615d24e84e74129058bc9c7cb028777bb16a481796829c1a98ceaf03287d6c5-shm.mount: Deactivated successfully. Feb 13 19:30:50.343032 systemd[1]: run-netns-cni\x2da1ca4e2d\x2d7842\x2d6416\x2d222b\x2da31c4bafe760.mount: Deactivated successfully. Feb 13 19:30:50.343126 systemd[1]: run-netns-cni\x2de4166076\x2dbad3\x2db3c3\x2d2fb4\x2de2df3bc2922c.mount: Deactivated successfully. Feb 13 19:30:50.343237 systemd[1]: run-netns-cni\x2d654617a6\x2d77fa\x2d8d36\x2d5411\x2dc770bbb8e3af.mount: Deactivated successfully. Feb 13 19:30:50.990380 kubelet[2600]: E0213 19:30:50.990346 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:30:51.541193 systemd[1]: Started sshd@10-10.0.0.116:22-10.0.0.1:60218.service - OpenSSH per-connection server daemon (10.0.0.1:60218). Feb 13 19:30:51.597502 sshd[3984]: Accepted publickey for core from 10.0.0.1 port 60218 ssh2: RSA SHA256:ENn9hOvI2hLUXcV6iHA8gc9Z4CTEPvGDMkoVtxIuYbg Feb 13 19:30:51.599067 sshd-session[3984]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:30:51.603344 systemd-logind[1485]: New session 11 of user core. Feb 13 19:30:51.610459 systemd[1]: Started session-11.scope - Session 11 of User core. 
Feb 13 19:30:51.728562 containerd[1502]: time="2025-02-13T19:30:51.728497594Z" level=error msg="Failed to destroy network for sandbox \"6d5d01f5764dfa313010e551a484fd652da553aef24f0843408d75f6c678d0dc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:51.729075 containerd[1502]: time="2025-02-13T19:30:51.728958038Z" level=error msg="encountered an error cleaning up failed sandbox \"6d5d01f5764dfa313010e551a484fd652da553aef24f0843408d75f6c678d0dc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:51.729075 containerd[1502]: time="2025-02-13T19:30:51.729029503Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6794f4445b-q9q8b,Uid:9eba37d3-14ec-4521-9302-789cbdb496aa,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"6d5d01f5764dfa313010e551a484fd652da553aef24f0843408d75f6c678d0dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:51.729380 kubelet[2600]: E0213 19:30:51.729290 2600 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d5d01f5764dfa313010e551a484fd652da553aef24f0843408d75f6c678d0dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:51.729447 kubelet[2600]: E0213 19:30:51.729402 2600 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d5d01f5764dfa313010e551a484fd652da553aef24f0843408d75f6c678d0dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6794f4445b-q9q8b" Feb 13 19:30:51.729447 kubelet[2600]: E0213 19:30:51.729427 2600 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d5d01f5764dfa313010e551a484fd652da553aef24f0843408d75f6c678d0dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6794f4445b-q9q8b" Feb 13 19:30:51.729509 kubelet[2600]: E0213 19:30:51.729475 2600 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6794f4445b-q9q8b_calico-apiserver(9eba37d3-14ec-4521-9302-789cbdb496aa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6794f4445b-q9q8b_calico-apiserver(9eba37d3-14ec-4521-9302-789cbdb496aa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6d5d01f5764dfa313010e551a484fd652da553aef24f0843408d75f6c678d0dc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6794f4445b-q9q8b" podUID="9eba37d3-14ec-4521-9302-789cbdb496aa" Feb 13 19:30:51.739453 sshd[3988]: Connection closed by 10.0.0.1 port 60218 Feb 13 19:30:51.739805 sshd-session[3984]: pam_unix(sshd:session): session closed for user core Feb 13 19:30:51.743770 systemd-logind[1485]: Session 11 logged out. Waiting for processes to exit. Feb 13 19:30:51.744850 systemd[1]: sshd@10-10.0.0.116:22-10.0.0.1:60218.service: Deactivated successfully. Feb 13 19:30:51.747202 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 19:30:51.748234 systemd-logind[1485]: Removed session 11. Feb 13 19:30:51.781112 containerd[1502]: time="2025-02-13T19:30:51.780986637Z" level=error msg="Failed to destroy network for sandbox \"3ce1ab7c9e3b75163c498673b614ea401518c42e9012bdd7e9cd033bba480db0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:51.781536 containerd[1502]: time="2025-02-13T19:30:51.781493470Z" level=error msg="encountered an error cleaning up failed sandbox \"3ce1ab7c9e3b75163c498673b614ea401518c42e9012bdd7e9cd033bba480db0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:51.781671 containerd[1502]: time="2025-02-13T19:30:51.781550326Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bxg8d,Uid:402f0fa8-6d39-4a67-b618-1d216e220aea,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"3ce1ab7c9e3b75163c498673b614ea401518c42e9012bdd7e9cd033bba480db0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:51.781825 kubelet[2600]: E0213 19:30:51.781769 2600 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ce1ab7c9e3b75163c498673b614ea401518c42e9012bdd7e9cd033bba480db0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:51.781900 kubelet[2600]: E0213 19:30:51.781845 2600 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ce1ab7c9e3b75163c498673b614ea401518c42e9012bdd7e9cd033bba480db0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-bxg8d" Feb 13 19:30:51.781900 kubelet[2600]: E0213 19:30:51.781870 2600 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ce1ab7c9e3b75163c498673b614ea401518c42e9012bdd7e9cd033bba480db0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-bxg8d" Feb 13 19:30:51.781949 kubelet[2600]: E0213 19:30:51.781921 2600 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-bxg8d_kube-system(402f0fa8-6d39-4a67-b618-1d216e220aea)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-bxg8d_kube-system(402f0fa8-6d39-4a67-b618-1d216e220aea)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3ce1ab7c9e3b75163c498673b614ea401518c42e9012bdd7e9cd033bba480db0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-bxg8d" podUID="402f0fa8-6d39-4a67-b618-1d216e220aea" Feb 13 19:30:51.846291 containerd[1502]: time="2025-02-13T19:30:51.846173692Z" level=error msg="Failed to destroy network for sandbox \"68b36cedfb71a4b13925ca9c776e9f716e294ad3c104d860f484f0a21e16f665\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:51.846687 containerd[1502]: time="2025-02-13T19:30:51.846639628Z" level=error msg="encountered an error cleaning up failed sandbox \"68b36cedfb71a4b13925ca9c776e9f716e294ad3c104d860f484f0a21e16f665\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:51.846763 containerd[1502]: time="2025-02-13T19:30:51.846714237Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-gvd66,Uid:f2d2c41c-272d-4c74-897d-79c94986b647,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"68b36cedfb71a4b13925ca9c776e9f716e294ad3c104d860f484f0a21e16f665\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:51.847075 kubelet[2600]: E0213 19:30:51.846959 2600 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68b36cedfb71a4b13925ca9c776e9f716e294ad3c104d860f484f0a21e16f665\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:51.847075 kubelet[2600]: E0213 19:30:51.847038 2600 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68b36cedfb71a4b13925ca9c776e9f716e294ad3c104d860f484f0a21e16f665\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-gvd66" Feb 13 19:30:51.847075 kubelet[2600]: E0213 19:30:51.847065 2600 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68b36cedfb71a4b13925ca9c776e9f716e294ad3c104d860f484f0a21e16f665\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-gvd66" Feb 13 19:30:51.847231 kubelet[2600]: E0213 
19:30:51.847129 2600 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-gvd66_kube-system(f2d2c41c-272d-4c74-897d-79c94986b647)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-gvd66_kube-system(f2d2c41c-272d-4c74-897d-79c94986b647)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"68b36cedfb71a4b13925ca9c776e9f716e294ad3c104d860f484f0a21e16f665\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-gvd66" podUID="f2d2c41c-272d-4c74-897d-79c94986b647" Feb 13 19:30:51.875690 containerd[1502]: time="2025-02-13T19:30:51.874824351Z" level=error msg="Failed to destroy network for sandbox \"ddacbc97dad8b654492a3ede378ba89ed873bd2206e50dd63b006c03446e6466\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:51.876075 containerd[1502]: time="2025-02-13T19:30:51.876018364Z" level=error msg="encountered an error cleaning up failed sandbox \"ddacbc97dad8b654492a3ede378ba89ed873bd2206e50dd63b006c03446e6466\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:51.876169 containerd[1502]: time="2025-02-13T19:30:51.876129253Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6794f4445b-7cftj,Uid:9bf3d155-222d-46fc-9867-b55f1df961f7,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"ddacbc97dad8b654492a3ede378ba89ed873bd2206e50dd63b006c03446e6466\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:51.876522 kubelet[2600]: E0213 19:30:51.876471 2600 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddacbc97dad8b654492a3ede378ba89ed873bd2206e50dd63b006c03446e6466\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:51.876583 kubelet[2600]: E0213 19:30:51.876547 2600 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddacbc97dad8b654492a3ede378ba89ed873bd2206e50dd63b006c03446e6466\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6794f4445b-7cftj" Feb 13 19:30:51.876583 kubelet[2600]: E0213 19:30:51.876567 2600 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddacbc97dad8b654492a3ede378ba89ed873bd2206e50dd63b006c03446e6466\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-6794f4445b-7cftj" Feb 13 19:30:51.876635 kubelet[2600]: E0213 19:30:51.876610 2600 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6794f4445b-7cftj_calico-apiserver(9bf3d155-222d-46fc-9867-b55f1df961f7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6794f4445b-7cftj_calico-apiserver(9bf3d155-222d-46fc-9867-b55f1df961f7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ddacbc97dad8b654492a3ede378ba89ed873bd2206e50dd63b006c03446e6466\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6794f4445b-7cftj" podUID="9bf3d155-222d-46fc-9867-b55f1df961f7" Feb 13 19:30:51.911918 containerd[1502]: time="2025-02-13T19:30:51.911860186Z" level=error msg="Failed to destroy network for sandbox \"238e6f92adeb80ea69f9c84d0bfdba6cf65400a9690652509bdfbb946b589a8f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:51.912371 containerd[1502]: time="2025-02-13T19:30:51.912337974Z" level=error msg="encountered an error cleaning up failed sandbox \"238e6f92adeb80ea69f9c84d0bfdba6cf65400a9690652509bdfbb946b589a8f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:51.912443 containerd[1502]: time="2025-02-13T19:30:51.912416010Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68b8f8cf95-qdbnh,Uid:d38faaa4-494c-4c4e-87ff-0a00aa82bf88,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"238e6f92adeb80ea69f9c84d0bfdba6cf65400a9690652509bdfbb946b589a8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:51.912720 kubelet[2600]: E0213 19:30:51.912673 2600 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"238e6f92adeb80ea69f9c84d0bfdba6cf65400a9690652509bdfbb946b589a8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:51.912769 kubelet[2600]: E0213 19:30:51.912743 2600 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"238e6f92adeb80ea69f9c84d0bfdba6cf65400a9690652509bdfbb946b589a8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68b8f8cf95-qdbnh" Feb 13 19:30:51.912769 kubelet[2600]: E0213 19:30:51.912763 2600 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"238e6f92adeb80ea69f9c84d0bfdba6cf65400a9690652509bdfbb946b589a8f\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68b8f8cf95-qdbnh" Feb 13 19:30:51.912848 kubelet[2600]: E0213 19:30:51.912812 2600 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-68b8f8cf95-qdbnh_calico-system(d38faaa4-494c-4c4e-87ff-0a00aa82bf88)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-68b8f8cf95-qdbnh_calico-system(d38faaa4-494c-4c4e-87ff-0a00aa82bf88)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"238e6f92adeb80ea69f9c84d0bfdba6cf65400a9690652509bdfbb946b589a8f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-68b8f8cf95-qdbnh" podUID="d38faaa4-494c-4c4e-87ff-0a00aa82bf88" Feb 13 19:30:51.926925 containerd[1502]: time="2025-02-13T19:30:51.926882976Z" level=error msg="Failed to destroy network for sandbox \"8a68fb01407d9c834800a26cc46bd4437b62dc056b0e1350c4907a96b87c511d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:51.927268 containerd[1502]: time="2025-02-13T19:30:51.927238443Z" level=error msg="encountered an error cleaning up failed sandbox \"8a68fb01407d9c834800a26cc46bd4437b62dc056b0e1350c4907a96b87c511d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:51.927348 containerd[1502]: time="2025-02-13T19:30:51.927322722Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cfdf6,Uid:61978848-76c7-4692-bab4-3c8c891d5468,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"8a68fb01407d9c834800a26cc46bd4437b62dc056b0e1350c4907a96b87c511d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:51.927566 kubelet[2600]: E0213 19:30:51.927531 2600 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a68fb01407d9c834800a26cc46bd4437b62dc056b0e1350c4907a96b87c511d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:51.927600 kubelet[2600]: E0213 19:30:51.927590 2600 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a68fb01407d9c834800a26cc46bd4437b62dc056b0e1350c4907a96b87c511d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cfdf6" Feb 13 19:30:51.927667 kubelet[2600]: E0213 19:30:51.927609 2600 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"8a68fb01407d9c834800a26cc46bd4437b62dc056b0e1350c4907a96b87c511d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cfdf6" Feb 13 19:30:51.927698 kubelet[2600]: E0213 19:30:51.927655 2600 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-cfdf6_calico-system(61978848-76c7-4692-bab4-3c8c891d5468)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-cfdf6_calico-system(61978848-76c7-4692-bab4-3c8c891d5468)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8a68fb01407d9c834800a26cc46bd4437b62dc056b0e1350c4907a96b87c511d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cfdf6" podUID="61978848-76c7-4692-bab4-3c8c891d5468" Feb 13 19:30:51.993365 kubelet[2600]: I0213 19:30:51.993308 2600 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ddacbc97dad8b654492a3ede378ba89ed873bd2206e50dd63b006c03446e6466" Feb 13 19:30:51.994123 containerd[1502]: time="2025-02-13T19:30:51.994062524Z" level=info msg="StopPodSandbox for \"ddacbc97dad8b654492a3ede378ba89ed873bd2206e50dd63b006c03446e6466\"" Feb 13 19:30:51.994377 containerd[1502]: time="2025-02-13T19:30:51.994355825Z" level=info msg="Ensure that sandbox ddacbc97dad8b654492a3ede378ba89ed873bd2206e50dd63b006c03446e6466 in task-service has been cleanup successfully" Feb 13 19:30:51.994610 containerd[1502]: time="2025-02-13T19:30:51.994573895Z" level=info msg="TearDown network for sandbox \"ddacbc97dad8b654492a3ede378ba89ed873bd2206e50dd63b006c03446e6466\" successfully" Feb 13 19:30:51.994610 containerd[1502]: time="2025-02-13T19:30:51.994606927Z" level=info msg="StopPodSandbox for \"ddacbc97dad8b654492a3ede378ba89ed873bd2206e50dd63b006c03446e6466\" returns successfully" Feb 13 19:30:51.994998 containerd[1502]: time="2025-02-13T19:30:51.994970932Z" level=info msg="StopPodSandbox for \"d128712c7ee019c611fbc612ab74775c9fe61e0291a6f84232248274b76f860a\"" Feb 13 19:30:51.995086 containerd[1502]: time="2025-02-13T19:30:51.995065269Z" level=info msg="TearDown network for sandbox \"d128712c7ee019c611fbc612ab74775c9fe61e0291a6f84232248274b76f860a\" successfully" Feb 13 19:30:51.995134 containerd[1502]: time="2025-02-13T19:30:51.995083423Z" level=info msg="StopPodSandbox for \"d128712c7ee019c611fbc612ab74775c9fe61e0291a6f84232248274b76f860a\" returns successfully" Feb 13 19:30:51.995370 containerd[1502]: time="2025-02-13T19:30:51.995343151Z" level=info msg="StopPodSandbox for \"b5ee6d231558bb9e6055e52293f90a21d1ac84e5b91aa9ef485efd32a23f0760\"" Feb 13 19:30:51.995735 containerd[1502]: time="2025-02-13T19:30:51.995426687Z" level=info msg="TearDown network for sandbox \"b5ee6d231558bb9e6055e52293f90a21d1ac84e5b91aa9ef485efd32a23f0760\" successfully" Feb 13 19:30:51.995735 containerd[1502]: time="2025-02-13T19:30:51.995444531Z" level=info msg="StopPodSandbox for \"b5ee6d231558bb9e6055e52293f90a21d1ac84e5b91aa9ef485efd32a23f0760\" returns successfully" Feb 13 19:30:51.995937 containerd[1502]: time="2025-02-13T19:30:51.995856976Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6794f4445b-7cftj,Uid:9bf3d155-222d-46fc-9867-b55f1df961f7,Namespace:calico-apiserver,Attempt:3,}" Feb 13 19:30:51.996260 kubelet[2600]: I0213 19:30:51.996233 2600 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d5d01f5764dfa313010e551a484fd652da553aef24f0843408d75f6c678d0dc" Feb 13 19:30:51.996636 containerd[1502]: time="2025-02-13T19:30:51.996608969Z" level=info msg="StopPodSandbox for \"6d5d01f5764dfa313010e551a484fd652da553aef24f0843408d75f6c678d0dc\"" Feb 13 19:30:51.996825 containerd[1502]: time="2025-02-13T19:30:51.996803075Z" level=info msg="Ensure that sandbox 6d5d01f5764dfa313010e551a484fd652da553aef24f0843408d75f6c678d0dc in task-service has been cleanup successfully" Feb 13 19:30:51.996999 containerd[1502]: time="2025-02-13T19:30:51.996978694Z" level=info msg="TearDown network for sandbox \"6d5d01f5764dfa313010e551a484fd652da553aef24f0843408d75f6c678d0dc\" successfully" Feb 13 19:30:51.997050 containerd[1502]: time="2025-02-13T19:30:51.996997689Z" level=info msg="StopPodSandbox for \"6d5d01f5764dfa313010e551a484fd652da553aef24f0843408d75f6c678d0dc\" returns successfully" Feb 13 19:30:51.997388 containerd[1502]: time="2025-02-13T19:30:51.997355382Z" level=info msg="StopPodSandbox for \"92aa5e65cd130c3a58ac37b4adf370bc37fd5b279afabfde37dbc8bf70e38b6a\"" Feb 13 19:30:51.997482 containerd[1502]: time="2025-02-13T19:30:51.997459628Z" level=info msg="TearDown network for sandbox \"92aa5e65cd130c3a58ac37b4adf370bc37fd5b279afabfde37dbc8bf70e38b6a\" successfully" Feb 13 19:30:51.997482 containerd[1502]: time="2025-02-13T19:30:51.997474265Z" level=info msg="StopPodSandbox for \"92aa5e65cd130c3a58ac37b4adf370bc37fd5b279afabfde37dbc8bf70e38b6a\" returns successfully" Feb 13 19:30:51.997711 containerd[1502]: time="2025-02-13T19:30:51.997686815Z" level=info msg="StopPodSandbox for \"029de7693a9f8fae3fc5f15fc92d9b4380854aa9a9856579c8e3ed5055510b90\"" Feb 13 19:30:51.997781 containerd[1502]: time="2025-02-13T19:30:51.997764811Z" level=info msg="TearDown network for sandbox \"029de7693a9f8fae3fc5f15fc92d9b4380854aa9a9856579c8e3ed5055510b90\" successfully" Feb 13 19:30:51.997781 containerd[1502]: time="2025-02-13T19:30:51.997778346Z" level=info msg="StopPodSandbox for \"029de7693a9f8fae3fc5f15fc92d9b4380854aa9a9856579c8e3ed5055510b90\" returns successfully" Feb 13 19:30:51.998005 containerd[1502]: time="2025-02-13T19:30:51.997976219Z" level=info msg="StopPodSandbox for \"259a6c2041cb9f5c709110e833b792e9227b4e6e9df73f046bd2d34be5e0fa0a\"" Feb 13 19:30:51.998099 containerd[1502]: time="2025-02-13T19:30:51.998048765Z" level=info msg="TearDown network for sandbox \"259a6c2041cb9f5c709110e833b792e9227b4e6e9df73f046bd2d34be5e0fa0a\" successfully" Feb 13 19:30:51.998099 containerd[1502]: time="2025-02-13T19:30:51.998063192Z" level=info msg="StopPodSandbox for \"259a6c2041cb9f5c709110e833b792e9227b4e6e9df73f046bd2d34be5e0fa0a\" returns successfully" Feb 13 19:30:51.999555 containerd[1502]: time="2025-02-13T19:30:51.999477029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6794f4445b-q9q8b,Uid:9eba37d3-14ec-4521-9302-789cbdb496aa,Namespace:calico-apiserver,Attempt:4,}" Feb 13 19:30:52.004590 kubelet[2600]: I0213 19:30:52.004563 2600 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ce1ab7c9e3b75163c498673b614ea401518c42e9012bdd7e9cd033bba480db0" Feb 13 19:30:52.005287 containerd[1502]: time="2025-02-13T19:30:52.004949140Z" level=info msg="StopPodSandbox for 
\"3ce1ab7c9e3b75163c498673b614ea401518c42e9012bdd7e9cd033bba480db0\"" Feb 13 19:30:52.005287 containerd[1502]: time="2025-02-13T19:30:52.005151861Z" level=info msg="Ensure that sandbox 3ce1ab7c9e3b75163c498673b614ea401518c42e9012bdd7e9cd033bba480db0 in task-service has been cleanup successfully" Feb 13 19:30:52.005467 containerd[1502]: time="2025-02-13T19:30:52.005445653Z" level=info msg="TearDown network for sandbox \"3ce1ab7c9e3b75163c498673b614ea401518c42e9012bdd7e9cd033bba480db0\" successfully" Feb 13 19:30:52.005556 containerd[1502]: time="2025-02-13T19:30:52.005536665Z" level=info msg="StopPodSandbox for \"3ce1ab7c9e3b75163c498673b614ea401518c42e9012bdd7e9cd033bba480db0\" returns successfully" Feb 13 19:30:52.005892 containerd[1502]: time="2025-02-13T19:30:52.005847850Z" level=info msg="StopPodSandbox for \"a615d24e84e74129058bc9c7cb028777bb16a481796829c1a98ceaf03287d6c5\"" Feb 13 19:30:52.005981 containerd[1502]: time="2025-02-13T19:30:52.005939692Z" level=info msg="TearDown network for sandbox \"a615d24e84e74129058bc9c7cb028777bb16a481796829c1a98ceaf03287d6c5\" successfully" Feb 13 19:30:52.005981 containerd[1502]: time="2025-02-13T19:30:52.005956553Z" level=info msg="StopPodSandbox for \"a615d24e84e74129058bc9c7cb028777bb16a481796829c1a98ceaf03287d6c5\" returns successfully" Feb 13 19:30:52.006342 containerd[1502]: time="2025-02-13T19:30:52.006233444Z" level=info msg="StopPodSandbox for \"1deac3e8f1315485d945ed4d7ff726c16660ae79ed6279b69ac45afcd92e7fed\"" Feb 13 19:30:52.006437 containerd[1502]: time="2025-02-13T19:30:52.006351716Z" level=info msg="TearDown network for sandbox \"1deac3e8f1315485d945ed4d7ff726c16660ae79ed6279b69ac45afcd92e7fed\" successfully" Feb 13 19:30:52.006437 containerd[1502]: time="2025-02-13T19:30:52.006367396Z" level=info msg="StopPodSandbox for \"1deac3e8f1315485d945ed4d7ff726c16660ae79ed6279b69ac45afcd92e7fed\" returns successfully" Feb 13 19:30:52.007792 containerd[1502]: time="2025-02-13T19:30:52.007038797Z" level=info msg="StopPodSandbox for \"197b50055a8d04a83b5b696198cca280193728180dffd10631ed830366a0c893\"" Feb 13 19:30:52.007792 containerd[1502]: time="2025-02-13T19:30:52.007136180Z" level=info msg="TearDown network for sandbox \"197b50055a8d04a83b5b696198cca280193728180dffd10631ed830366a0c893\" successfully" Feb 13 19:30:52.007792 containerd[1502]: time="2025-02-13T19:30:52.007147020Z" level=info msg="StopPodSandbox for \"197b50055a8d04a83b5b696198cca280193728180dffd10631ed830366a0c893\" returns successfully" Feb 13 19:30:52.007910 kubelet[2600]: E0213 19:30:52.007386 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:30:52.007910 kubelet[2600]: I0213 19:30:52.007546 2600 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68b36cedfb71a4b13925ca9c776e9f716e294ad3c104d860f484f0a21e16f665" Feb 13 19:30:52.008071 containerd[1502]: time="2025-02-13T19:30:52.008035790Z" level=info msg="StopPodSandbox for \"68b36cedfb71a4b13925ca9c776e9f716e294ad3c104d860f484f0a21e16f665\"" Feb 13 19:30:52.008346 containerd[1502]: time="2025-02-13T19:30:52.008297291Z" level=info msg="Ensure that sandbox 68b36cedfb71a4b13925ca9c776e9f716e294ad3c104d860f484f0a21e16f665 in task-service has been cleanup successfully" Feb 13 19:30:52.008433 containerd[1502]: time="2025-02-13T19:30:52.008047793Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-6f6b679f8f-bxg8d,Uid:402f0fa8-6d39-4a67-b618-1d216e220aea,Namespace:kube-system,Attempt:4,}" Feb 13 19:30:52.008693 containerd[1502]: time="2025-02-13T19:30:52.008625298Z" level=info msg="TearDown network for sandbox \"68b36cedfb71a4b13925ca9c776e9f716e294ad3c104d860f484f0a21e16f665\" successfully" Feb 13 19:30:52.008693 containerd[1502]: time="2025-02-13T19:30:52.008684780Z" level=info msg="StopPodSandbox for \"68b36cedfb71a4b13925ca9c776e9f716e294ad3c104d860f484f0a21e16f665\" returns successfully" Feb 13 19:30:52.009012 containerd[1502]: time="2025-02-13T19:30:52.008976969Z" level=info msg="StopPodSandbox for \"919def795fcabe325c259e12ac19e82f4306388eba272371855df017ef8e2d5d\"" Feb 13 19:30:52.010380 kubelet[2600]: I0213 19:30:52.010353 2600 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="238e6f92adeb80ea69f9c84d0bfdba6cf65400a9690652509bdfbb946b589a8f" Feb 13 19:30:52.011744 kubelet[2600]: I0213 19:30:52.011444 2600 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a68fb01407d9c834800a26cc46bd4437b62dc056b0e1350c4907a96b87c511d" Feb 13 19:30:52.026515 containerd[1502]: time="2025-02-13T19:30:52.009126239Z" level=info msg="TearDown network for sandbox \"919def795fcabe325c259e12ac19e82f4306388eba272371855df017ef8e2d5d\" successfully" Feb 13 19:30:52.026578 containerd[1502]: time="2025-02-13T19:30:52.026511815Z" level=info msg="StopPodSandbox for \"919def795fcabe325c259e12ac19e82f4306388eba272371855df017ef8e2d5d\" returns successfully" Feb 13 19:30:52.026578 containerd[1502]: time="2025-02-13T19:30:52.010778864Z" level=info msg="StopPodSandbox for \"238e6f92adeb80ea69f9c84d0bfdba6cf65400a9690652509bdfbb946b589a8f\"" Feb 13 19:30:52.026683 containerd[1502]: time="2025-02-13T19:30:52.011797268Z" level=info msg="StopPodSandbox for \"8a68fb01407d9c834800a26cc46bd4437b62dc056b0e1350c4907a96b87c511d\"" Feb 13 19:30:52.026836 containerd[1502]: time="2025-02-13T19:30:52.026801740Z" level=info msg="Ensure that sandbox 238e6f92adeb80ea69f9c84d0bfdba6cf65400a9690652509bdfbb946b589a8f in task-service has been cleanup successfully" Feb 13 19:30:52.026897 containerd[1502]: time="2025-02-13T19:30:52.026842696Z" level=info msg="StopPodSandbox for \"9a576abef7e3f4083c481695c9f493c39417d50066da6a1bc9040a8a2fdf6340\"" Feb 13 19:30:52.026897 containerd[1502]: time="2025-02-13T19:30:52.026871570Z" level=info msg="Ensure that sandbox 8a68fb01407d9c834800a26cc46bd4437b62dc056b0e1350c4907a96b87c511d in task-service has been cleanup successfully" Feb 13 19:30:52.026975 containerd[1502]: time="2025-02-13T19:30:52.026927515Z" level=info msg="TearDown network for sandbox \"9a576abef7e3f4083c481695c9f493c39417d50066da6a1bc9040a8a2fdf6340\" successfully" Feb 13 19:30:52.026975 containerd[1502]: time="2025-02-13T19:30:52.026936632Z" level=info msg="StopPodSandbox for \"9a576abef7e3f4083c481695c9f493c39417d50066da6a1bc9040a8a2fdf6340\" returns successfully" Feb 13 19:30:52.027048 containerd[1502]: time="2025-02-13T19:30:52.027007306Z" level=info msg="TearDown network for sandbox \"238e6f92adeb80ea69f9c84d0bfdba6cf65400a9690652509bdfbb946b589a8f\" successfully" Feb 13 19:30:52.027048 containerd[1502]: time="2025-02-13T19:30:52.027019409Z" level=info msg="StopPodSandbox for \"238e6f92adeb80ea69f9c84d0bfdba6cf65400a9690652509bdfbb946b589a8f\" returns successfully" Feb 13 19:30:52.027130 containerd[1502]: time="2025-02-13T19:30:52.027101092Z" level=info msg="TearDown network for sandbox 
\"8a68fb01407d9c834800a26cc46bd4437b62dc056b0e1350c4907a96b87c511d\" successfully" Feb 13 19:30:52.027165 kubelet[2600]: E0213 19:30:52.027100 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:30:52.027208 containerd[1502]: time="2025-02-13T19:30:52.027138282Z" level=info msg="StopPodSandbox for \"8a68fb01407d9c834800a26cc46bd4437b62dc056b0e1350c4907a96b87c511d\" returns successfully" Feb 13 19:30:52.027393 containerd[1502]: time="2025-02-13T19:30:52.027377251Z" level=info msg="StopPodSandbox for \"9a99dde00b5a0c5753dc22e79a23fa5cb756241ff3d7fe4eed74bd21fcf14d6f\"" Feb 13 19:30:52.027448 containerd[1502]: time="2025-02-13T19:30:52.027401266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-gvd66,Uid:f2d2c41c-272d-4c74-897d-79c94986b647,Namespace:kube-system,Attempt:3,}" Feb 13 19:30:52.027485 containerd[1502]: time="2025-02-13T19:30:52.027449677Z" level=info msg="TearDown network for sandbox \"9a99dde00b5a0c5753dc22e79a23fa5cb756241ff3d7fe4eed74bd21fcf14d6f\" successfully" Feb 13 19:30:52.027485 containerd[1502]: time="2025-02-13T19:30:52.027458663Z" level=info msg="StopPodSandbox for \"9a99dde00b5a0c5753dc22e79a23fa5cb756241ff3d7fe4eed74bd21fcf14d6f\" returns successfully" Feb 13 19:30:52.027485 containerd[1502]: time="2025-02-13T19:30:52.027451440Z" level=info msg="StopPodSandbox for \"06b76013e050990e084073338277aecc1ce3e9a19060a462af91ada291c27c21\"" Feb 13 19:30:52.027585 containerd[1502]: time="2025-02-13T19:30:52.027533444Z" level=info msg="TearDown network for sandbox \"06b76013e050990e084073338277aecc1ce3e9a19060a462af91ada291c27c21\" successfully" Feb 13 19:30:52.027585 containerd[1502]: time="2025-02-13T19:30:52.027541129Z" level=info msg="StopPodSandbox for \"06b76013e050990e084073338277aecc1ce3e9a19060a462af91ada291c27c21\" returns successfully" Feb 13 19:30:52.027812 containerd[1502]: time="2025-02-13T19:30:52.027773725Z" level=info msg="StopPodSandbox for \"4d57794a877b0b6953752ccedec2bd85d68373c09318c76ac2c537449d27f46f\"" Feb 13 19:30:52.027812 containerd[1502]: time="2025-02-13T19:30:52.027806607Z" level=info msg="StopPodSandbox for \"fd20a9c9c675e34be18747d0be53656eb71ddfce7fadca259852046e986978fd\"" Feb 13 19:30:52.027890 containerd[1502]: time="2025-02-13T19:30:52.027870628Z" level=info msg="TearDown network for sandbox \"4d57794a877b0b6953752ccedec2bd85d68373c09318c76ac2c537449d27f46f\" successfully" Feb 13 19:30:52.027890 containerd[1502]: time="2025-02-13T19:30:52.027885666Z" level=info msg="StopPodSandbox for \"4d57794a877b0b6953752ccedec2bd85d68373c09318c76ac2c537449d27f46f\" returns successfully" Feb 13 19:30:52.028260 containerd[1502]: time="2025-02-13T19:30:52.028239902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cfdf6,Uid:61978848-76c7-4692-bab4-3c8c891d5468,Namespace:calico-system,Attempt:3,}" Feb 13 19:30:52.036686 containerd[1502]: time="2025-02-13T19:30:52.036647599Z" level=info msg="TearDown network for sandbox \"fd20a9c9c675e34be18747d0be53656eb71ddfce7fadca259852046e986978fd\" successfully" Feb 13 19:30:52.036686 containerd[1502]: time="2025-02-13T19:30:52.036668548Z" level=info msg="StopPodSandbox for \"fd20a9c9c675e34be18747d0be53656eb71ddfce7fadca259852046e986978fd\" returns successfully" Feb 13 19:30:52.037158 containerd[1502]: time="2025-02-13T19:30:52.037132130Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-68b8f8cf95-qdbnh,Uid:d38faaa4-494c-4c4e-87ff-0a00aa82bf88,Namespace:calico-system,Attempt:3,}" Feb 13 19:30:52.403770 systemd[1]: run-netns-cni\x2d813a816b\x2d03d8\x2dd38a\x2d2094\x2d3e25eb6a7cd7.mount: Deactivated successfully. Feb 13 19:30:52.403884 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-68b36cedfb71a4b13925ca9c776e9f716e294ad3c104d860f484f0a21e16f665-shm.mount: Deactivated successfully. Feb 13 19:30:52.403998 systemd[1]: run-netns-cni\x2d6047a7f6\x2d1582\x2df2d3\x2d5dcb\x2d7355ed7da625.mount: Deactivated successfully. Feb 13 19:30:52.404088 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3ce1ab7c9e3b75163c498673b614ea401518c42e9012bdd7e9cd033bba480db0-shm.mount: Deactivated successfully. Feb 13 19:30:52.404197 systemd[1]: run-netns-cni\x2d3cf008a8\x2d7988\x2df0cb\x2d9913\x2d6814212c2f35.mount: Deactivated successfully. Feb 13 19:30:52.404292 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6d5d01f5764dfa313010e551a484fd652da553aef24f0843408d75f6c678d0dc-shm.mount: Deactivated successfully. Feb 13 19:30:54.043602 containerd[1502]: time="2025-02-13T19:30:54.043538274Z" level=error msg="Failed to destroy network for sandbox \"da21058157a478777dd793b0c03fcbc23d8d5e245d11dd697c88feb605601cc5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:54.044806 containerd[1502]: time="2025-02-13T19:30:54.044762012Z" level=error msg="encountered an error cleaning up failed sandbox \"da21058157a478777dd793b0c03fcbc23d8d5e245d11dd697c88feb605601cc5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:54.044882 containerd[1502]: time="2025-02-13T19:30:54.044859326Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6794f4445b-q9q8b,Uid:9eba37d3-14ec-4521-9302-789cbdb496aa,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"da21058157a478777dd793b0c03fcbc23d8d5e245d11dd697c88feb605601cc5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:54.047334 kubelet[2600]: E0213 19:30:54.046891 2600 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da21058157a478777dd793b0c03fcbc23d8d5e245d11dd697c88feb605601cc5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:54.047334 kubelet[2600]: E0213 19:30:54.046962 2600 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da21058157a478777dd793b0c03fcbc23d8d5e245d11dd697c88feb605601cc5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6794f4445b-q9q8b" Feb 13 19:30:54.047334 kubelet[2600]: E0213 19:30:54.046986 2600 kuberuntime_manager.go:1168] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da21058157a478777dd793b0c03fcbc23d8d5e245d11dd697c88feb605601cc5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6794f4445b-q9q8b" Feb 13 19:30:54.047025 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-da21058157a478777dd793b0c03fcbc23d8d5e245d11dd697c88feb605601cc5-shm.mount: Deactivated successfully. Feb 13 19:30:54.047847 kubelet[2600]: E0213 19:30:54.047027 2600 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6794f4445b-q9q8b_calico-apiserver(9eba37d3-14ec-4521-9302-789cbdb496aa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6794f4445b-q9q8b_calico-apiserver(9eba37d3-14ec-4521-9302-789cbdb496aa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"da21058157a478777dd793b0c03fcbc23d8d5e245d11dd697c88feb605601cc5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6794f4445b-q9q8b" podUID="9eba37d3-14ec-4521-9302-789cbdb496aa" Feb 13 19:30:54.094274 containerd[1502]: time="2025-02-13T19:30:54.094209648Z" level=error msg="Failed to destroy network for sandbox \"9b60fb3683d6c0de790a6b7336486ff63455b4499390069c146ad9dffdf344af\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:54.095152 containerd[1502]: time="2025-02-13T19:30:54.095120389Z" level=error msg="encountered an error cleaning up failed sandbox \"9b60fb3683d6c0de790a6b7336486ff63455b4499390069c146ad9dffdf344af\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:54.095203 containerd[1502]: time="2025-02-13T19:30:54.095179440Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6794f4445b-7cftj,Uid:9bf3d155-222d-46fc-9867-b55f1df961f7,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"9b60fb3683d6c0de790a6b7336486ff63455b4499390069c146ad9dffdf344af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:54.095463 kubelet[2600]: E0213 19:30:54.095419 2600 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b60fb3683d6c0de790a6b7336486ff63455b4499390069c146ad9dffdf344af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:54.095522 kubelet[2600]: E0213 19:30:54.095480 2600 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b60fb3683d6c0de790a6b7336486ff63455b4499390069c146ad9dffdf344af\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6794f4445b-7cftj" Feb 13 19:30:54.095522 kubelet[2600]: E0213 19:30:54.095500 2600 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b60fb3683d6c0de790a6b7336486ff63455b4499390069c146ad9dffdf344af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6794f4445b-7cftj" Feb 13 19:30:54.095579 kubelet[2600]: E0213 19:30:54.095542 2600 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6794f4445b-7cftj_calico-apiserver(9bf3d155-222d-46fc-9867-b55f1df961f7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6794f4445b-7cftj_calico-apiserver(9bf3d155-222d-46fc-9867-b55f1df961f7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9b60fb3683d6c0de790a6b7336486ff63455b4499390069c146ad9dffdf344af\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6794f4445b-7cftj" podUID="9bf3d155-222d-46fc-9867-b55f1df961f7" Feb 13 19:30:54.136028 containerd[1502]: time="2025-02-13T19:30:54.135887909Z" level=error msg="Failed to destroy network for sandbox \"cdeca7bc8c5d40767aa8b82f58ac84479cad41960dfa49f467a04bf46aa03b20\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:54.136353 containerd[1502]: time="2025-02-13T19:30:54.136267973Z" level=error msg="encountered an error cleaning up failed sandbox \"cdeca7bc8c5d40767aa8b82f58ac84479cad41960dfa49f467a04bf46aa03b20\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:54.136353 containerd[1502]: time="2025-02-13T19:30:54.136339168Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-gvd66,Uid:f2d2c41c-272d-4c74-897d-79c94986b647,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"cdeca7bc8c5d40767aa8b82f58ac84479cad41960dfa49f467a04bf46aa03b20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:54.136604 kubelet[2600]: E0213 19:30:54.136556 2600 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cdeca7bc8c5d40767aa8b82f58ac84479cad41960dfa49f467a04bf46aa03b20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:54.136604 kubelet[2600]: E0213 19:30:54.136620 2600 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc 
= failed to setup network for sandbox \"cdeca7bc8c5d40767aa8b82f58ac84479cad41960dfa49f467a04bf46aa03b20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-gvd66" Feb 13 19:30:54.136782 kubelet[2600]: E0213 19:30:54.136641 2600 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cdeca7bc8c5d40767aa8b82f58ac84479cad41960dfa49f467a04bf46aa03b20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-gvd66" Feb 13 19:30:54.136782 kubelet[2600]: E0213 19:30:54.136686 2600 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-gvd66_kube-system(f2d2c41c-272d-4c74-897d-79c94986b647)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-gvd66_kube-system(f2d2c41c-272d-4c74-897d-79c94986b647)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cdeca7bc8c5d40767aa8b82f58ac84479cad41960dfa49f467a04bf46aa03b20\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-gvd66" podUID="f2d2c41c-272d-4c74-897d-79c94986b647" Feb 13 19:30:54.192068 containerd[1502]: time="2025-02-13T19:30:54.189670248Z" level=error msg="Failed to destroy network for sandbox \"29892c77e2068e69cefb658e7f5afd0cfb682cfc7981da5bee2ea2b80177b4bc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:54.192068 containerd[1502]: time="2025-02-13T19:30:54.190159186Z" level=error msg="encountered an error cleaning up failed sandbox \"29892c77e2068e69cefb658e7f5afd0cfb682cfc7981da5bee2ea2b80177b4bc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:54.192068 containerd[1502]: time="2025-02-13T19:30:54.190273652Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bxg8d,Uid:402f0fa8-6d39-4a67-b618-1d216e220aea,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"29892c77e2068e69cefb658e7f5afd0cfb682cfc7981da5bee2ea2b80177b4bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:54.192354 kubelet[2600]: E0213 19:30:54.190546 2600 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29892c77e2068e69cefb658e7f5afd0cfb682cfc7981da5bee2ea2b80177b4bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:54.192354 kubelet[2600]: E0213 19:30:54.190613 2600 kuberuntime_sandbox.go:72] "Failed to create sandbox 
for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29892c77e2068e69cefb658e7f5afd0cfb682cfc7981da5bee2ea2b80177b4bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-bxg8d" Feb 13 19:30:54.192354 kubelet[2600]: E0213 19:30:54.190634 2600 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29892c77e2068e69cefb658e7f5afd0cfb682cfc7981da5bee2ea2b80177b4bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-bxg8d" Feb 13 19:30:54.192467 kubelet[2600]: E0213 19:30:54.190673 2600 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-bxg8d_kube-system(402f0fa8-6d39-4a67-b618-1d216e220aea)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-bxg8d_kube-system(402f0fa8-6d39-4a67-b618-1d216e220aea)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"29892c77e2068e69cefb658e7f5afd0cfb682cfc7981da5bee2ea2b80177b4bc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-bxg8d" podUID="402f0fa8-6d39-4a67-b618-1d216e220aea" Feb 13 19:30:54.238065 containerd[1502]: time="2025-02-13T19:30:54.238016815Z" level=error msg="Failed to destroy network for sandbox \"737878d4cc9b2ec49100882d0cb04450e80ddc4b588a5fc065c8bb77559dc6ae\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:54.238754 containerd[1502]: time="2025-02-13T19:30:54.238549666Z" level=error msg="encountered an error cleaning up failed sandbox \"737878d4cc9b2ec49100882d0cb04450e80ddc4b588a5fc065c8bb77559dc6ae\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:54.238793 containerd[1502]: time="2025-02-13T19:30:54.238777975Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68b8f8cf95-qdbnh,Uid:d38faaa4-494c-4c4e-87ff-0a00aa82bf88,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"737878d4cc9b2ec49100882d0cb04450e80ddc4b588a5fc065c8bb77559dc6ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:54.238862 containerd[1502]: time="2025-02-13T19:30:54.238635207Z" level=error msg="Failed to destroy network for sandbox \"cf302ac4983e4b5752dc5e187cea36041dfa162432d3e8affbda52f81e45b173\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:54.239236 kubelet[2600]: E0213 19:30:54.239023 2600 log.go:32] "RunPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"737878d4cc9b2ec49100882d0cb04450e80ddc4b588a5fc065c8bb77559dc6ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:54.239236 kubelet[2600]: E0213 19:30:54.239095 2600 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"737878d4cc9b2ec49100882d0cb04450e80ddc4b588a5fc065c8bb77559dc6ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68b8f8cf95-qdbnh" Feb 13 19:30:54.239236 kubelet[2600]: E0213 19:30:54.239114 2600 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"737878d4cc9b2ec49100882d0cb04450e80ddc4b588a5fc065c8bb77559dc6ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68b8f8cf95-qdbnh" Feb 13 19:30:54.239378 containerd[1502]: time="2025-02-13T19:30:54.239217811Z" level=error msg="encountered an error cleaning up failed sandbox \"cf302ac4983e4b5752dc5e187cea36041dfa162432d3e8affbda52f81e45b173\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:54.239378 containerd[1502]: time="2025-02-13T19:30:54.239276121Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cfdf6,Uid:61978848-76c7-4692-bab4-3c8c891d5468,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"cf302ac4983e4b5752dc5e187cea36041dfa162432d3e8affbda52f81e45b173\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:54.239433 kubelet[2600]: E0213 19:30:54.239193 2600 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-68b8f8cf95-qdbnh_calico-system(d38faaa4-494c-4c4e-87ff-0a00aa82bf88)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-68b8f8cf95-qdbnh_calico-system(d38faaa4-494c-4c4e-87ff-0a00aa82bf88)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"737878d4cc9b2ec49100882d0cb04450e80ddc4b588a5fc065c8bb77559dc6ae\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-68b8f8cf95-qdbnh" podUID="d38faaa4-494c-4c4e-87ff-0a00aa82bf88" Feb 13 19:30:54.239433 kubelet[2600]: E0213 19:30:54.239422 2600 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf302ac4983e4b5752dc5e187cea36041dfa162432d3e8affbda52f81e45b173\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:54.239501 kubelet[2600]: E0213 19:30:54.239447 2600 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf302ac4983e4b5752dc5e187cea36041dfa162432d3e8affbda52f81e45b173\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cfdf6" Feb 13 19:30:54.239501 kubelet[2600]: E0213 19:30:54.239461 2600 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf302ac4983e4b5752dc5e187cea36041dfa162432d3e8affbda52f81e45b173\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cfdf6" Feb 13 19:30:54.239501 kubelet[2600]: E0213 19:30:54.239483 2600 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-cfdf6_calico-system(61978848-76c7-4692-bab4-3c8c891d5468)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-cfdf6_calico-system(61978848-76c7-4692-bab4-3c8c891d5468)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cf302ac4983e4b5752dc5e187cea36041dfa162432d3e8affbda52f81e45b173\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cfdf6" podUID="61978848-76c7-4692-bab4-3c8c891d5468" Feb 13 19:30:54.942524 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-737878d4cc9b2ec49100882d0cb04450e80ddc4b588a5fc065c8bb77559dc6ae-shm.mount: Deactivated successfully. Feb 13 19:30:54.942889 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-29892c77e2068e69cefb658e7f5afd0cfb682cfc7981da5bee2ea2b80177b4bc-shm.mount: Deactivated successfully. Feb 13 19:30:54.943035 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9b60fb3683d6c0de790a6b7336486ff63455b4499390069c146ad9dffdf344af-shm.mount: Deactivated successfully. 
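
The failures above all repeat the same CNI message: the Calico plugin cannot stat /var/lib/calico/nodename, and the error text itself suggests checking that the calico/node container is running and has mounted /var/lib/calico/. As a hedged illustration only (the file path is quoted from the log; the script itself is a hypothetical host-side check, not part of the log), a minimal sketch of that check could look like:

#!/usr/bin/env python3
"""Host-side check for the Calico readiness file named in the CNI errors above.

The path comes straight from the log text; the rest is an assumed diagnostic
sketch, not something emitted by kubelet or containerd.
"""
import os
import sys

NODENAME_FILE = "/var/lib/calico/nodename"  # path quoted in the CNI error


def main() -> int:
    if os.path.isfile(NODENAME_FILE):
        with open(NODENAME_FILE) as fh:
            print(f"{NODENAME_FILE} present, node name: {fh.read().strip()!r}")
        return 0
    # Matches the failure mode in the log: calico/node has not started yet
    # (or has not mounted /var/lib/calico/), so every CNI add/delete fails.
    print(f"{NODENAME_FILE} missing: calico/node likely not running yet",
          file=sys.stderr)
    return 1


if __name__ == "__main__":
    sys.exit(main())

Until that file appears, every RunPodSandbox attempt in the log keeps failing and the attempt counter on each pod keeps incrementing, which is exactly the pattern visible in the entries that follow.
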
Feb 13 19:30:55.028534 kubelet[2600]: I0213 19:30:55.028495 2600 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da21058157a478777dd793b0c03fcbc23d8d5e245d11dd697c88feb605601cc5" Feb 13 19:30:55.029218 containerd[1502]: time="2025-02-13T19:30:55.029187928Z" level=info msg="StopPodSandbox for \"da21058157a478777dd793b0c03fcbc23d8d5e245d11dd697c88feb605601cc5\"" Feb 13 19:30:55.029428 containerd[1502]: time="2025-02-13T19:30:55.029407791Z" level=info msg="Ensure that sandbox da21058157a478777dd793b0c03fcbc23d8d5e245d11dd697c88feb605601cc5 in task-service has been cleanup successfully" Feb 13 19:30:55.030645 containerd[1502]: time="2025-02-13T19:30:55.029583821Z" level=info msg="TearDown network for sandbox \"da21058157a478777dd793b0c03fcbc23d8d5e245d11dd697c88feb605601cc5\" successfully" Feb 13 19:30:55.030645 containerd[1502]: time="2025-02-13T19:30:55.029598048Z" level=info msg="StopPodSandbox for \"da21058157a478777dd793b0c03fcbc23d8d5e245d11dd697c88feb605601cc5\" returns successfully" Feb 13 19:30:55.030645 containerd[1502]: time="2025-02-13T19:30:55.029887742Z" level=info msg="StopPodSandbox for \"6d5d01f5764dfa313010e551a484fd652da553aef24f0843408d75f6c678d0dc\"" Feb 13 19:30:55.030645 containerd[1502]: time="2025-02-13T19:30:55.029952363Z" level=info msg="TearDown network for sandbox \"6d5d01f5764dfa313010e551a484fd652da553aef24f0843408d75f6c678d0dc\" successfully" Feb 13 19:30:55.030645 containerd[1502]: time="2025-02-13T19:30:55.029960909Z" level=info msg="StopPodSandbox for \"6d5d01f5764dfa313010e551a484fd652da553aef24f0843408d75f6c678d0dc\" returns successfully" Feb 13 19:30:55.030645 containerd[1502]: time="2025-02-13T19:30:55.030253008Z" level=info msg="StopPodSandbox for \"92aa5e65cd130c3a58ac37b4adf370bc37fd5b279afabfde37dbc8bf70e38b6a\"" Feb 13 19:30:55.030645 containerd[1502]: time="2025-02-13T19:30:55.030331505Z" level=info msg="TearDown network for sandbox \"92aa5e65cd130c3a58ac37b4adf370bc37fd5b279afabfde37dbc8bf70e38b6a\" successfully" Feb 13 19:30:55.030645 containerd[1502]: time="2025-02-13T19:30:55.030342316Z" level=info msg="StopPodSandbox for \"92aa5e65cd130c3a58ac37b4adf370bc37fd5b279afabfde37dbc8bf70e38b6a\" returns successfully" Feb 13 19:30:55.030645 containerd[1502]: time="2025-02-13T19:30:55.030554765Z" level=info msg="StopPodSandbox for \"029de7693a9f8fae3fc5f15fc92d9b4380854aa9a9856579c8e3ed5055510b90\"" Feb 13 19:30:55.030645 containerd[1502]: time="2025-02-13T19:30:55.030614056Z" level=info msg="TearDown network for sandbox \"029de7693a9f8fae3fc5f15fc92d9b4380854aa9a9856579c8e3ed5055510b90\" successfully" Feb 13 19:30:55.030645 containerd[1502]: time="2025-02-13T19:30:55.030621741Z" level=info msg="StopPodSandbox for \"029de7693a9f8fae3fc5f15fc92d9b4380854aa9a9856579c8e3ed5055510b90\" returns successfully" Feb 13 19:30:55.030946 kubelet[2600]: I0213 19:30:55.030302 2600 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="737878d4cc9b2ec49100882d0cb04450e80ddc4b588a5fc065c8bb77559dc6ae" Feb 13 19:30:55.031236 containerd[1502]: time="2025-02-13T19:30:55.031216979Z" level=info msg="StopPodSandbox for \"737878d4cc9b2ec49100882d0cb04450e80ddc4b588a5fc065c8bb77559dc6ae\"" Feb 13 19:30:55.031460 containerd[1502]: time="2025-02-13T19:30:55.031443565Z" level=info msg="Ensure that sandbox 737878d4cc9b2ec49100882d0cb04450e80ddc4b588a5fc065c8bb77559dc6ae in task-service has been cleanup successfully" Feb 13 19:30:55.031679 containerd[1502]: time="2025-02-13T19:30:55.031606570Z" level=info 
msg="StopPodSandbox for \"259a6c2041cb9f5c709110e833b792e9227b4e6e9df73f046bd2d34be5e0fa0a\"" Feb 13 19:30:55.031721 containerd[1502]: time="2025-02-13T19:30:55.031684627Z" level=info msg="TearDown network for sandbox \"259a6c2041cb9f5c709110e833b792e9227b4e6e9df73f046bd2d34be5e0fa0a\" successfully" Feb 13 19:30:55.031721 containerd[1502]: time="2025-02-13T19:30:55.031694215Z" level=info msg="StopPodSandbox for \"259a6c2041cb9f5c709110e833b792e9227b4e6e9df73f046bd2d34be5e0fa0a\" returns successfully" Feb 13 19:30:55.032065 containerd[1502]: time="2025-02-13T19:30:55.031872530Z" level=info msg="TearDown network for sandbox \"737878d4cc9b2ec49100882d0cb04450e80ddc4b588a5fc065c8bb77559dc6ae\" successfully" Feb 13 19:30:55.032065 containerd[1502]: time="2025-02-13T19:30:55.031886136Z" level=info msg="StopPodSandbox for \"737878d4cc9b2ec49100882d0cb04450e80ddc4b588a5fc065c8bb77559dc6ae\" returns successfully" Feb 13 19:30:55.032364 containerd[1502]: time="2025-02-13T19:30:55.032220293Z" level=info msg="StopPodSandbox for \"238e6f92adeb80ea69f9c84d0bfdba6cf65400a9690652509bdfbb946b589a8f\"" Feb 13 19:30:55.032364 containerd[1502]: time="2025-02-13T19:30:55.032295745Z" level=info msg="TearDown network for sandbox \"238e6f92adeb80ea69f9c84d0bfdba6cf65400a9690652509bdfbb946b589a8f\" successfully" Feb 13 19:30:55.032364 containerd[1502]: time="2025-02-13T19:30:55.032302357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6794f4445b-q9q8b,Uid:9eba37d3-14ec-4521-9302-789cbdb496aa,Namespace:calico-apiserver,Attempt:5,}" Feb 13 19:30:55.032364 containerd[1502]: time="2025-02-13T19:30:55.032305072Z" level=info msg="StopPodSandbox for \"238e6f92adeb80ea69f9c84d0bfdba6cf65400a9690652509bdfbb946b589a8f\" returns successfully" Feb 13 19:30:55.032832 containerd[1502]: time="2025-02-13T19:30:55.032782139Z" level=info msg="StopPodSandbox for \"9a99dde00b5a0c5753dc22e79a23fa5cb756241ff3d7fe4eed74bd21fcf14d6f\"" Feb 13 19:30:55.032875 containerd[1502]: time="2025-02-13T19:30:55.032853904Z" level=info msg="TearDown network for sandbox \"9a99dde00b5a0c5753dc22e79a23fa5cb756241ff3d7fe4eed74bd21fcf14d6f\" successfully" Feb 13 19:30:55.032875 containerd[1502]: time="2025-02-13T19:30:55.032863121Z" level=info msg="StopPodSandbox for \"9a99dde00b5a0c5753dc22e79a23fa5cb756241ff3d7fe4eed74bd21fcf14d6f\" returns successfully" Feb 13 19:30:55.033118 kubelet[2600]: I0213 19:30:55.033039 2600 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf302ac4983e4b5752dc5e187cea36041dfa162432d3e8affbda52f81e45b173" Feb 13 19:30:55.033687 containerd[1502]: time="2025-02-13T19:30:55.033624891Z" level=info msg="StopPodSandbox for \"cf302ac4983e4b5752dc5e187cea36041dfa162432d3e8affbda52f81e45b173\"" Feb 13 19:30:55.033987 containerd[1502]: time="2025-02-13T19:30:55.033773089Z" level=info msg="StopPodSandbox for \"fd20a9c9c675e34be18747d0be53656eb71ddfce7fadca259852046e986978fd\"" Feb 13 19:30:55.034030 containerd[1502]: time="2025-02-13T19:30:55.034018601Z" level=info msg="TearDown network for sandbox \"fd20a9c9c675e34be18747d0be53656eb71ddfce7fadca259852046e986978fd\" successfully" Feb 13 19:30:55.034030 containerd[1502]: time="2025-02-13T19:30:55.034026856Z" level=info msg="StopPodSandbox for \"fd20a9c9c675e34be18747d0be53656eb71ddfce7fadca259852046e986978fd\" returns successfully" Feb 13 19:30:55.034960 containerd[1502]: time="2025-02-13T19:30:55.034772226Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-68b8f8cf95-qdbnh,Uid:d38faaa4-494c-4c4e-87ff-0a00aa82bf88,Namespace:calico-system,Attempt:4,}" Feb 13 19:30:55.035059 containerd[1502]: time="2025-02-13T19:30:55.035014952Z" level=info msg="Ensure that sandbox cf302ac4983e4b5752dc5e187cea36041dfa162432d3e8affbda52f81e45b173 in task-service has been cleanup successfully" Feb 13 19:30:55.035250 containerd[1502]: time="2025-02-13T19:30:55.035235808Z" level=info msg="TearDown network for sandbox \"cf302ac4983e4b5752dc5e187cea36041dfa162432d3e8affbda52f81e45b173\" successfully" Feb 13 19:30:55.035363 containerd[1502]: time="2025-02-13T19:30:55.035305138Z" level=info msg="StopPodSandbox for \"cf302ac4983e4b5752dc5e187cea36041dfa162432d3e8affbda52f81e45b173\" returns successfully" Feb 13 19:30:55.035849 containerd[1502]: time="2025-02-13T19:30:55.035829522Z" level=info msg="StopPodSandbox for \"8a68fb01407d9c834800a26cc46bd4437b62dc056b0e1350c4907a96b87c511d\"" Feb 13 19:30:55.035914 containerd[1502]: time="2025-02-13T19:30:55.035898853Z" level=info msg="TearDown network for sandbox \"8a68fb01407d9c834800a26cc46bd4437b62dc056b0e1350c4907a96b87c511d\" successfully" Feb 13 19:30:55.035954 containerd[1502]: time="2025-02-13T19:30:55.035911677Z" level=info msg="StopPodSandbox for \"8a68fb01407d9c834800a26cc46bd4437b62dc056b0e1350c4907a96b87c511d\" returns successfully" Feb 13 19:30:55.036212 containerd[1502]: time="2025-02-13T19:30:55.036091114Z" level=info msg="StopPodSandbox for \"06b76013e050990e084073338277aecc1ce3e9a19060a462af91ada291c27c21\"" Feb 13 19:30:55.036212 containerd[1502]: time="2025-02-13T19:30:55.036156997Z" level=info msg="TearDown network for sandbox \"06b76013e050990e084073338277aecc1ce3e9a19060a462af91ada291c27c21\" successfully" Feb 13 19:30:55.036212 containerd[1502]: time="2025-02-13T19:30:55.036165172Z" level=info msg="StopPodSandbox for \"06b76013e050990e084073338277aecc1ce3e9a19060a462af91ada291c27c21\" returns successfully" Feb 13 19:30:55.036294 kubelet[2600]: I0213 19:30:55.036168 2600 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b60fb3683d6c0de790a6b7336486ff63455b4499390069c146ad9dffdf344af" Feb 13 19:30:55.037344 containerd[1502]: time="2025-02-13T19:30:55.036587156Z" level=info msg="StopPodSandbox for \"9b60fb3683d6c0de790a6b7336486ff63455b4499390069c146ad9dffdf344af\"" Feb 13 19:30:55.037344 containerd[1502]: time="2025-02-13T19:30:55.036731296Z" level=info msg="Ensure that sandbox 9b60fb3683d6c0de790a6b7336486ff63455b4499390069c146ad9dffdf344af in task-service has been cleanup successfully" Feb 13 19:30:55.037344 containerd[1502]: time="2025-02-13T19:30:55.036909862Z" level=info msg="StopPodSandbox for \"4d57794a877b0b6953752ccedec2bd85d68373c09318c76ac2c537449d27f46f\"" Feb 13 19:30:55.037344 containerd[1502]: time="2025-02-13T19:30:55.036980124Z" level=info msg="TearDown network for sandbox \"4d57794a877b0b6953752ccedec2bd85d68373c09318c76ac2c537449d27f46f\" successfully" Feb 13 19:30:55.037344 containerd[1502]: time="2025-02-13T19:30:55.036988990Z" level=info msg="StopPodSandbox for \"4d57794a877b0b6953752ccedec2bd85d68373c09318c76ac2c537449d27f46f\" returns successfully" Feb 13 19:30:55.037344 containerd[1502]: time="2025-02-13T19:30:55.037027783Z" level=info msg="TearDown network for sandbox \"9b60fb3683d6c0de790a6b7336486ff63455b4499390069c146ad9dffdf344af\" successfully" Feb 13 19:30:55.037344 containerd[1502]: time="2025-02-13T19:30:55.037039134Z" level=info msg="StopPodSandbox for 
\"9b60fb3683d6c0de790a6b7336486ff63455b4499390069c146ad9dffdf344af\" returns successfully" Feb 13 19:30:55.036776 systemd[1]: run-netns-cni\x2de7077c05\x2de03e\x2d3008\x2d281c\x2d87aa0538b71b.mount: Deactivated successfully. Feb 13 19:30:55.037925 containerd[1502]: time="2025-02-13T19:30:55.037745120Z" level=info msg="StopPodSandbox for \"ddacbc97dad8b654492a3ede378ba89ed873bd2206e50dd63b006c03446e6466\"" Feb 13 19:30:55.037925 containerd[1502]: time="2025-02-13T19:30:55.037781649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cfdf6,Uid:61978848-76c7-4692-bab4-3c8c891d5468,Namespace:calico-system,Attempt:4,}" Feb 13 19:30:55.037925 containerd[1502]: time="2025-02-13T19:30:55.037819189Z" level=info msg="TearDown network for sandbox \"ddacbc97dad8b654492a3ede378ba89ed873bd2206e50dd63b006c03446e6466\" successfully" Feb 13 19:30:55.037925 containerd[1502]: time="2025-02-13T19:30:55.037829338Z" level=info msg="StopPodSandbox for \"ddacbc97dad8b654492a3ede378ba89ed873bd2206e50dd63b006c03446e6466\" returns successfully" Feb 13 19:30:55.038222 containerd[1502]: time="2025-02-13T19:30:55.038206346Z" level=info msg="StopPodSandbox for \"d128712c7ee019c611fbc612ab74775c9fe61e0291a6f84232248274b76f860a\"" Feb 13 19:30:55.038282 containerd[1502]: time="2025-02-13T19:30:55.038271249Z" level=info msg="TearDown network for sandbox \"d128712c7ee019c611fbc612ab74775c9fe61e0291a6f84232248274b76f860a\" successfully" Feb 13 19:30:55.038320 containerd[1502]: time="2025-02-13T19:30:55.038282029Z" level=info msg="StopPodSandbox for \"d128712c7ee019c611fbc612ab74775c9fe61e0291a6f84232248274b76f860a\" returns successfully" Feb 13 19:30:55.038654 containerd[1502]: time="2025-02-13T19:30:55.038575771Z" level=info msg="StopPodSandbox for \"b5ee6d231558bb9e6055e52293f90a21d1ac84e5b91aa9ef485efd32a23f0760\"" Feb 13 19:30:55.038654 containerd[1502]: time="2025-02-13T19:30:55.038638078Z" level=info msg="TearDown network for sandbox \"b5ee6d231558bb9e6055e52293f90a21d1ac84e5b91aa9ef485efd32a23f0760\" successfully" Feb 13 19:30:55.038654 containerd[1502]: time="2025-02-13T19:30:55.038645592Z" level=info msg="StopPodSandbox for \"b5ee6d231558bb9e6055e52293f90a21d1ac84e5b91aa9ef485efd32a23f0760\" returns successfully" Feb 13 19:30:55.038981 containerd[1502]: time="2025-02-13T19:30:55.038963309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6794f4445b-7cftj,Uid:9bf3d155-222d-46fc-9867-b55f1df961f7,Namespace:calico-apiserver,Attempt:4,}" Feb 13 19:30:55.039457 kubelet[2600]: I0213 19:30:55.039440 2600 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29892c77e2068e69cefb658e7f5afd0cfb682cfc7981da5bee2ea2b80177b4bc" Feb 13 19:30:55.040166 containerd[1502]: time="2025-02-13T19:30:55.040062854Z" level=info msg="StopPodSandbox for \"29892c77e2068e69cefb658e7f5afd0cfb682cfc7981da5bee2ea2b80177b4bc\"" Feb 13 19:30:55.040210 containerd[1502]: time="2025-02-13T19:30:55.040188209Z" level=info msg="Ensure that sandbox 29892c77e2068e69cefb658e7f5afd0cfb682cfc7981da5bee2ea2b80177b4bc in task-service has been cleanup successfully" Feb 13 19:30:55.040467 containerd[1502]: time="2025-02-13T19:30:55.040446134Z" level=info msg="TearDown network for sandbox \"29892c77e2068e69cefb658e7f5afd0cfb682cfc7981da5bee2ea2b80177b4bc\" successfully" Feb 13 19:30:55.040467 containerd[1502]: time="2025-02-13T19:30:55.040464308Z" level=info msg="StopPodSandbox for \"29892c77e2068e69cefb658e7f5afd0cfb682cfc7981da5bee2ea2b80177b4bc\" returns successfully" Feb 13 
19:30:55.040943 containerd[1502]: time="2025-02-13T19:30:55.040676647Z" level=info msg="StopPodSandbox for \"3ce1ab7c9e3b75163c498673b614ea401518c42e9012bdd7e9cd033bba480db0\"" Feb 13 19:30:55.040943 containerd[1502]: time="2025-02-13T19:30:55.040745857Z" level=info msg="TearDown network for sandbox \"3ce1ab7c9e3b75163c498673b614ea401518c42e9012bdd7e9cd033bba480db0\" successfully" Feb 13 19:30:55.040943 containerd[1502]: time="2025-02-13T19:30:55.040753822Z" level=info msg="StopPodSandbox for \"3ce1ab7c9e3b75163c498673b614ea401518c42e9012bdd7e9cd033bba480db0\" returns successfully" Feb 13 19:30:55.041035 containerd[1502]: time="2025-02-13T19:30:55.040970338Z" level=info msg="StopPodSandbox for \"a615d24e84e74129058bc9c7cb028777bb16a481796829c1a98ceaf03287d6c5\"" Feb 13 19:30:55.041192 containerd[1502]: time="2025-02-13T19:30:55.041146750Z" level=info msg="TearDown network for sandbox \"a615d24e84e74129058bc9c7cb028777bb16a481796829c1a98ceaf03287d6c5\" successfully" Feb 13 19:30:55.041280 containerd[1502]: time="2025-02-13T19:30:55.041159814Z" level=info msg="StopPodSandbox for \"a615d24e84e74129058bc9c7cb028777bb16a481796829c1a98ceaf03287d6c5\" returns successfully" Feb 13 19:30:55.041465 systemd[1]: run-netns-cni\x2de7a54878\x2d744a\x2da3c9\x2d42db\x2d387e1fd5e7cd.mount: Deactivated successfully. Feb 13 19:30:55.041651 containerd[1502]: time="2025-02-13T19:30:55.041540519Z" level=info msg="StopPodSandbox for \"1deac3e8f1315485d945ed4d7ff726c16660ae79ed6279b69ac45afcd92e7fed\"" Feb 13 19:30:55.041651 containerd[1502]: time="2025-02-13T19:30:55.041603397Z" level=info msg="TearDown network for sandbox \"1deac3e8f1315485d945ed4d7ff726c16660ae79ed6279b69ac45afcd92e7fed\" successfully" Feb 13 19:30:55.041651 containerd[1502]: time="2025-02-13T19:30:55.041611482Z" level=info msg="StopPodSandbox for \"1deac3e8f1315485d945ed4d7ff726c16660ae79ed6279b69ac45afcd92e7fed\" returns successfully" Feb 13 19:30:55.041777 kubelet[2600]: I0213 19:30:55.041717 2600 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cdeca7bc8c5d40767aa8b82f58ac84479cad41960dfa49f467a04bf46aa03b20" Feb 13 19:30:55.042025 containerd[1502]: time="2025-02-13T19:30:55.041977139Z" level=info msg="StopPodSandbox for \"197b50055a8d04a83b5b696198cca280193728180dffd10631ed830366a0c893\"" Feb 13 19:30:55.042025 containerd[1502]: time="2025-02-13T19:30:55.042003028Z" level=info msg="StopPodSandbox for \"cdeca7bc8c5d40767aa8b82f58ac84479cad41960dfa49f467a04bf46aa03b20\"" Feb 13 19:30:55.042121 containerd[1502]: time="2025-02-13T19:30:55.042059534Z" level=info msg="TearDown network for sandbox \"197b50055a8d04a83b5b696198cca280193728180dffd10631ed830366a0c893\" successfully" Feb 13 19:30:55.042121 containerd[1502]: time="2025-02-13T19:30:55.042068852Z" level=info msg="StopPodSandbox for \"197b50055a8d04a83b5b696198cca280193728180dffd10631ed830366a0c893\" returns successfully" Feb 13 19:30:55.042036 systemd[1]: run-netns-cni\x2d1beff628\x2d1be3\x2db633\x2dbc6c\x2d355c5dd46355.mount: Deactivated successfully. Feb 13 19:30:55.042144 systemd[1]: run-netns-cni\x2d766c638a\x2dc455\x2dc556\x2d6e80\x2d10405a3088a3.mount: Deactivated successfully. 
Feb 13 19:30:55.042268 kubelet[2600]: E0213 19:30:55.042215 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:30:55.042462 containerd[1502]: time="2025-02-13T19:30:55.042414301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bxg8d,Uid:402f0fa8-6d39-4a67-b618-1d216e220aea,Namespace:kube-system,Attempt:5,}" Feb 13 19:30:55.042502 containerd[1502]: time="2025-02-13T19:30:55.042418509Z" level=info msg="Ensure that sandbox cdeca7bc8c5d40767aa8b82f58ac84479cad41960dfa49f467a04bf46aa03b20 in task-service has been cleanup successfully" Feb 13 19:30:55.042707 containerd[1502]: time="2025-02-13T19:30:55.042690389Z" level=info msg="TearDown network for sandbox \"cdeca7bc8c5d40767aa8b82f58ac84479cad41960dfa49f467a04bf46aa03b20\" successfully" Feb 13 19:30:55.042707 containerd[1502]: time="2025-02-13T19:30:55.042705087Z" level=info msg="StopPodSandbox for \"cdeca7bc8c5d40767aa8b82f58ac84479cad41960dfa49f467a04bf46aa03b20\" returns successfully" Feb 13 19:30:55.043011 containerd[1502]: time="2025-02-13T19:30:55.042890996Z" level=info msg="StopPodSandbox for \"68b36cedfb71a4b13925ca9c776e9f716e294ad3c104d860f484f0a21e16f665\"" Feb 13 19:30:55.043011 containerd[1502]: time="2025-02-13T19:30:55.042958612Z" level=info msg="TearDown network for sandbox \"68b36cedfb71a4b13925ca9c776e9f716e294ad3c104d860f484f0a21e16f665\" successfully" Feb 13 19:30:55.043011 containerd[1502]: time="2025-02-13T19:30:55.042967489Z" level=info msg="StopPodSandbox for \"68b36cedfb71a4b13925ca9c776e9f716e294ad3c104d860f484f0a21e16f665\" returns successfully" Feb 13 19:30:55.043248 containerd[1502]: time="2025-02-13T19:30:55.043230744Z" level=info msg="StopPodSandbox for \"919def795fcabe325c259e12ac19e82f4306388eba272371855df017ef8e2d5d\"" Feb 13 19:30:55.043349 containerd[1502]: time="2025-02-13T19:30:55.043306967Z" level=info msg="TearDown network for sandbox \"919def795fcabe325c259e12ac19e82f4306388eba272371855df017ef8e2d5d\" successfully" Feb 13 19:30:55.043349 containerd[1502]: time="2025-02-13T19:30:55.043331443Z" level=info msg="StopPodSandbox for \"919def795fcabe325c259e12ac19e82f4306388eba272371855df017ef8e2d5d\" returns successfully" Feb 13 19:30:55.044105 containerd[1502]: time="2025-02-13T19:30:55.044084338Z" level=info msg="StopPodSandbox for \"9a576abef7e3f4083c481695c9f493c39417d50066da6a1bc9040a8a2fdf6340\"" Feb 13 19:30:55.044459 containerd[1502]: time="2025-02-13T19:30:55.044155561Z" level=info msg="TearDown network for sandbox \"9a576abef7e3f4083c481695c9f493c39417d50066da6a1bc9040a8a2fdf6340\" successfully" Feb 13 19:30:55.044459 containerd[1502]: time="2025-02-13T19:30:55.044164658Z" level=info msg="StopPodSandbox for \"9a576abef7e3f4083c481695c9f493c39417d50066da6a1bc9040a8a2fdf6340\" returns successfully" Feb 13 19:30:55.044505 kubelet[2600]: E0213 19:30:55.044362 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:30:55.044702 containerd[1502]: time="2025-02-13T19:30:55.044627397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-gvd66,Uid:f2d2c41c-272d-4c74-897d-79c94986b647,Namespace:kube-system,Attempt:4,}" Feb 13 19:30:55.045713 systemd[1]: run-netns-cni\x2de383f0e1\x2d7892\x2d9520\x2d75ee\x2d59833b99fc7f.mount: Deactivated successfully. 
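
The kubelet dns.go warnings above ("Nameserver limits exceeded ... the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8") indicate the host resolver config lists more nameservers than can be applied to pods. As a rough sketch only, assuming the conventional /etc/resolv.conf path and the usual three-entry cap (glibc MAXNS / kubelet limit) rather than anything stated in this log, the underlying check amounts to:

#!/usr/bin/env python3
"""Sketch of the condition behind kubelet's "Nameserver limits exceeded" warning.

The resolv.conf path and the three-entry cap are assumptions based on common
kubelet/glibc behaviour; they are not taken from this log.
"""
from pathlib import Path

RESOLV_CONF = Path("/etc/resolv.conf")  # assumed host resolver config
MAX_NAMESERVERS = 3                      # assumed cap on applied nameservers


def nameservers(text: str) -> list[str]:
    """Return the addresses from every well-formed 'nameserver' line."""
    return [
        parts[1]
        for line in text.splitlines()
        if line.strip().startswith("nameserver") and len(parts := line.split()) > 1
    ]


if __name__ == "__main__":
    servers = nameservers(RESOLV_CONF.read_text())
    print(f"found {len(servers)} nameservers: {servers}")
    if len(servers) > MAX_NAMESERVERS:
        print(f"only the first {MAX_NAMESERVERS} would be applied: "
              f"{servers[:MAX_NAMESERVERS]}")

The warning is informational here; the applied line in the log already shows the truncated set of three servers being used for the coredns pods.
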
Feb 13 19:30:55.045900 systemd[1]: run-netns-cni\x2d32036fda\x2dbf85\x2d118f\x2d6b30\x2d9cf3752ee033.mount: Deactivated successfully. Feb 13 19:30:56.753785 systemd[1]: Started sshd@11-10.0.0.116:22-10.0.0.1:53212.service - OpenSSH per-connection server daemon (10.0.0.1:53212). Feb 13 19:30:56.809251 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1789771064.mount: Deactivated successfully. Feb 13 19:30:56.823333 sshd[4458]: Accepted publickey for core from 10.0.0.1 port 53212 ssh2: RSA SHA256:ENn9hOvI2hLUXcV6iHA8gc9Z4CTEPvGDMkoVtxIuYbg Feb 13 19:30:56.825436 sshd-session[4458]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:30:56.829571 systemd-logind[1485]: New session 12 of user core. Feb 13 19:30:56.839447 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 19:30:56.964662 sshd[4460]: Connection closed by 10.0.0.1 port 53212 Feb 13 19:30:56.965055 sshd-session[4458]: pam_unix(sshd:session): session closed for user core Feb 13 19:30:56.968976 systemd[1]: sshd@11-10.0.0.116:22-10.0.0.1:53212.service: Deactivated successfully. Feb 13 19:30:56.970937 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 19:30:56.971525 systemd-logind[1485]: Session 12 logged out. Waiting for processes to exit. Feb 13 19:30:56.972399 systemd-logind[1485]: Removed session 12. Feb 13 19:30:58.794460 containerd[1502]: time="2025-02-13T19:30:58.794407821Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:30:58.805050 containerd[1502]: time="2025-02-13T19:30:58.804987847Z" level=error msg="Failed to destroy network for sandbox \"06a7a23e17b4ca63f57567aca5f35e0cc59719129998a1820716ea386ece0d87\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:58.805391 containerd[1502]: time="2025-02-13T19:30:58.805361319Z" level=error msg="encountered an error cleaning up failed sandbox \"06a7a23e17b4ca63f57567aca5f35e0cc59719129998a1820716ea386ece0d87\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:58.805449 containerd[1502]: time="2025-02-13T19:30:58.805426251Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-gvd66,Uid:f2d2c41c-272d-4c74-897d-79c94986b647,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"06a7a23e17b4ca63f57567aca5f35e0cc59719129998a1820716ea386ece0d87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:58.805725 kubelet[2600]: E0213 19:30:58.805677 2600 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06a7a23e17b4ca63f57567aca5f35e0cc59719129998a1820716ea386ece0d87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:58.806107 kubelet[2600]: E0213 19:30:58.805760 2600 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown 
desc = failed to setup network for sandbox \"06a7a23e17b4ca63f57567aca5f35e0cc59719129998a1820716ea386ece0d87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-gvd66" Feb 13 19:30:58.806107 kubelet[2600]: E0213 19:30:58.805788 2600 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06a7a23e17b4ca63f57567aca5f35e0cc59719129998a1820716ea386ece0d87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-gvd66" Feb 13 19:30:58.806107 kubelet[2600]: E0213 19:30:58.805850 2600 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-gvd66_kube-system(f2d2c41c-272d-4c74-897d-79c94986b647)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-gvd66_kube-system(f2d2c41c-272d-4c74-897d-79c94986b647)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"06a7a23e17b4ca63f57567aca5f35e0cc59719129998a1820716ea386ece0d87\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-gvd66" podUID="f2d2c41c-272d-4c74-897d-79c94986b647" Feb 13 19:30:58.898611 containerd[1502]: time="2025-02-13T19:30:58.898549664Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Feb 13 19:30:58.905248 containerd[1502]: time="2025-02-13T19:30:58.904867728Z" level=error msg="Failed to destroy network for sandbox \"dff7a908d6d80b53c7af4ce2f86bc9e08a85c956a953efb2b1f0977164b5d965\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:58.905484 containerd[1502]: time="2025-02-13T19:30:58.905455972Z" level=error msg="encountered an error cleaning up failed sandbox \"dff7a908d6d80b53c7af4ce2f86bc9e08a85c956a953efb2b1f0977164b5d965\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:58.905542 containerd[1502]: time="2025-02-13T19:30:58.905520754Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6794f4445b-7cftj,Uid:9bf3d155-222d-46fc-9867-b55f1df961f7,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"dff7a908d6d80b53c7af4ce2f86bc9e08a85c956a953efb2b1f0977164b5d965\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:58.906363 kubelet[2600]: E0213 19:30:58.905989 2600 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dff7a908d6d80b53c7af4ce2f86bc9e08a85c956a953efb2b1f0977164b5d965\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:58.906363 kubelet[2600]: E0213 19:30:58.906107 2600 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dff7a908d6d80b53c7af4ce2f86bc9e08a85c956a953efb2b1f0977164b5d965\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6794f4445b-7cftj" Feb 13 19:30:58.906363 kubelet[2600]: E0213 19:30:58.906154 2600 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dff7a908d6d80b53c7af4ce2f86bc9e08a85c956a953efb2b1f0977164b5d965\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6794f4445b-7cftj" Feb 13 19:30:58.906477 kubelet[2600]: E0213 19:30:58.906397 2600 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6794f4445b-7cftj_calico-apiserver(9bf3d155-222d-46fc-9867-b55f1df961f7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6794f4445b-7cftj_calico-apiserver(9bf3d155-222d-46fc-9867-b55f1df961f7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dff7a908d6d80b53c7af4ce2f86bc9e08a85c956a953efb2b1f0977164b5d965\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6794f4445b-7cftj" podUID="9bf3d155-222d-46fc-9867-b55f1df961f7" Feb 13 19:30:58.930181 containerd[1502]: time="2025-02-13T19:30:58.930015439Z" level=error msg="Failed to destroy network for sandbox \"ca70552ffab426979d1de2107ddb87e6551aa9adf1dd6f0eb5a66684f19cdda4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:58.931390 containerd[1502]: time="2025-02-13T19:30:58.931339445Z" level=error msg="encountered an error cleaning up failed sandbox \"ca70552ffab426979d1de2107ddb87e6551aa9adf1dd6f0eb5a66684f19cdda4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:58.931488 containerd[1502]: time="2025-02-13T19:30:58.931412452Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cfdf6,Uid:61978848-76c7-4692-bab4-3c8c891d5468,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"ca70552ffab426979d1de2107ddb87e6551aa9adf1dd6f0eb5a66684f19cdda4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:58.934385 kubelet[2600]: E0213 19:30:58.931628 2600 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"ca70552ffab426979d1de2107ddb87e6551aa9adf1dd6f0eb5a66684f19cdda4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:58.934452 kubelet[2600]: E0213 19:30:58.934418 2600 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca70552ffab426979d1de2107ddb87e6551aa9adf1dd6f0eb5a66684f19cdda4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cfdf6" Feb 13 19:30:58.934488 kubelet[2600]: E0213 19:30:58.934442 2600 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca70552ffab426979d1de2107ddb87e6551aa9adf1dd6f0eb5a66684f19cdda4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cfdf6" Feb 13 19:30:58.934810 kubelet[2600]: E0213 19:30:58.934539 2600 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-cfdf6_calico-system(61978848-76c7-4692-bab4-3c8c891d5468)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-cfdf6_calico-system(61978848-76c7-4692-bab4-3c8c891d5468)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ca70552ffab426979d1de2107ddb87e6551aa9adf1dd6f0eb5a66684f19cdda4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cfdf6" podUID="61978848-76c7-4692-bab4-3c8c891d5468" Feb 13 19:30:58.973904 containerd[1502]: time="2025-02-13T19:30:58.973852976Z" level=error msg="Failed to destroy network for sandbox \"d5c2f498a17882b9810f07e4c11851a73e26bc1d64eec044bd1728166fa3fbcf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:58.974245 containerd[1502]: time="2025-02-13T19:30:58.974211779Z" level=error msg="encountered an error cleaning up failed sandbox \"d5c2f498a17882b9810f07e4c11851a73e26bc1d64eec044bd1728166fa3fbcf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:58.974289 containerd[1502]: time="2025-02-13T19:30:58.974273566Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68b8f8cf95-qdbnh,Uid:d38faaa4-494c-4c4e-87ff-0a00aa82bf88,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"d5c2f498a17882b9810f07e4c11851a73e26bc1d64eec044bd1728166fa3fbcf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:58.974594 kubelet[2600]: E0213 19:30:58.974532 2600 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"d5c2f498a17882b9810f07e4c11851a73e26bc1d64eec044bd1728166fa3fbcf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:58.974594 kubelet[2600]: E0213 19:30:58.974595 2600 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5c2f498a17882b9810f07e4c11851a73e26bc1d64eec044bd1728166fa3fbcf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68b8f8cf95-qdbnh" Feb 13 19:30:58.974765 kubelet[2600]: E0213 19:30:58.974613 2600 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5c2f498a17882b9810f07e4c11851a73e26bc1d64eec044bd1728166fa3fbcf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68b8f8cf95-qdbnh" Feb 13 19:30:58.974765 kubelet[2600]: E0213 19:30:58.974654 2600 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-68b8f8cf95-qdbnh_calico-system(d38faaa4-494c-4c4e-87ff-0a00aa82bf88)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-68b8f8cf95-qdbnh_calico-system(d38faaa4-494c-4c4e-87ff-0a00aa82bf88)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d5c2f498a17882b9810f07e4c11851a73e26bc1d64eec044bd1728166fa3fbcf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-68b8f8cf95-qdbnh" podUID="d38faaa4-494c-4c4e-87ff-0a00aa82bf88" Feb 13 19:30:59.017584 containerd[1502]: time="2025-02-13T19:30:59.017520710Z" level=error msg="Failed to destroy network for sandbox \"c2dc30846fa58b1ecad4344a3d9824db5c7ab58d70e840480ea575cd61100f5f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:59.017961 containerd[1502]: time="2025-02-13T19:30:59.017928767Z" level=error msg="encountered an error cleaning up failed sandbox \"c2dc30846fa58b1ecad4344a3d9824db5c7ab58d70e840480ea575cd61100f5f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:59.018026 containerd[1502]: time="2025-02-13T19:30:59.018002665Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6794f4445b-q9q8b,Uid:9eba37d3-14ec-4521-9302-789cbdb496aa,Namespace:calico-apiserver,Attempt:5,} failed, error" error="failed to setup network for sandbox \"c2dc30846fa58b1ecad4344a3d9824db5c7ab58d70e840480ea575cd61100f5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Feb 13 19:30:59.018299 kubelet[2600]: E0213 19:30:59.018246 2600 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2dc30846fa58b1ecad4344a3d9824db5c7ab58d70e840480ea575cd61100f5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:59.018374 kubelet[2600]: E0213 19:30:59.018340 2600 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2dc30846fa58b1ecad4344a3d9824db5c7ab58d70e840480ea575cd61100f5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6794f4445b-q9q8b" Feb 13 19:30:59.018374 kubelet[2600]: E0213 19:30:59.018359 2600 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2dc30846fa58b1ecad4344a3d9824db5c7ab58d70e840480ea575cd61100f5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6794f4445b-q9q8b" Feb 13 19:30:59.018431 kubelet[2600]: E0213 19:30:59.018404 2600 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6794f4445b-q9q8b_calico-apiserver(9eba37d3-14ec-4521-9302-789cbdb496aa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6794f4445b-q9q8b_calico-apiserver(9eba37d3-14ec-4521-9302-789cbdb496aa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c2dc30846fa58b1ecad4344a3d9824db5c7ab58d70e840480ea575cd61100f5f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6794f4445b-q9q8b" podUID="9eba37d3-14ec-4521-9302-789cbdb496aa" Feb 13 19:30:59.030652 containerd[1502]: time="2025-02-13T19:30:59.030600730Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:30:59.051337 kubelet[2600]: I0213 19:30:59.051145 2600 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06a7a23e17b4ca63f57567aca5f35e0cc59719129998a1820716ea386ece0d87" Feb 13 19:30:59.052390 containerd[1502]: time="2025-02-13T19:30:59.052349095Z" level=info msg="StopPodSandbox for \"06a7a23e17b4ca63f57567aca5f35e0cc59719129998a1820716ea386ece0d87\"" Feb 13 19:30:59.052637 containerd[1502]: time="2025-02-13T19:30:59.052618791Z" level=info msg="Ensure that sandbox 06a7a23e17b4ca63f57567aca5f35e0cc59719129998a1820716ea386ece0d87 in task-service has been cleanup successfully" Feb 13 19:30:59.052934 containerd[1502]: time="2025-02-13T19:30:59.052907653Z" level=info msg="TearDown network for sandbox \"06a7a23e17b4ca63f57567aca5f35e0cc59719129998a1820716ea386ece0d87\" successfully" Feb 13 19:30:59.052934 containerd[1502]: time="2025-02-13T19:30:59.052927260Z" level=info msg="StopPodSandbox for 
\"06a7a23e17b4ca63f57567aca5f35e0cc59719129998a1820716ea386ece0d87\" returns successfully" Feb 13 19:30:59.053454 containerd[1502]: time="2025-02-13T19:30:59.053287737Z" level=info msg="StopPodSandbox for \"cdeca7bc8c5d40767aa8b82f58ac84479cad41960dfa49f467a04bf46aa03b20\"" Feb 13 19:30:59.053454 containerd[1502]: time="2025-02-13T19:30:59.053399206Z" level=info msg="TearDown network for sandbox \"cdeca7bc8c5d40767aa8b82f58ac84479cad41960dfa49f467a04bf46aa03b20\" successfully" Feb 13 19:30:59.053454 containerd[1502]: time="2025-02-13T19:30:59.053409536Z" level=info msg="StopPodSandbox for \"cdeca7bc8c5d40767aa8b82f58ac84479cad41960dfa49f467a04bf46aa03b20\" returns successfully" Feb 13 19:30:59.054053 containerd[1502]: time="2025-02-13T19:30:59.054022015Z" level=info msg="StopPodSandbox for \"68b36cedfb71a4b13925ca9c776e9f716e294ad3c104d860f484f0a21e16f665\"" Feb 13 19:30:59.055127 containerd[1502]: time="2025-02-13T19:30:59.054112185Z" level=info msg="TearDown network for sandbox \"68b36cedfb71a4b13925ca9c776e9f716e294ad3c104d860f484f0a21e16f665\" successfully" Feb 13 19:30:59.055127 containerd[1502]: time="2025-02-13T19:30:59.054127734Z" level=info msg="StopPodSandbox for \"68b36cedfb71a4b13925ca9c776e9f716e294ad3c104d860f484f0a21e16f665\" returns successfully" Feb 13 19:30:59.055127 containerd[1502]: time="2025-02-13T19:30:59.054329763Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:30:59.055127 containerd[1502]: time="2025-02-13T19:30:59.054890676Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 12.102357409s" Feb 13 19:30:59.055127 containerd[1502]: time="2025-02-13T19:30:59.054911545Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Feb 13 19:30:59.055127 containerd[1502]: time="2025-02-13T19:30:59.055023707Z" level=error msg="Failed to destroy network for sandbox \"70f85222cbcb79f1b19958776e833cf3199d71272c3f88786a10d32eeba936eb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:59.055300 containerd[1502]: time="2025-02-13T19:30:59.055218462Z" level=info msg="StopPodSandbox for \"919def795fcabe325c259e12ac19e82f4306388eba272371855df017ef8e2d5d\"" Feb 13 19:30:59.055354 containerd[1502]: time="2025-02-13T19:30:59.055328068Z" level=info msg="TearDown network for sandbox \"919def795fcabe325c259e12ac19e82f4306388eba272371855df017ef8e2d5d\" successfully" Feb 13 19:30:59.055354 containerd[1502]: time="2025-02-13T19:30:59.055339289Z" level=info msg="StopPodSandbox for \"919def795fcabe325c259e12ac19e82f4306388eba272371855df017ef8e2d5d\" returns successfully" Feb 13 19:30:59.056368 containerd[1502]: time="2025-02-13T19:30:59.055723440Z" level=error msg="encountered an error cleaning up failed sandbox \"70f85222cbcb79f1b19958776e833cf3199d71272c3f88786a10d32eeba936eb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:59.056368 containerd[1502]: time="2025-02-13T19:30:59.055786699Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bxg8d,Uid:402f0fa8-6d39-4a67-b618-1d216e220aea,Namespace:kube-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"70f85222cbcb79f1b19958776e833cf3199d71272c3f88786a10d32eeba936eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:59.056368 containerd[1502]: time="2025-02-13T19:30:59.056346961Z" level=info msg="StopPodSandbox for \"9a576abef7e3f4083c481695c9f493c39417d50066da6a1bc9040a8a2fdf6340\"" Feb 13 19:30:59.056585 containerd[1502]: time="2025-02-13T19:30:59.056418896Z" level=info msg="TearDown network for sandbox \"9a576abef7e3f4083c481695c9f493c39417d50066da6a1bc9040a8a2fdf6340\" successfully" Feb 13 19:30:59.056585 containerd[1502]: time="2025-02-13T19:30:59.056429105Z" level=info msg="StopPodSandbox for \"9a576abef7e3f4083c481695c9f493c39417d50066da6a1bc9040a8a2fdf6340\" returns successfully" Feb 13 19:30:59.056631 kubelet[2600]: E0213 19:30:59.055979 2600 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70f85222cbcb79f1b19958776e833cf3199d71272c3f88786a10d32eeba936eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:59.056631 kubelet[2600]: E0213 19:30:59.056020 2600 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70f85222cbcb79f1b19958776e833cf3199d71272c3f88786a10d32eeba936eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-bxg8d" Feb 13 19:30:59.056631 kubelet[2600]: E0213 19:30:59.056041 2600 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70f85222cbcb79f1b19958776e833cf3199d71272c3f88786a10d32eeba936eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-bxg8d" Feb 13 19:30:59.056724 kubelet[2600]: E0213 19:30:59.056077 2600 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-bxg8d_kube-system(402f0fa8-6d39-4a67-b618-1d216e220aea)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-bxg8d_kube-system(402f0fa8-6d39-4a67-b618-1d216e220aea)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"70f85222cbcb79f1b19958776e833cf3199d71272c3f88786a10d32eeba936eb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-bxg8d" podUID="402f0fa8-6d39-4a67-b618-1d216e220aea" Feb 13 19:30:59.056724 kubelet[2600]: E0213 
19:30:59.056555 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:30:59.056807 containerd[1502]: time="2025-02-13T19:30:59.056775386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-gvd66,Uid:f2d2c41c-272d-4c74-897d-79c94986b647,Namespace:kube-system,Attempt:5,}" Feb 13 19:30:59.057447 kubelet[2600]: I0213 19:30:59.057417 2600 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dff7a908d6d80b53c7af4ce2f86bc9e08a85c956a953efb2b1f0977164b5d965" Feb 13 19:30:59.057826 containerd[1502]: time="2025-02-13T19:30:59.057804317Z" level=info msg="StopPodSandbox for \"dff7a908d6d80b53c7af4ce2f86bc9e08a85c956a953efb2b1f0977164b5d965\"" Feb 13 19:30:59.058238 containerd[1502]: time="2025-02-13T19:30:59.058094703Z" level=info msg="Ensure that sandbox dff7a908d6d80b53c7af4ce2f86bc9e08a85c956a953efb2b1f0977164b5d965 in task-service has been cleanup successfully" Feb 13 19:30:59.058430 containerd[1502]: time="2025-02-13T19:30:59.058406257Z" level=info msg="TearDown network for sandbox \"dff7a908d6d80b53c7af4ce2f86bc9e08a85c956a953efb2b1f0977164b5d965\" successfully" Feb 13 19:30:59.058430 containerd[1502]: time="2025-02-13T19:30:59.058423139Z" level=info msg="StopPodSandbox for \"dff7a908d6d80b53c7af4ce2f86bc9e08a85c956a953efb2b1f0977164b5d965\" returns successfully" Feb 13 19:30:59.067014 containerd[1502]: time="2025-02-13T19:30:59.066960199Z" level=info msg="CreateContainer within sandbox \"059b614fd2f892cff198cd86b990eba2edb5278b2c86b8b9dd583b1b74bf0292\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 19:30:59.067128 containerd[1502]: time="2025-02-13T19:30:59.066973494Z" level=info msg="StopPodSandbox for \"9b60fb3683d6c0de790a6b7336486ff63455b4499390069c146ad9dffdf344af\"" Feb 13 19:30:59.067224 containerd[1502]: time="2025-02-13T19:30:59.067194618Z" level=info msg="TearDown network for sandbox \"9b60fb3683d6c0de790a6b7336486ff63455b4499390069c146ad9dffdf344af\" successfully" Feb 13 19:30:59.067224 containerd[1502]: time="2025-02-13T19:30:59.067210378Z" level=info msg="StopPodSandbox for \"9b60fb3683d6c0de790a6b7336486ff63455b4499390069c146ad9dffdf344af\" returns successfully" Feb 13 19:30:59.067684 containerd[1502]: time="2025-02-13T19:30:59.067662687Z" level=info msg="StopPodSandbox for \"ddacbc97dad8b654492a3ede378ba89ed873bd2206e50dd63b006c03446e6466\"" Feb 13 19:30:59.067749 containerd[1502]: time="2025-02-13T19:30:59.067735323Z" level=info msg="TearDown network for sandbox \"ddacbc97dad8b654492a3ede378ba89ed873bd2206e50dd63b006c03446e6466\" successfully" Feb 13 19:30:59.067771 containerd[1502]: time="2025-02-13T19:30:59.067747226Z" level=info msg="StopPodSandbox for \"ddacbc97dad8b654492a3ede378ba89ed873bd2206e50dd63b006c03446e6466\" returns successfully" Feb 13 19:30:59.068559 containerd[1502]: time="2025-02-13T19:30:59.068528703Z" level=info msg="StopPodSandbox for \"d128712c7ee019c611fbc612ab74775c9fe61e0291a6f84232248274b76f860a\"" Feb 13 19:30:59.068638 containerd[1502]: time="2025-02-13T19:30:59.068615977Z" level=info msg="TearDown network for sandbox \"d128712c7ee019c611fbc612ab74775c9fe61e0291a6f84232248274b76f860a\" successfully" Feb 13 19:30:59.068666 containerd[1502]: time="2025-02-13T19:30:59.068636646Z" level=info msg="StopPodSandbox for \"d128712c7ee019c611fbc612ab74775c9fe61e0291a6f84232248274b76f860a\" returns successfully" Feb 13 19:30:59.068975 
containerd[1502]: time="2025-02-13T19:30:59.068942691Z" level=info msg="StopPodSandbox for \"b5ee6d231558bb9e6055e52293f90a21d1ac84e5b91aa9ef485efd32a23f0760\"" Feb 13 19:30:59.069043 containerd[1502]: time="2025-02-13T19:30:59.069027830Z" level=info msg="TearDown network for sandbox \"b5ee6d231558bb9e6055e52293f90a21d1ac84e5b91aa9ef485efd32a23f0760\" successfully" Feb 13 19:30:59.069043 containerd[1502]: time="2025-02-13T19:30:59.069041065Z" level=info msg="StopPodSandbox for \"b5ee6d231558bb9e6055e52293f90a21d1ac84e5b91aa9ef485efd32a23f0760\" returns successfully" Feb 13 19:30:59.069892 kubelet[2600]: I0213 19:30:59.069866 2600 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2dc30846fa58b1ecad4344a3d9824db5c7ab58d70e840480ea575cd61100f5f" Feb 13 19:30:59.070187 containerd[1502]: time="2025-02-13T19:30:59.070154426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6794f4445b-7cftj,Uid:9bf3d155-222d-46fc-9867-b55f1df961f7,Namespace:calico-apiserver,Attempt:5,}" Feb 13 19:30:59.072442 containerd[1502]: time="2025-02-13T19:30:59.072420862Z" level=info msg="StopPodSandbox for \"c2dc30846fa58b1ecad4344a3d9824db5c7ab58d70e840480ea575cd61100f5f\"" Feb 13 19:30:59.072619 containerd[1502]: time="2025-02-13T19:30:59.072605217Z" level=info msg="Ensure that sandbox c2dc30846fa58b1ecad4344a3d9824db5c7ab58d70e840480ea575cd61100f5f in task-service has been cleanup successfully" Feb 13 19:30:59.074507 containerd[1502]: time="2025-02-13T19:30:59.074444552Z" level=info msg="TearDown network for sandbox \"c2dc30846fa58b1ecad4344a3d9824db5c7ab58d70e840480ea575cd61100f5f\" successfully" Feb 13 19:30:59.074507 containerd[1502]: time="2025-02-13T19:30:59.074461403Z" level=info msg="StopPodSandbox for \"c2dc30846fa58b1ecad4344a3d9824db5c7ab58d70e840480ea575cd61100f5f\" returns successfully" Feb 13 19:30:59.075645 containerd[1502]: time="2025-02-13T19:30:59.075622112Z" level=info msg="StopPodSandbox for \"da21058157a478777dd793b0c03fcbc23d8d5e245d11dd697c88feb605601cc5\"" Feb 13 19:30:59.075716 containerd[1502]: time="2025-02-13T19:30:59.075699327Z" level=info msg="TearDown network for sandbox \"da21058157a478777dd793b0c03fcbc23d8d5e245d11dd697c88feb605601cc5\" successfully" Feb 13 19:30:59.075716 containerd[1502]: time="2025-02-13T19:30:59.075711981Z" level=info msg="StopPodSandbox for \"da21058157a478777dd793b0c03fcbc23d8d5e245d11dd697c88feb605601cc5\" returns successfully" Feb 13 19:30:59.076061 containerd[1502]: time="2025-02-13T19:30:59.076027353Z" level=info msg="StopPodSandbox for \"6d5d01f5764dfa313010e551a484fd652da553aef24f0843408d75f6c678d0dc\"" Feb 13 19:30:59.076131 containerd[1502]: time="2025-02-13T19:30:59.076114857Z" level=info msg="TearDown network for sandbox \"6d5d01f5764dfa313010e551a484fd652da553aef24f0843408d75f6c678d0dc\" successfully" Feb 13 19:30:59.076155 containerd[1502]: time="2025-02-13T19:30:59.076128753Z" level=info msg="StopPodSandbox for \"6d5d01f5764dfa313010e551a484fd652da553aef24f0843408d75f6c678d0dc\" returns successfully" Feb 13 19:30:59.076393 containerd[1502]: time="2025-02-13T19:30:59.076363254Z" level=info msg="StopPodSandbox for \"92aa5e65cd130c3a58ac37b4adf370bc37fd5b279afabfde37dbc8bf70e38b6a\"" Feb 13 19:30:59.076461 containerd[1502]: time="2025-02-13T19:30:59.076437984Z" level=info msg="TearDown network for sandbox \"92aa5e65cd130c3a58ac37b4adf370bc37fd5b279afabfde37dbc8bf70e38b6a\" successfully" Feb 13 19:30:59.076574 containerd[1502]: time="2025-02-13T19:30:59.076553952Z" level=info 
msg="StopPodSandbox for \"92aa5e65cd130c3a58ac37b4adf370bc37fd5b279afabfde37dbc8bf70e38b6a\" returns successfully" Feb 13 19:30:59.076825 containerd[1502]: time="2025-02-13T19:30:59.076751814Z" level=info msg="StopPodSandbox for \"029de7693a9f8fae3fc5f15fc92d9b4380854aa9a9856579c8e3ed5055510b90\"" Feb 13 19:30:59.076886 containerd[1502]: time="2025-02-13T19:30:59.076840410Z" level=info msg="TearDown network for sandbox \"029de7693a9f8fae3fc5f15fc92d9b4380854aa9a9856579c8e3ed5055510b90\" successfully" Feb 13 19:30:59.076886 containerd[1502]: time="2025-02-13T19:30:59.076851370Z" level=info msg="StopPodSandbox for \"029de7693a9f8fae3fc5f15fc92d9b4380854aa9a9856579c8e3ed5055510b90\" returns successfully" Feb 13 19:30:59.077086 containerd[1502]: time="2025-02-13T19:30:59.077046496Z" level=info msg="StopPodSandbox for \"259a6c2041cb9f5c709110e833b792e9227b4e6e9df73f046bd2d34be5e0fa0a\"" Feb 13 19:30:59.077138 containerd[1502]: time="2025-02-13T19:30:59.077121337Z" level=info msg="TearDown network for sandbox \"259a6c2041cb9f5c709110e833b792e9227b4e6e9df73f046bd2d34be5e0fa0a\" successfully" Feb 13 19:30:59.077138 containerd[1502]: time="2025-02-13T19:30:59.077136606Z" level=info msg="StopPodSandbox for \"259a6c2041cb9f5c709110e833b792e9227b4e6e9df73f046bd2d34be5e0fa0a\" returns successfully" Feb 13 19:30:59.077417 kubelet[2600]: I0213 19:30:59.077398 2600 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5c2f498a17882b9810f07e4c11851a73e26bc1d64eec044bd1728166fa3fbcf" Feb 13 19:30:59.077465 containerd[1502]: time="2025-02-13T19:30:59.077429746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6794f4445b-q9q8b,Uid:9eba37d3-14ec-4521-9302-789cbdb496aa,Namespace:calico-apiserver,Attempt:6,}" Feb 13 19:30:59.077835 containerd[1502]: time="2025-02-13T19:30:59.077810391Z" level=info msg="StopPodSandbox for \"d5c2f498a17882b9810f07e4c11851a73e26bc1d64eec044bd1728166fa3fbcf\"" Feb 13 19:30:59.078155 containerd[1502]: time="2025-02-13T19:30:59.078021097Z" level=info msg="Ensure that sandbox d5c2f498a17882b9810f07e4c11851a73e26bc1d64eec044bd1728166fa3fbcf in task-service has been cleanup successfully" Feb 13 19:30:59.078263 containerd[1502]: time="2025-02-13T19:30:59.078242031Z" level=info msg="TearDown network for sandbox \"d5c2f498a17882b9810f07e4c11851a73e26bc1d64eec044bd1728166fa3fbcf\" successfully" Feb 13 19:30:59.078263 containerd[1502]: time="2025-02-13T19:30:59.078258432Z" level=info msg="StopPodSandbox for \"d5c2f498a17882b9810f07e4c11851a73e26bc1d64eec044bd1728166fa3fbcf\" returns successfully" Feb 13 19:30:59.078773 containerd[1502]: time="2025-02-13T19:30:59.078660938Z" level=info msg="StopPodSandbox for \"737878d4cc9b2ec49100882d0cb04450e80ddc4b588a5fc065c8bb77559dc6ae\"" Feb 13 19:30:59.078773 containerd[1502]: time="2025-02-13T19:30:59.078759964Z" level=info msg="TearDown network for sandbox \"737878d4cc9b2ec49100882d0cb04450e80ddc4b588a5fc065c8bb77559dc6ae\" successfully" Feb 13 19:30:59.078827 containerd[1502]: time="2025-02-13T19:30:59.078775693Z" level=info msg="StopPodSandbox for \"737878d4cc9b2ec49100882d0cb04450e80ddc4b588a5fc065c8bb77559dc6ae\" returns successfully" Feb 13 19:30:59.079062 containerd[1502]: time="2025-02-13T19:30:59.079035291Z" level=info msg="StopPodSandbox for \"238e6f92adeb80ea69f9c84d0bfdba6cf65400a9690652509bdfbb946b589a8f\"" Feb 13 19:30:59.079139 containerd[1502]: time="2025-02-13T19:30:59.079122474Z" level=info msg="TearDown network for sandbox 
\"238e6f92adeb80ea69f9c84d0bfdba6cf65400a9690652509bdfbb946b589a8f\" successfully" Feb 13 19:30:59.079139 containerd[1502]: time="2025-02-13T19:30:59.079135970Z" level=info msg="StopPodSandbox for \"238e6f92adeb80ea69f9c84d0bfdba6cf65400a9690652509bdfbb946b589a8f\" returns successfully" Feb 13 19:30:59.079602 containerd[1502]: time="2025-02-13T19:30:59.079455760Z" level=info msg="StopPodSandbox for \"9a99dde00b5a0c5753dc22e79a23fa5cb756241ff3d7fe4eed74bd21fcf14d6f\"" Feb 13 19:30:59.079602 containerd[1502]: time="2025-02-13T19:30:59.079539287Z" level=info msg="TearDown network for sandbox \"9a99dde00b5a0c5753dc22e79a23fa5cb756241ff3d7fe4eed74bd21fcf14d6f\" successfully" Feb 13 19:30:59.079602 containerd[1502]: time="2025-02-13T19:30:59.079548745Z" level=info msg="StopPodSandbox for \"9a99dde00b5a0c5753dc22e79a23fa5cb756241ff3d7fe4eed74bd21fcf14d6f\" returns successfully" Feb 13 19:30:59.079690 kubelet[2600]: I0213 19:30:59.079568 2600 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca70552ffab426979d1de2107ddb87e6551aa9adf1dd6f0eb5a66684f19cdda4" Feb 13 19:30:59.079801 containerd[1502]: time="2025-02-13T19:30:59.079772244Z" level=info msg="StopPodSandbox for \"fd20a9c9c675e34be18747d0be53656eb71ddfce7fadca259852046e986978fd\"" Feb 13 19:30:59.079878 containerd[1502]: time="2025-02-13T19:30:59.079849249Z" level=info msg="TearDown network for sandbox \"fd20a9c9c675e34be18747d0be53656eb71ddfce7fadca259852046e986978fd\" successfully" Feb 13 19:30:59.079878 containerd[1502]: time="2025-02-13T19:30:59.079865390Z" level=info msg="StopPodSandbox for \"fd20a9c9c675e34be18747d0be53656eb71ddfce7fadca259852046e986978fd\" returns successfully" Feb 13 19:30:59.080135 containerd[1502]: time="2025-02-13T19:30:59.080105740Z" level=info msg="StopPodSandbox for \"ca70552ffab426979d1de2107ddb87e6551aa9adf1dd6f0eb5a66684f19cdda4\"" Feb 13 19:30:59.080241 containerd[1502]: time="2025-02-13T19:30:59.080219153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68b8f8cf95-qdbnh,Uid:d38faaa4-494c-4c4e-87ff-0a00aa82bf88,Namespace:calico-system,Attempt:5,}" Feb 13 19:30:59.080285 containerd[1502]: time="2025-02-13T19:30:59.080271562Z" level=info msg="Ensure that sandbox ca70552ffab426979d1de2107ddb87e6551aa9adf1dd6f0eb5a66684f19cdda4 in task-service has been cleanup successfully" Feb 13 19:30:59.080445 containerd[1502]: time="2025-02-13T19:30:59.080429068Z" level=info msg="TearDown network for sandbox \"ca70552ffab426979d1de2107ddb87e6551aa9adf1dd6f0eb5a66684f19cdda4\" successfully" Feb 13 19:30:59.080489 containerd[1502]: time="2025-02-13T19:30:59.080444557Z" level=info msg="StopPodSandbox for \"ca70552ffab426979d1de2107ddb87e6551aa9adf1dd6f0eb5a66684f19cdda4\" returns successfully" Feb 13 19:30:59.080701 containerd[1502]: time="2025-02-13T19:30:59.080685590Z" level=info msg="StopPodSandbox for \"cf302ac4983e4b5752dc5e187cea36041dfa162432d3e8affbda52f81e45b173\"" Feb 13 19:30:59.080774 containerd[1502]: time="2025-02-13T19:30:59.080758096Z" level=info msg="TearDown network for sandbox \"cf302ac4983e4b5752dc5e187cea36041dfa162432d3e8affbda52f81e45b173\" successfully" Feb 13 19:30:59.080774 containerd[1502]: time="2025-02-13T19:30:59.080769718Z" level=info msg="StopPodSandbox for \"cf302ac4983e4b5752dc5e187cea36041dfa162432d3e8affbda52f81e45b173\" returns successfully" Feb 13 19:30:59.081033 containerd[1502]: time="2025-02-13T19:30:59.081009257Z" level=info msg="StopPodSandbox for \"8a68fb01407d9c834800a26cc46bd4437b62dc056b0e1350c4907a96b87c511d\"" 
Feb 13 19:30:59.081179 containerd[1502]: time="2025-02-13T19:30:59.081158007Z" level=info msg="TearDown network for sandbox \"8a68fb01407d9c834800a26cc46bd4437b62dc056b0e1350c4907a96b87c511d\" successfully" Feb 13 19:30:59.081179 containerd[1502]: time="2025-02-13T19:30:59.081173496Z" level=info msg="StopPodSandbox for \"8a68fb01407d9c834800a26cc46bd4437b62dc056b0e1350c4907a96b87c511d\" returns successfully" Feb 13 19:30:59.081472 containerd[1502]: time="2025-02-13T19:30:59.081447991Z" level=info msg="StopPodSandbox for \"06b76013e050990e084073338277aecc1ce3e9a19060a462af91ada291c27c21\"" Feb 13 19:30:59.081551 containerd[1502]: time="2025-02-13T19:30:59.081532960Z" level=info msg="TearDown network for sandbox \"06b76013e050990e084073338277aecc1ce3e9a19060a462af91ada291c27c21\" successfully" Feb 13 19:30:59.081551 containerd[1502]: time="2025-02-13T19:30:59.081546966Z" level=info msg="StopPodSandbox for \"06b76013e050990e084073338277aecc1ce3e9a19060a462af91ada291c27c21\" returns successfully" Feb 13 19:30:59.081827 containerd[1502]: time="2025-02-13T19:30:59.081803318Z" level=info msg="StopPodSandbox for \"4d57794a877b0b6953752ccedec2bd85d68373c09318c76ac2c537449d27f46f\"" Feb 13 19:30:59.081927 containerd[1502]: time="2025-02-13T19:30:59.081885131Z" level=info msg="TearDown network for sandbox \"4d57794a877b0b6953752ccedec2bd85d68373c09318c76ac2c537449d27f46f\" successfully" Feb 13 19:30:59.081961 containerd[1502]: time="2025-02-13T19:30:59.081926329Z" level=info msg="StopPodSandbox for \"4d57794a877b0b6953752ccedec2bd85d68373c09318c76ac2c537449d27f46f\" returns successfully" Feb 13 19:30:59.082274 containerd[1502]: time="2025-02-13T19:30:59.082250718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cfdf6,Uid:61978848-76c7-4692-bab4-3c8c891d5468,Namespace:calico-system,Attempt:5,}" Feb 13 19:30:59.394199 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c2dc30846fa58b1ecad4344a3d9824db5c7ab58d70e840480ea575cd61100f5f-shm.mount: Deactivated successfully. Feb 13 19:30:59.394308 systemd[1]: run-netns-cni\x2dc3232e93\x2dd2f1\x2dad7e\x2de102\x2d82a8c7c2cff6.mount: Deactivated successfully. Feb 13 19:30:59.394394 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ca70552ffab426979d1de2107ddb87e6551aa9adf1dd6f0eb5a66684f19cdda4-shm.mount: Deactivated successfully. Feb 13 19:30:59.394474 systemd[1]: run-netns-cni\x2d84da8729\x2dadcb\x2d7cb3\x2da19a\x2dab2e261f61a9.mount: Deactivated successfully. Feb 13 19:30:59.394543 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-06a7a23e17b4ca63f57567aca5f35e0cc59719129998a1820716ea386ece0d87-shm.mount: Deactivated successfully. 
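The run-netns-cni\x2d… and …-shm.mount units that systemd deactivates above are the per-sandbox network namespaces and shared-memory mounts being torn down; systemd escapes "/" as "-" and "-" as "\x2d" in unit names, which is why the IDs look mangled. A small Go sketch of reversing that escaping (an illustration of systemd's naming rules, not code from any component in this log):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// unescapeUnit reverses systemd unit-name escaping for a mount unit:
// "-" stands for "/" and "\xHH" encodes a literal byte (usually "\x2d" for "-").
func unescapeUnit(name string) string {
	name = strings.TrimSuffix(name, ".mount")
	var out strings.Builder
	for i := 0; i < len(name); i++ {
		switch {
		case strings.HasPrefix(name[i:], `\x`) && i+3 < len(name):
			if b, err := strconv.ParseUint(name[i+2:i+4], 16, 8); err == nil {
				out.WriteByte(byte(b))
				i += 3
				continue
			}
			out.WriteByte(name[i])
		case name[i] == '-':
			out.WriteByte('/')
		default:
			out.WriteByte(name[i])
		}
	}
	return "/" + out.String()
}

func main() {
	// One of the unit names from the log, decoded back to the netns path it mounts.
	fmt.Println(unescapeUnit(`run-netns-cni\x2dc3232e93\x2dd2f1\x2dad7e\x2de102\x2d82a8c7c2cff6.mount`))
	// Prints: /run/netns/cni-c3232e93-d2f1-ad7e-e102-82a8c7c2cff6
}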
Feb 13 19:30:59.825017 containerd[1502]: time="2025-02-13T19:30:59.824905984Z" level=info msg="CreateContainer within sandbox \"059b614fd2f892cff198cd86b990eba2edb5278b2c86b8b9dd583b1b74bf0292\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"db0145061e61c003803620fc37e74fcc83081188e6f8256df598803f5685625f\"" Feb 13 19:30:59.830597 containerd[1502]: time="2025-02-13T19:30:59.829011182Z" level=info msg="StartContainer for \"db0145061e61c003803620fc37e74fcc83081188e6f8256df598803f5685625f\"" Feb 13 19:30:59.954817 containerd[1502]: time="2025-02-13T19:30:59.954627921Z" level=error msg="Failed to destroy network for sandbox \"54b3ba93a73466d89b8ae0c6cf3e8e736b78e767106d426c2c1ec3a3288a83ac\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:59.955457 containerd[1502]: time="2025-02-13T19:30:59.955355237Z" level=error msg="encountered an error cleaning up failed sandbox \"54b3ba93a73466d89b8ae0c6cf3e8e736b78e767106d426c2c1ec3a3288a83ac\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:59.955457 containerd[1502]: time="2025-02-13T19:30:59.955413015Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6794f4445b-7cftj,Uid:9bf3d155-222d-46fc-9867-b55f1df961f7,Namespace:calico-apiserver,Attempt:5,} failed, error" error="failed to setup network for sandbox \"54b3ba93a73466d89b8ae0c6cf3e8e736b78e767106d426c2c1ec3a3288a83ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:59.956110 kubelet[2600]: E0213 19:30:59.955804 2600 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54b3ba93a73466d89b8ae0c6cf3e8e736b78e767106d426c2c1ec3a3288a83ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:59.956110 kubelet[2600]: E0213 19:30:59.955895 2600 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54b3ba93a73466d89b8ae0c6cf3e8e736b78e767106d426c2c1ec3a3288a83ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6794f4445b-7cftj" Feb 13 19:30:59.956110 kubelet[2600]: E0213 19:30:59.955941 2600 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54b3ba93a73466d89b8ae0c6cf3e8e736b78e767106d426c2c1ec3a3288a83ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6794f4445b-7cftj" Feb 13 19:30:59.956532 kubelet[2600]: E0213 19:30:59.956123 2600 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-6794f4445b-7cftj_calico-apiserver(9bf3d155-222d-46fc-9867-b55f1df961f7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6794f4445b-7cftj_calico-apiserver(9bf3d155-222d-46fc-9867-b55f1df961f7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"54b3ba93a73466d89b8ae0c6cf3e8e736b78e767106d426c2c1ec3a3288a83ac\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6794f4445b-7cftj" podUID="9bf3d155-222d-46fc-9867-b55f1df961f7" Feb 13 19:30:59.967544 systemd[1]: Started cri-containerd-db0145061e61c003803620fc37e74fcc83081188e6f8256df598803f5685625f.scope - libcontainer container db0145061e61c003803620fc37e74fcc83081188e6f8256df598803f5685625f. Feb 13 19:30:59.974863 containerd[1502]: time="2025-02-13T19:30:59.974793073Z" level=error msg="Failed to destroy network for sandbox \"da5ce96c4e01c16bcc0104e0c1afe1e549752b36b8b8ccf8dd21f08debce350e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:59.975886 containerd[1502]: time="2025-02-13T19:30:59.975817888Z" level=error msg="encountered an error cleaning up failed sandbox \"da5ce96c4e01c16bcc0104e0c1afe1e549752b36b8b8ccf8dd21f08debce350e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:59.976044 containerd[1502]: time="2025-02-13T19:30:59.975941109Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-gvd66,Uid:f2d2c41c-272d-4c74-897d-79c94986b647,Namespace:kube-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"da5ce96c4e01c16bcc0104e0c1afe1e549752b36b8b8ccf8dd21f08debce350e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:59.977028 kubelet[2600]: E0213 19:30:59.976972 2600 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da5ce96c4e01c16bcc0104e0c1afe1e549752b36b8b8ccf8dd21f08debce350e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:59.977097 kubelet[2600]: E0213 19:30:59.977056 2600 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da5ce96c4e01c16bcc0104e0c1afe1e549752b36b8b8ccf8dd21f08debce350e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-gvd66" Feb 13 19:30:59.977097 kubelet[2600]: E0213 19:30:59.977082 2600 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da5ce96c4e01c16bcc0104e0c1afe1e549752b36b8b8ccf8dd21f08debce350e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-gvd66" Feb 13 19:30:59.977155 kubelet[2600]: E0213 19:30:59.977128 2600 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-gvd66_kube-system(f2d2c41c-272d-4c74-897d-79c94986b647)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-gvd66_kube-system(f2d2c41c-272d-4c74-897d-79c94986b647)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"da5ce96c4e01c16bcc0104e0c1afe1e549752b36b8b8ccf8dd21f08debce350e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-gvd66" podUID="f2d2c41c-272d-4c74-897d-79c94986b647" Feb 13 19:30:59.977632 containerd[1502]: time="2025-02-13T19:30:59.977300481Z" level=error msg="Failed to destroy network for sandbox \"11c47a7f5cb5d79bd9c9a79d6da7f46398332e0fada4bee2607023589053bac7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:59.977632 containerd[1502]: time="2025-02-13T19:30:59.977535352Z" level=error msg="Failed to destroy network for sandbox \"8b5ece25a3742c4f88914be474455742b0444f124acba7c26fc8cc6a2d292b53\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:59.978365 containerd[1502]: time="2025-02-13T19:30:59.978020964Z" level=error msg="encountered an error cleaning up failed sandbox \"11c47a7f5cb5d79bd9c9a79d6da7f46398332e0fada4bee2607023589053bac7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:59.978365 containerd[1502]: time="2025-02-13T19:30:59.978089553Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6794f4445b-q9q8b,Uid:9eba37d3-14ec-4521-9302-789cbdb496aa,Namespace:calico-apiserver,Attempt:6,} failed, error" error="failed to setup network for sandbox \"11c47a7f5cb5d79bd9c9a79d6da7f46398332e0fada4bee2607023589053bac7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:59.978665 kubelet[2600]: E0213 19:30:59.978212 2600 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11c47a7f5cb5d79bd9c9a79d6da7f46398332e0fada4bee2607023589053bac7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:59.978665 kubelet[2600]: E0213 19:30:59.978275 2600 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11c47a7f5cb5d79bd9c9a79d6da7f46398332e0fada4bee2607023589053bac7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6794f4445b-q9q8b" Feb 13 19:30:59.978665 kubelet[2600]: E0213 19:30:59.978291 2600 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11c47a7f5cb5d79bd9c9a79d6da7f46398332e0fada4bee2607023589053bac7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6794f4445b-q9q8b" Feb 13 19:30:59.978772 kubelet[2600]: E0213 19:30:59.978339 2600 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6794f4445b-q9q8b_calico-apiserver(9eba37d3-14ec-4521-9302-789cbdb496aa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6794f4445b-q9q8b_calico-apiserver(9eba37d3-14ec-4521-9302-789cbdb496aa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"11c47a7f5cb5d79bd9c9a79d6da7f46398332e0fada4bee2607023589053bac7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6794f4445b-q9q8b" podUID="9eba37d3-14ec-4521-9302-789cbdb496aa" Feb 13 19:30:59.979359 containerd[1502]: time="2025-02-13T19:30:59.979069873Z" level=error msg="encountered an error cleaning up failed sandbox \"8b5ece25a3742c4f88914be474455742b0444f124acba7c26fc8cc6a2d292b53\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:59.979359 containerd[1502]: time="2025-02-13T19:30:59.979118254Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68b8f8cf95-qdbnh,Uid:d38faaa4-494c-4c4e-87ff-0a00aa82bf88,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"8b5ece25a3742c4f88914be474455742b0444f124acba7c26fc8cc6a2d292b53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:59.979702 kubelet[2600]: E0213 19:30:59.979235 2600 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b5ece25a3742c4f88914be474455742b0444f124acba7c26fc8cc6a2d292b53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:30:59.979702 kubelet[2600]: E0213 19:30:59.979262 2600 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b5ece25a3742c4f88914be474455742b0444f124acba7c26fc8cc6a2d292b53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68b8f8cf95-qdbnh" Feb 13 19:30:59.979702 kubelet[2600]: E0213 19:30:59.979275 2600 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b5ece25a3742c4f88914be474455742b0444f124acba7c26fc8cc6a2d292b53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68b8f8cf95-qdbnh" Feb 13 19:30:59.980700 kubelet[2600]: E0213 19:30:59.979297 2600 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-68b8f8cf95-qdbnh_calico-system(d38faaa4-494c-4c4e-87ff-0a00aa82bf88)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-68b8f8cf95-qdbnh_calico-system(d38faaa4-494c-4c4e-87ff-0a00aa82bf88)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8b5ece25a3742c4f88914be474455742b0444f124acba7c26fc8cc6a2d292b53\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-68b8f8cf95-qdbnh" podUID="d38faaa4-494c-4c4e-87ff-0a00aa82bf88" Feb 13 19:31:00.084384 kubelet[2600]: I0213 19:31:00.084247 2600 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b5ece25a3742c4f88914be474455742b0444f124acba7c26fc8cc6a2d292b53" Feb 13 19:31:00.085053 containerd[1502]: time="2025-02-13T19:31:00.085021006Z" level=info msg="StopPodSandbox for \"8b5ece25a3742c4f88914be474455742b0444f124acba7c26fc8cc6a2d292b53\"" Feb 13 19:31:00.085465 containerd[1502]: time="2025-02-13T19:31:00.085432148Z" level=info msg="Ensure that sandbox 8b5ece25a3742c4f88914be474455742b0444f124acba7c26fc8cc6a2d292b53 in task-service has been cleanup successfully" Feb 13 19:31:00.085759 containerd[1502]: time="2025-02-13T19:31:00.085719077Z" level=info msg="TearDown network for sandbox \"8b5ece25a3742c4f88914be474455742b0444f124acba7c26fc8cc6a2d292b53\" successfully" Feb 13 19:31:00.085851 containerd[1502]: time="2025-02-13T19:31:00.085835235Z" level=info msg="StopPodSandbox for \"8b5ece25a3742c4f88914be474455742b0444f124acba7c26fc8cc6a2d292b53\" returns successfully" Feb 13 19:31:00.086609 containerd[1502]: time="2025-02-13T19:31:00.086587858Z" level=info msg="StopPodSandbox for \"d5c2f498a17882b9810f07e4c11851a73e26bc1d64eec044bd1728166fa3fbcf\"" Feb 13 19:31:00.086768 containerd[1502]: time="2025-02-13T19:31:00.086752227Z" level=info msg="TearDown network for sandbox \"d5c2f498a17882b9810f07e4c11851a73e26bc1d64eec044bd1728166fa3fbcf\" successfully" Feb 13 19:31:00.086827 containerd[1502]: time="2025-02-13T19:31:00.086814113Z" level=info msg="StopPodSandbox for \"d5c2f498a17882b9810f07e4c11851a73e26bc1d64eec044bd1728166fa3fbcf\" returns successfully" Feb 13 19:31:00.087354 containerd[1502]: time="2025-02-13T19:31:00.087210537Z" level=info msg="StopPodSandbox for \"737878d4cc9b2ec49100882d0cb04450e80ddc4b588a5fc065c8bb77559dc6ae\"" Feb 13 19:31:00.087354 containerd[1502]: time="2025-02-13T19:31:00.087297570Z" level=info msg="TearDown network for sandbox \"737878d4cc9b2ec49100882d0cb04450e80ddc4b588a5fc065c8bb77559dc6ae\" successfully" Feb 13 19:31:00.087464 containerd[1502]: time="2025-02-13T19:31:00.087443724Z" level=info msg="StopPodSandbox for \"737878d4cc9b2ec49100882d0cb04450e80ddc4b588a5fc065c8bb77559dc6ae\" returns successfully" Feb 13 19:31:00.087961 containerd[1502]: time="2025-02-13T19:31:00.087933474Z" level=info 
msg="StopPodSandbox for \"238e6f92adeb80ea69f9c84d0bfdba6cf65400a9690652509bdfbb946b589a8f\"" Feb 13 19:31:00.088119 containerd[1502]: time="2025-02-13T19:31:00.088103374Z" level=info msg="TearDown network for sandbox \"238e6f92adeb80ea69f9c84d0bfdba6cf65400a9690652509bdfbb946b589a8f\" successfully" Feb 13 19:31:00.088178 containerd[1502]: time="2025-02-13T19:31:00.088164729Z" level=info msg="StopPodSandbox for \"238e6f92adeb80ea69f9c84d0bfdba6cf65400a9690652509bdfbb946b589a8f\" returns successfully" Feb 13 19:31:00.088607 containerd[1502]: time="2025-02-13T19:31:00.088587873Z" level=info msg="StopPodSandbox for \"9a99dde00b5a0c5753dc22e79a23fa5cb756241ff3d7fe4eed74bd21fcf14d6f\"" Feb 13 19:31:00.088757 containerd[1502]: time="2025-02-13T19:31:00.088741491Z" level=info msg="TearDown network for sandbox \"9a99dde00b5a0c5753dc22e79a23fa5cb756241ff3d7fe4eed74bd21fcf14d6f\" successfully" Feb 13 19:31:00.088829 containerd[1502]: time="2025-02-13T19:31:00.088807695Z" level=info msg="StopPodSandbox for \"9a99dde00b5a0c5753dc22e79a23fa5cb756241ff3d7fe4eed74bd21fcf14d6f\" returns successfully" Feb 13 19:31:00.089501 containerd[1502]: time="2025-02-13T19:31:00.089482082Z" level=info msg="StopPodSandbox for \"fd20a9c9c675e34be18747d0be53656eb71ddfce7fadca259852046e986978fd\"" Feb 13 19:31:00.089768 containerd[1502]: time="2025-02-13T19:31:00.089750485Z" level=info msg="TearDown network for sandbox \"fd20a9c9c675e34be18747d0be53656eb71ddfce7fadca259852046e986978fd\" successfully" Feb 13 19:31:00.089834 containerd[1502]: time="2025-02-13T19:31:00.089820186Z" level=info msg="StopPodSandbox for \"fd20a9c9c675e34be18747d0be53656eb71ddfce7fadca259852046e986978fd\" returns successfully" Feb 13 19:31:00.090410 containerd[1502]: time="2025-02-13T19:31:00.090387111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68b8f8cf95-qdbnh,Uid:d38faaa4-494c-4c4e-87ff-0a00aa82bf88,Namespace:calico-system,Attempt:6,}" Feb 13 19:31:00.091151 kubelet[2600]: I0213 19:31:00.091097 2600 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54b3ba93a73466d89b8ae0c6cf3e8e736b78e767106d426c2c1ec3a3288a83ac" Feb 13 19:31:00.092232 containerd[1502]: time="2025-02-13T19:31:00.091870697Z" level=info msg="StopPodSandbox for \"54b3ba93a73466d89b8ae0c6cf3e8e736b78e767106d426c2c1ec3a3288a83ac\"" Feb 13 19:31:00.092232 containerd[1502]: time="2025-02-13T19:31:00.092061965Z" level=info msg="Ensure that sandbox 54b3ba93a73466d89b8ae0c6cf3e8e736b78e767106d426c2c1ec3a3288a83ac in task-service has been cleanup successfully" Feb 13 19:31:00.092706 containerd[1502]: time="2025-02-13T19:31:00.092677531Z" level=info msg="TearDown network for sandbox \"54b3ba93a73466d89b8ae0c6cf3e8e736b78e767106d426c2c1ec3a3288a83ac\" successfully" Feb 13 19:31:00.092824 containerd[1502]: time="2025-02-13T19:31:00.092807956Z" level=info msg="StopPodSandbox for \"54b3ba93a73466d89b8ae0c6cf3e8e736b78e767106d426c2c1ec3a3288a83ac\" returns successfully" Feb 13 19:31:00.093254 containerd[1502]: time="2025-02-13T19:31:00.093235850Z" level=info msg="StopPodSandbox for \"dff7a908d6d80b53c7af4ce2f86bc9e08a85c956a953efb2b1f0977164b5d965\"" Feb 13 19:31:00.093487 containerd[1502]: time="2025-02-13T19:31:00.093469568Z" level=info msg="TearDown network for sandbox \"dff7a908d6d80b53c7af4ce2f86bc9e08a85c956a953efb2b1f0977164b5d965\" successfully" Feb 13 19:31:00.093552 containerd[1502]: time="2025-02-13T19:31:00.093538387Z" level=info msg="StopPodSandbox for 
\"dff7a908d6d80b53c7af4ce2f86bc9e08a85c956a953efb2b1f0977164b5d965\" returns successfully" Feb 13 19:31:00.094001 containerd[1502]: time="2025-02-13T19:31:00.093981619Z" level=info msg="StopPodSandbox for \"9b60fb3683d6c0de790a6b7336486ff63455b4499390069c146ad9dffdf344af\"" Feb 13 19:31:00.094271 containerd[1502]: time="2025-02-13T19:31:00.094253180Z" level=info msg="TearDown network for sandbox \"9b60fb3683d6c0de790a6b7336486ff63455b4499390069c146ad9dffdf344af\" successfully" Feb 13 19:31:00.094358 containerd[1502]: time="2025-02-13T19:31:00.094342978Z" level=info msg="StopPodSandbox for \"9b60fb3683d6c0de790a6b7336486ff63455b4499390069c146ad9dffdf344af\" returns successfully" Feb 13 19:31:00.094650 containerd[1502]: time="2025-02-13T19:31:00.094631791Z" level=info msg="StopPodSandbox for \"ddacbc97dad8b654492a3ede378ba89ed873bd2206e50dd63b006c03446e6466\"" Feb 13 19:31:00.094809 containerd[1502]: time="2025-02-13T19:31:00.094793674Z" level=info msg="TearDown network for sandbox \"ddacbc97dad8b654492a3ede378ba89ed873bd2206e50dd63b006c03446e6466\" successfully" Feb 13 19:31:00.094869 containerd[1502]: time="2025-02-13T19:31:00.094856111Z" level=info msg="StopPodSandbox for \"ddacbc97dad8b654492a3ede378ba89ed873bd2206e50dd63b006c03446e6466\" returns successfully" Feb 13 19:31:00.095381 containerd[1502]: time="2025-02-13T19:31:00.095362552Z" level=info msg="StopPodSandbox for \"d128712c7ee019c611fbc612ab74775c9fe61e0291a6f84232248274b76f860a\"" Feb 13 19:31:00.095532 containerd[1502]: time="2025-02-13T19:31:00.095511281Z" level=info msg="TearDown network for sandbox \"d128712c7ee019c611fbc612ab74775c9fe61e0291a6f84232248274b76f860a\" successfully" Feb 13 19:31:00.095748 containerd[1502]: time="2025-02-13T19:31:00.095577115Z" level=info msg="StopPodSandbox for \"d128712c7ee019c611fbc612ab74775c9fe61e0291a6f84232248274b76f860a\" returns successfully" Feb 13 19:31:00.096780 containerd[1502]: time="2025-02-13T19:31:00.096436458Z" level=info msg="StopPodSandbox for \"b5ee6d231558bb9e6055e52293f90a21d1ac84e5b91aa9ef485efd32a23f0760\"" Feb 13 19:31:00.096780 containerd[1502]: time="2025-02-13T19:31:00.096526518Z" level=info msg="TearDown network for sandbox \"b5ee6d231558bb9e6055e52293f90a21d1ac84e5b91aa9ef485efd32a23f0760\" successfully" Feb 13 19:31:00.096780 containerd[1502]: time="2025-02-13T19:31:00.096543390Z" level=info msg="StopPodSandbox for \"b5ee6d231558bb9e6055e52293f90a21d1ac84e5b91aa9ef485efd32a23f0760\" returns successfully" Feb 13 19:31:00.097704 containerd[1502]: time="2025-02-13T19:31:00.097415938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6794f4445b-7cftj,Uid:9bf3d155-222d-46fc-9867-b55f1df961f7,Namespace:calico-apiserver,Attempt:6,}" Feb 13 19:31:00.099459 kubelet[2600]: I0213 19:31:00.099442 2600 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70f85222cbcb79f1b19958776e833cf3199d71272c3f88786a10d32eeba936eb" Feb 13 19:31:00.100058 containerd[1502]: time="2025-02-13T19:31:00.100038762Z" level=info msg="StopPodSandbox for \"70f85222cbcb79f1b19958776e833cf3199d71272c3f88786a10d32eeba936eb\"" Feb 13 19:31:00.100300 containerd[1502]: time="2025-02-13T19:31:00.100281999Z" level=info msg="Ensure that sandbox 70f85222cbcb79f1b19958776e833cf3199d71272c3f88786a10d32eeba936eb in task-service has been cleanup successfully" Feb 13 19:31:00.100611 containerd[1502]: time="2025-02-13T19:31:00.100593964Z" level=info msg="TearDown network for sandbox \"70f85222cbcb79f1b19958776e833cf3199d71272c3f88786a10d32eeba936eb\" 
successfully" Feb 13 19:31:00.100900 containerd[1502]: time="2025-02-13T19:31:00.100677060Z" level=info msg="StopPodSandbox for \"70f85222cbcb79f1b19958776e833cf3199d71272c3f88786a10d32eeba936eb\" returns successfully" Feb 13 19:31:00.101408 containerd[1502]: time="2025-02-13T19:31:00.101383637Z" level=info msg="StopPodSandbox for \"29892c77e2068e69cefb658e7f5afd0cfb682cfc7981da5bee2ea2b80177b4bc\"" Feb 13 19:31:00.101549 containerd[1502]: time="2025-02-13T19:31:00.101533498Z" level=info msg="TearDown network for sandbox \"29892c77e2068e69cefb658e7f5afd0cfb682cfc7981da5bee2ea2b80177b4bc\" successfully" Feb 13 19:31:00.101625 containerd[1502]: time="2025-02-13T19:31:00.101603950Z" level=info msg="StopPodSandbox for \"29892c77e2068e69cefb658e7f5afd0cfb682cfc7981da5bee2ea2b80177b4bc\" returns successfully" Feb 13 19:31:00.102044 containerd[1502]: time="2025-02-13T19:31:00.102024610Z" level=info msg="StopPodSandbox for \"3ce1ab7c9e3b75163c498673b614ea401518c42e9012bdd7e9cd033bba480db0\"" Feb 13 19:31:00.102336 containerd[1502]: time="2025-02-13T19:31:00.102162749Z" level=info msg="TearDown network for sandbox \"3ce1ab7c9e3b75163c498673b614ea401518c42e9012bdd7e9cd033bba480db0\" successfully" Feb 13 19:31:00.102336 containerd[1502]: time="2025-02-13T19:31:00.102181575Z" level=info msg="StopPodSandbox for \"3ce1ab7c9e3b75163c498673b614ea401518c42e9012bdd7e9cd033bba480db0\" returns successfully" Feb 13 19:31:00.281505 containerd[1502]: time="2025-02-13T19:31:00.280440020Z" level=info msg="StopPodSandbox for \"a615d24e84e74129058bc9c7cb028777bb16a481796829c1a98ceaf03287d6c5\"" Feb 13 19:31:00.281505 containerd[1502]: time="2025-02-13T19:31:00.280620688Z" level=info msg="TearDown network for sandbox \"a615d24e84e74129058bc9c7cb028777bb16a481796829c1a98ceaf03287d6c5\" successfully" Feb 13 19:31:00.281505 containerd[1502]: time="2025-02-13T19:31:00.280634835Z" level=info msg="StopPodSandbox for \"a615d24e84e74129058bc9c7cb028777bb16a481796829c1a98ceaf03287d6c5\" returns successfully" Feb 13 19:31:00.291094 containerd[1502]: time="2025-02-13T19:31:00.291040461Z" level=info msg="StopPodSandbox for \"1deac3e8f1315485d945ed4d7ff726c16660ae79ed6279b69ac45afcd92e7fed\"" Feb 13 19:31:00.291688 containerd[1502]: time="2025-02-13T19:31:00.291632082Z" level=info msg="TearDown network for sandbox \"1deac3e8f1315485d945ed4d7ff726c16660ae79ed6279b69ac45afcd92e7fed\" successfully" Feb 13 19:31:00.291688 containerd[1502]: time="2025-02-13T19:31:00.291681074Z" level=info msg="StopPodSandbox for \"1deac3e8f1315485d945ed4d7ff726c16660ae79ed6279b69ac45afcd92e7fed\" returns successfully" Feb 13 19:31:00.292707 kubelet[2600]: I0213 19:31:00.292668 2600 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da5ce96c4e01c16bcc0104e0c1afe1e549752b36b8b8ccf8dd21f08debce350e" Feb 13 19:31:00.293510 containerd[1502]: time="2025-02-13T19:31:00.293483227Z" level=info msg="StopPodSandbox for \"da5ce96c4e01c16bcc0104e0c1afe1e549752b36b8b8ccf8dd21f08debce350e\"" Feb 13 19:31:00.293679 containerd[1502]: time="2025-02-13T19:31:00.293653196Z" level=info msg="Ensure that sandbox da5ce96c4e01c16bcc0104e0c1afe1e549752b36b8b8ccf8dd21f08debce350e in task-service has been cleanup successfully" Feb 13 19:31:00.294092 containerd[1502]: time="2025-02-13T19:31:00.294064959Z" level=info msg="TearDown network for sandbox \"da5ce96c4e01c16bcc0104e0c1afe1e549752b36b8b8ccf8dd21f08debce350e\" successfully" Feb 13 19:31:00.294388 containerd[1502]: time="2025-02-13T19:31:00.294239337Z" level=info msg="StopPodSandbox for 
\"da5ce96c4e01c16bcc0104e0c1afe1e549752b36b8b8ccf8dd21f08debce350e\" returns successfully" Feb 13 19:31:00.294388 containerd[1502]: time="2025-02-13T19:31:00.294301514Z" level=info msg="StopPodSandbox for \"197b50055a8d04a83b5b696198cca280193728180dffd10631ed830366a0c893\"" Feb 13 19:31:00.294510 containerd[1502]: time="2025-02-13T19:31:00.294495468Z" level=info msg="TearDown network for sandbox \"197b50055a8d04a83b5b696198cca280193728180dffd10631ed830366a0c893\" successfully" Feb 13 19:31:00.294565 containerd[1502]: time="2025-02-13T19:31:00.294552885Z" level=info msg="StopPodSandbox for \"197b50055a8d04a83b5b696198cca280193728180dffd10631ed830366a0c893\" returns successfully" Feb 13 19:31:00.294879 kubelet[2600]: E0213 19:31:00.294853 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:31:00.295662 containerd[1502]: time="2025-02-13T19:31:00.295495475Z" level=info msg="StopPodSandbox for \"06a7a23e17b4ca63f57567aca5f35e0cc59719129998a1820716ea386ece0d87\"" Feb 13 19:31:00.295662 containerd[1502]: time="2025-02-13T19:31:00.295613768Z" level=info msg="TearDown network for sandbox \"06a7a23e17b4ca63f57567aca5f35e0cc59719129998a1820716ea386ece0d87\" successfully" Feb 13 19:31:00.295662 containerd[1502]: time="2025-02-13T19:31:00.295624037Z" level=info msg="StopPodSandbox for \"06a7a23e17b4ca63f57567aca5f35e0cc59719129998a1820716ea386ece0d87\" returns successfully" Feb 13 19:31:00.296191 containerd[1502]: time="2025-02-13T19:31:00.295980946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bxg8d,Uid:402f0fa8-6d39-4a67-b618-1d216e220aea,Namespace:kube-system,Attempt:6,}" Feb 13 19:31:00.296990 containerd[1502]: time="2025-02-13T19:31:00.296972257Z" level=info msg="StopPodSandbox for \"cdeca7bc8c5d40767aa8b82f58ac84479cad41960dfa49f467a04bf46aa03b20\"" Feb 13 19:31:00.297139 containerd[1502]: time="2025-02-13T19:31:00.297123712Z" level=info msg="TearDown network for sandbox \"cdeca7bc8c5d40767aa8b82f58ac84479cad41960dfa49f467a04bf46aa03b20\" successfully" Feb 13 19:31:00.297196 containerd[1502]: time="2025-02-13T19:31:00.297184757Z" level=info msg="StopPodSandbox for \"cdeca7bc8c5d40767aa8b82f58ac84479cad41960dfa49f467a04bf46aa03b20\" returns successfully" Feb 13 19:31:00.298137 containerd[1502]: time="2025-02-13T19:31:00.298116947Z" level=info msg="StopPodSandbox for \"68b36cedfb71a4b13925ca9c776e9f716e294ad3c104d860f484f0a21e16f665\"" Feb 13 19:31:00.299327 containerd[1502]: time="2025-02-13T19:31:00.299113678Z" level=info msg="TearDown network for sandbox \"68b36cedfb71a4b13925ca9c776e9f716e294ad3c104d860f484f0a21e16f665\" successfully" Feb 13 19:31:00.299434 containerd[1502]: time="2025-02-13T19:31:00.299418140Z" level=info msg="StopPodSandbox for \"68b36cedfb71a4b13925ca9c776e9f716e294ad3c104d860f484f0a21e16f665\" returns successfully" Feb 13 19:31:00.299528 containerd[1502]: time="2025-02-13T19:31:00.299241899Z" level=error msg="Failed to destroy network for sandbox \"c049dfb5c4e4f7b160d052644270f741c94db9967ffc69c3eaa01c0f5ec5566f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:31:00.299979 containerd[1502]: time="2025-02-13T19:31:00.299934379Z" level=info msg="StopPodSandbox for \"919def795fcabe325c259e12ac19e82f4306388eba272371855df017ef8e2d5d\"" Feb 13 19:31:00.300301 
containerd[1502]: time="2025-02-13T19:31:00.299996366Z" level=error msg="encountered an error cleaning up failed sandbox \"c049dfb5c4e4f7b160d052644270f741c94db9967ffc69c3eaa01c0f5ec5566f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:31:00.300301 containerd[1502]: time="2025-02-13T19:31:00.300203584Z" level=info msg="TearDown network for sandbox \"919def795fcabe325c259e12ac19e82f4306388eba272371855df017ef8e2d5d\" successfully" Feb 13 19:31:00.300301 containerd[1502]: time="2025-02-13T19:31:00.300217691Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cfdf6,Uid:61978848-76c7-4692-bab4-3c8c891d5468,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"c049dfb5c4e4f7b160d052644270f741c94db9967ffc69c3eaa01c0f5ec5566f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:31:00.300301 containerd[1502]: time="2025-02-13T19:31:00.300220175Z" level=info msg="StopPodSandbox for \"919def795fcabe325c259e12ac19e82f4306388eba272371855df017ef8e2d5d\" returns successfully" Feb 13 19:31:00.300596 kubelet[2600]: E0213 19:31:00.300553 2600 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c049dfb5c4e4f7b160d052644270f741c94db9967ffc69c3eaa01c0f5ec5566f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:31:00.300664 kubelet[2600]: E0213 19:31:00.300602 2600 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c049dfb5c4e4f7b160d052644270f741c94db9967ffc69c3eaa01c0f5ec5566f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cfdf6" Feb 13 19:31:00.300664 kubelet[2600]: E0213 19:31:00.300622 2600 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c049dfb5c4e4f7b160d052644270f741c94db9967ffc69c3eaa01c0f5ec5566f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cfdf6" Feb 13 19:31:00.300722 containerd[1502]: time="2025-02-13T19:31:00.300647247Z" level=info msg="StopPodSandbox for \"9a576abef7e3f4083c481695c9f493c39417d50066da6a1bc9040a8a2fdf6340\"" Feb 13 19:31:00.300746 kubelet[2600]: E0213 19:31:00.300656 2600 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-cfdf6_calico-system(61978848-76c7-4692-bab4-3c8c891d5468)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-cfdf6_calico-system(61978848-76c7-4692-bab4-3c8c891d5468)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c049dfb5c4e4f7b160d052644270f741c94db9967ffc69c3eaa01c0f5ec5566f\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cfdf6" podUID="61978848-76c7-4692-bab4-3c8c891d5468" Feb 13 19:31:00.300810 containerd[1502]: time="2025-02-13T19:31:00.300725915Z" level=info msg="TearDown network for sandbox \"9a576abef7e3f4083c481695c9f493c39417d50066da6a1bc9040a8a2fdf6340\" successfully" Feb 13 19:31:00.300810 containerd[1502]: time="2025-02-13T19:31:00.300736144Z" level=info msg="StopPodSandbox for \"9a576abef7e3f4083c481695c9f493c39417d50066da6a1bc9040a8a2fdf6340\" returns successfully" Feb 13 19:31:00.300927 kubelet[2600]: E0213 19:31:00.300908 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:31:00.301506 containerd[1502]: time="2025-02-13T19:31:00.301469531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-gvd66,Uid:f2d2c41c-272d-4c74-897d-79c94986b647,Namespace:kube-system,Attempt:6,}" Feb 13 19:31:00.394261 systemd[1]: run-netns-cni\x2d7a8a8ec1\x2d914d\x2d340c\x2dbf3f\x2dfdf62d4eb3e8.mount: Deactivated successfully. Feb 13 19:31:00.394373 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-da5ce96c4e01c16bcc0104e0c1afe1e549752b36b8b8ccf8dd21f08debce350e-shm.mount: Deactivated successfully. Feb 13 19:31:00.394452 systemd[1]: run-netns-cni\x2d4df98455\x2d0645\x2d4418\x2d5cf6\x2dc3a8b1f0b543.mount: Deactivated successfully. Feb 13 19:31:00.394523 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8b5ece25a3742c4f88914be474455742b0444f124acba7c26fc8cc6a2d292b53-shm.mount: Deactivated successfully. Feb 13 19:31:00.394603 systemd[1]: run-netns-cni\x2ddf12a573\x2d4ca3\x2d9de9\x2dfa4b\x2d76223dd3eedc.mount: Deactivated successfully. Feb 13 19:31:00.441404 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 13 19:31:00.441517 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Feb 13 19:31:00.537190 containerd[1502]: time="2025-02-13T19:31:00.537071416Z" level=info msg="StartContainer for \"db0145061e61c003803620fc37e74fcc83081188e6f8256df598803f5685625f\" returns successfully" Feb 13 19:31:00.537630 kubelet[2600]: I0213 19:31:00.537435 2600 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11c47a7f5cb5d79bd9c9a79d6da7f46398332e0fada4bee2607023589053bac7" Feb 13 19:31:00.537954 containerd[1502]: time="2025-02-13T19:31:00.537885044Z" level=info msg="StopPodSandbox for \"11c47a7f5cb5d79bd9c9a79d6da7f46398332e0fada4bee2607023589053bac7\"" Feb 13 19:31:00.538253 containerd[1502]: time="2025-02-13T19:31:00.538115977Z" level=info msg="Ensure that sandbox 11c47a7f5cb5d79bd9c9a79d6da7f46398332e0fada4bee2607023589053bac7 in task-service has been cleanup successfully" Feb 13 19:31:00.541954 containerd[1502]: time="2025-02-13T19:31:00.541806977Z" level=info msg="TearDown network for sandbox \"11c47a7f5cb5d79bd9c9a79d6da7f46398332e0fada4bee2607023589053bac7\" successfully" Feb 13 19:31:00.541954 containerd[1502]: time="2025-02-13T19:31:00.541839087Z" level=info msg="StopPodSandbox for \"11c47a7f5cb5d79bd9c9a79d6da7f46398332e0fada4bee2607023589053bac7\" returns successfully" Feb 13 19:31:00.542195 systemd[1]: run-netns-cni\x2dd23db16c\x2d1290\x2d38a3\x2dd5cf\x2de4b46bbc6018.mount: Deactivated successfully. 
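The recurring kubelet warning "Nameserver limits exceeded" comes from the glibc resolver's cap of three nameserver entries: when the node's resolv.conf lists more, kubelet applies only the first three, which is why the applied line in the log is exactly "1.1.1.1 1.0.0.1 8.8.8.8". A rough Go sketch of that truncation, assuming a standard resolv.conf layout (an illustration, not kubelet's dns.go):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // glibc resolver limit (MAXNS)

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	// Collect every "nameserver" entry in file order.
	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}

	if len(servers) > maxNameservers {
		// Mirrors the warning above: extra entries are dropped, first three kept.
		fmt.Printf("nameserver limit exceeded, applied line: %s\n",
			strings.Join(servers[:maxNameservers], " "))
		return
	}
	fmt.Printf("applied nameservers: %s\n", strings.Join(servers, " "))
}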
Feb 13 19:31:00.542724 containerd[1502]: time="2025-02-13T19:31:00.542701667Z" level=info msg="StopPodSandbox for \"c2dc30846fa58b1ecad4344a3d9824db5c7ab58d70e840480ea575cd61100f5f\"" Feb 13 19:31:00.542854 containerd[1502]: time="2025-02-13T19:31:00.542794190Z" level=info msg="TearDown network for sandbox \"c2dc30846fa58b1ecad4344a3d9824db5c7ab58d70e840480ea575cd61100f5f\" successfully" Feb 13 19:31:00.542854 containerd[1502]: time="2025-02-13T19:31:00.542806944Z" level=info msg="StopPodSandbox for \"c2dc30846fa58b1ecad4344a3d9824db5c7ab58d70e840480ea575cd61100f5f\" returns successfully" Feb 13 19:31:00.543173 containerd[1502]: time="2025-02-13T19:31:00.543145089Z" level=info msg="StopPodSandbox for \"da21058157a478777dd793b0c03fcbc23d8d5e245d11dd697c88feb605601cc5\"" Feb 13 19:31:00.543259 containerd[1502]: time="2025-02-13T19:31:00.543221833Z" level=info msg="TearDown network for sandbox \"da21058157a478777dd793b0c03fcbc23d8d5e245d11dd697c88feb605601cc5\" successfully" Feb 13 19:31:00.543259 containerd[1502]: time="2025-02-13T19:31:00.543231101Z" level=info msg="StopPodSandbox for \"da21058157a478777dd793b0c03fcbc23d8d5e245d11dd697c88feb605601cc5\" returns successfully" Feb 13 19:31:00.543530 containerd[1502]: time="2025-02-13T19:31:00.543425526Z" level=info msg="StopPodSandbox for \"6d5d01f5764dfa313010e551a484fd652da553aef24f0843408d75f6c678d0dc\"" Feb 13 19:31:00.543530 containerd[1502]: time="2025-02-13T19:31:00.543495848Z" level=info msg="TearDown network for sandbox \"6d5d01f5764dfa313010e551a484fd652da553aef24f0843408d75f6c678d0dc\" successfully" Feb 13 19:31:00.543530 containerd[1502]: time="2025-02-13T19:31:00.543504084Z" level=info msg="StopPodSandbox for \"6d5d01f5764dfa313010e551a484fd652da553aef24f0843408d75f6c678d0dc\" returns successfully" Feb 13 19:31:00.544041 containerd[1502]: time="2025-02-13T19:31:00.543878487Z" level=info msg="StopPodSandbox for \"92aa5e65cd130c3a58ac37b4adf370bc37fd5b279afabfde37dbc8bf70e38b6a\"" Feb 13 19:31:00.544041 containerd[1502]: time="2025-02-13T19:31:00.543990447Z" level=info msg="TearDown network for sandbox \"92aa5e65cd130c3a58ac37b4adf370bc37fd5b279afabfde37dbc8bf70e38b6a\" successfully" Feb 13 19:31:00.544041 containerd[1502]: time="2025-02-13T19:31:00.544000455Z" level=info msg="StopPodSandbox for \"92aa5e65cd130c3a58ac37b4adf370bc37fd5b279afabfde37dbc8bf70e38b6a\" returns successfully" Feb 13 19:31:00.544377 containerd[1502]: time="2025-02-13T19:31:00.544278527Z" level=info msg="StopPodSandbox for \"029de7693a9f8fae3fc5f15fc92d9b4380854aa9a9856579c8e3ed5055510b90\"" Feb 13 19:31:00.544487 containerd[1502]: time="2025-02-13T19:31:00.544474115Z" level=info msg="TearDown network for sandbox \"029de7693a9f8fae3fc5f15fc92d9b4380854aa9a9856579c8e3ed5055510b90\" successfully" Feb 13 19:31:00.544605 containerd[1502]: time="2025-02-13T19:31:00.544547042Z" level=info msg="StopPodSandbox for \"029de7693a9f8fae3fc5f15fc92d9b4380854aa9a9856579c8e3ed5055510b90\" returns successfully" Feb 13 19:31:00.544793 containerd[1502]: time="2025-02-13T19:31:00.544766984Z" level=info msg="StopPodSandbox for \"259a6c2041cb9f5c709110e833b792e9227b4e6e9df73f046bd2d34be5e0fa0a\"" Feb 13 19:31:00.544890 containerd[1502]: time="2025-02-13T19:31:00.544868225Z" level=info msg="TearDown network for sandbox \"259a6c2041cb9f5c709110e833b792e9227b4e6e9df73f046bd2d34be5e0fa0a\" successfully" Feb 13 19:31:00.544890 containerd[1502]: time="2025-02-13T19:31:00.544885477Z" level=info msg="StopPodSandbox for \"259a6c2041cb9f5c709110e833b792e9227b4e6e9df73f046bd2d34be5e0fa0a\" 
returns successfully" Feb 13 19:31:00.545416 containerd[1502]: time="2025-02-13T19:31:00.545389143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6794f4445b-q9q8b,Uid:9eba37d3-14ec-4521-9302-789cbdb496aa,Namespace:calico-apiserver,Attempt:7,}" Feb 13 19:31:00.920532 systemd-networkd[1430]: calia14dd751e1d: Link UP Feb 13 19:31:00.920823 systemd-networkd[1430]: calia14dd751e1d: Gained carrier Feb 13 19:31:00.933059 containerd[1502]: 2025-02-13 19:31:00.733 [INFO][4950] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:31:00.933059 containerd[1502]: 2025-02-13 19:31:00.749 [INFO][4950] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--gvd66-eth0 coredns-6f6b679f8f- kube-system f2d2c41c-272d-4c74-897d-79c94986b647 782 0 2025-02-13 19:30:21 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-gvd66 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia14dd751e1d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="f4d4de673b9f34c83ac2527ad2b735e74b4f78e6b54f17b92d5ec62ae7764fea" Namespace="kube-system" Pod="coredns-6f6b679f8f-gvd66" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--gvd66-" Feb 13 19:31:00.933059 containerd[1502]: 2025-02-13 19:31:00.750 [INFO][4950] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f4d4de673b9f34c83ac2527ad2b735e74b4f78e6b54f17b92d5ec62ae7764fea" Namespace="kube-system" Pod="coredns-6f6b679f8f-gvd66" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--gvd66-eth0" Feb 13 19:31:00.933059 containerd[1502]: 2025-02-13 19:31:00.838 [INFO][5026] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f4d4de673b9f34c83ac2527ad2b735e74b4f78e6b54f17b92d5ec62ae7764fea" HandleID="k8s-pod-network.f4d4de673b9f34c83ac2527ad2b735e74b4f78e6b54f17b92d5ec62ae7764fea" Workload="localhost-k8s-coredns--6f6b679f8f--gvd66-eth0" Feb 13 19:31:00.933059 containerd[1502]: 2025-02-13 19:31:00.851 [INFO][5026] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f4d4de673b9f34c83ac2527ad2b735e74b4f78e6b54f17b92d5ec62ae7764fea" HandleID="k8s-pod-network.f4d4de673b9f34c83ac2527ad2b735e74b4f78e6b54f17b92d5ec62ae7764fea" Workload="localhost-k8s-coredns--6f6b679f8f--gvd66-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032d830), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-gvd66", "timestamp":"2025-02-13 19:31:00.838622138 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:31:00.933059 containerd[1502]: 2025-02-13 19:31:00.852 [INFO][5026] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:31:00.933059 containerd[1502]: 2025-02-13 19:31:00.852 [INFO][5026] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
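Within the same CNI ADD trace, the plugin also notes that /var/lib/calico/mtu does not exist; that file is an MTU hint written on the host, and its absence simply means a configured or default value is used instead. A minimal Go sketch of reading such a hint file, where the 1500 fallback is an assumption for illustration rather than Calico's actual default-resolution logic:

package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

// readMTU returns the MTU recorded in path, or fallback when the file is
// absent or unparsable.
func readMTU(path string, fallback int) int {
	data, err := os.ReadFile(path)
	if err != nil {
		return fallback
	}
	if mtu, err := strconv.Atoi(strings.TrimSpace(string(data))); err == nil {
		return mtu
	}
	return fallback
}

func main() {
	// Path taken from the log; the fallback value is illustrative only.
	fmt.Println("veth MTU to use:", readMTU("/var/lib/calico/mtu", 1500))
}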
Feb 13 19:31:00.933059 containerd[1502]: 2025-02-13 19:31:00.852 [INFO][5026] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:31:00.933059 containerd[1502]: 2025-02-13 19:31:00.884 [INFO][5026] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f4d4de673b9f34c83ac2527ad2b735e74b4f78e6b54f17b92d5ec62ae7764fea" host="localhost" Feb 13 19:31:00.933059 containerd[1502]: 2025-02-13 19:31:00.889 [INFO][5026] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:31:00.933059 containerd[1502]: 2025-02-13 19:31:00.894 [INFO][5026] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:31:00.933059 containerd[1502]: 2025-02-13 19:31:00.896 [INFO][5026] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:31:00.933059 containerd[1502]: 2025-02-13 19:31:00.898 [INFO][5026] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:31:00.933059 containerd[1502]: 2025-02-13 19:31:00.898 [INFO][5026] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f4d4de673b9f34c83ac2527ad2b735e74b4f78e6b54f17b92d5ec62ae7764fea" host="localhost" Feb 13 19:31:00.933059 containerd[1502]: 2025-02-13 19:31:00.899 [INFO][5026] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f4d4de673b9f34c83ac2527ad2b735e74b4f78e6b54f17b92d5ec62ae7764fea Feb 13 19:31:00.933059 containerd[1502]: 2025-02-13 19:31:00.902 [INFO][5026] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f4d4de673b9f34c83ac2527ad2b735e74b4f78e6b54f17b92d5ec62ae7764fea" host="localhost" Feb 13 19:31:00.933059 containerd[1502]: 2025-02-13 19:31:00.908 [INFO][5026] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.f4d4de673b9f34c83ac2527ad2b735e74b4f78e6b54f17b92d5ec62ae7764fea" host="localhost" Feb 13 19:31:00.933059 containerd[1502]: 2025-02-13 19:31:00.908 [INFO][5026] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.f4d4de673b9f34c83ac2527ad2b735e74b4f78e6b54f17b92d5ec62ae7764fea" host="localhost" Feb 13 19:31:00.933059 containerd[1502]: 2025-02-13 19:31:00.908 [INFO][5026] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
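The IPAM exchange above is Calico's block-affinity allocation in full: request [5026] takes the host-wide IPAM lock, looks up the blocks affine to this node, confirms the affinity for 192.168.88.128/26, claims one address from the block (192.168.88.129), writes the block back to the datastore, and releases the lock. A /26 leaves six host bits, so the block spans 64 addresses, 192.168.88.128 through 192.168.88.191, and every workload address handed out below should fall inside it. A quick check with Go's net/netip, as a sketch rather than Calico code:

package main

import (
    "fmt"
    "net/netip"
)

func main() {
    // The affine block and the address claimed in the log above.
    block := netip.MustParsePrefix("192.168.88.128/26")
    claimed := netip.MustParseAddr("192.168.88.129")

    hostBits := 32 - block.Bits() // 6 host bits for a /26
    fmt.Printf("block %s holds %d addresses\n", block, 1<<hostBits)
    fmt.Printf("%s inside %s? %v\n", claimed, block, block.Contains(claimed))
    // Output:
    // block 192.168.88.128/26 holds 64 addresses
    // 192.168.88.129 inside 192.168.88.128/26? true
}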
Feb 13 19:31:00.933059 containerd[1502]: 2025-02-13 19:31:00.909 [INFO][5026] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="f4d4de673b9f34c83ac2527ad2b735e74b4f78e6b54f17b92d5ec62ae7764fea" HandleID="k8s-pod-network.f4d4de673b9f34c83ac2527ad2b735e74b4f78e6b54f17b92d5ec62ae7764fea" Workload="localhost-k8s-coredns--6f6b679f8f--gvd66-eth0" Feb 13 19:31:00.933980 containerd[1502]: 2025-02-13 19:31:00.913 [INFO][4950] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f4d4de673b9f34c83ac2527ad2b735e74b4f78e6b54f17b92d5ec62ae7764fea" Namespace="kube-system" Pod="coredns-6f6b679f8f-gvd66" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--gvd66-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--gvd66-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"f2d2c41c-272d-4c74-897d-79c94986b647", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 30, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-gvd66", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia14dd751e1d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:31:00.933980 containerd[1502]: 2025-02-13 19:31:00.913 [INFO][4950] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="f4d4de673b9f34c83ac2527ad2b735e74b4f78e6b54f17b92d5ec62ae7764fea" Namespace="kube-system" Pod="coredns-6f6b679f8f-gvd66" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--gvd66-eth0" Feb 13 19:31:00.933980 containerd[1502]: 2025-02-13 19:31:00.913 [INFO][4950] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia14dd751e1d ContainerID="f4d4de673b9f34c83ac2527ad2b735e74b4f78e6b54f17b92d5ec62ae7764fea" Namespace="kube-system" Pod="coredns-6f6b679f8f-gvd66" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--gvd66-eth0" Feb 13 19:31:00.933980 containerd[1502]: 2025-02-13 19:31:00.920 [INFO][4950] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f4d4de673b9f34c83ac2527ad2b735e74b4f78e6b54f17b92d5ec62ae7764fea" Namespace="kube-system" Pod="coredns-6f6b679f8f-gvd66" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--gvd66-eth0" Feb 13 19:31:00.933980 containerd[1502]: 2025-02-13 19:31:00.921 
[INFO][4950] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f4d4de673b9f34c83ac2527ad2b735e74b4f78e6b54f17b92d5ec62ae7764fea" Namespace="kube-system" Pod="coredns-6f6b679f8f-gvd66" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--gvd66-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--gvd66-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"f2d2c41c-272d-4c74-897d-79c94986b647", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 30, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f4d4de673b9f34c83ac2527ad2b735e74b4f78e6b54f17b92d5ec62ae7764fea", Pod:"coredns-6f6b679f8f-gvd66", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia14dd751e1d", MAC:"56:59:fa:58:d9:ef", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:31:00.933980 containerd[1502]: 2025-02-13 19:31:00.929 [INFO][4950] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f4d4de673b9f34c83ac2527ad2b735e74b4f78e6b54f17b92d5ec62ae7764fea" Namespace="kube-system" Pod="coredns-6f6b679f8f-gvd66" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--gvd66-eth0" Feb 13 19:31:00.987884 containerd[1502]: time="2025-02-13T19:31:00.987440450Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:31:00.987884 containerd[1502]: time="2025-02-13T19:31:00.987492598Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:31:00.987884 containerd[1502]: time="2025-02-13T19:31:00.987503458Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:31:00.987884 containerd[1502]: time="2025-02-13T19:31:00.987583388Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:31:01.012541 systemd[1]: Started cri-containerd-f4d4de673b9f34c83ac2527ad2b735e74b4f78e6b54f17b92d5ec62ae7764fea.scope - libcontainer container f4d4de673b9f34c83ac2527ad2b735e74b4f78e6b54f17b92d5ec62ae7764fea. 
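The "loading plugin io.containerd.*" messages are the runc shim (io.containerd.runc.v2) starting for the new sandbox, and the systemd entry that follows runs it inside a transient cri-containerd-<id>.scope unit, where the 64-character ID is the same sandbox ID that RunPodSandbox returns further down. To correlate those scope names with live containers you can ask containerd directly; a minimal sketch with the containerd Go client, assuming the default /run/containerd/containerd.sock socket and the k8s.io namespace that CRI uses:

package main

import (
    "context"
    "fmt"
    "log"

    "github.com/containerd/containerd"
    "github.com/containerd/containerd/namespaces"
)

func main() {
    client, err := containerd.New("/run/containerd/containerd.sock")
    if err != nil {
        log.Fatal(err)
    }
    defer client.Close()

    // CRI-managed pods and containers live in containerd's "k8s.io" namespace.
    ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
    containers, err := client.Containers(ctx)
    if err != nil {
        log.Fatal(err)
    }
    for _, c := range containers {
        // Each ID is the 64-hex-character string embedded in the
        // cri-containerd-<id>.scope unit names seen in the log.
        fmt.Println(c.ID())
    }
}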
Feb 13 19:31:01.017501 systemd-networkd[1430]: cali9801aa0cd7c: Link UP Feb 13 19:31:01.018162 systemd-networkd[1430]: cali9801aa0cd7c: Gained carrier Feb 13 19:31:01.027853 systemd-resolved[1338]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:31:01.035520 containerd[1502]: 2025-02-13 19:31:00.736 [INFO][4960] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:31:01.035520 containerd[1502]: 2025-02-13 19:31:00.754 [INFO][4960] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6794f4445b--7cftj-eth0 calico-apiserver-6794f4445b- calico-apiserver 9bf3d155-222d-46fc-9867-b55f1df961f7 783 0 2025-02-13 19:30:27 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6794f4445b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6794f4445b-7cftj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9801aa0cd7c [] []}} ContainerID="7c5c4fa9444c2bc1bfdbdd0bbc73ce86d02274f4d59ff297b58e7ab65a5fb05f" Namespace="calico-apiserver" Pod="calico-apiserver-6794f4445b-7cftj" WorkloadEndpoint="localhost-k8s-calico--apiserver--6794f4445b--7cftj-" Feb 13 19:31:01.035520 containerd[1502]: 2025-02-13 19:31:00.754 [INFO][4960] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7c5c4fa9444c2bc1bfdbdd0bbc73ce86d02274f4d59ff297b58e7ab65a5fb05f" Namespace="calico-apiserver" Pod="calico-apiserver-6794f4445b-7cftj" WorkloadEndpoint="localhost-k8s-calico--apiserver--6794f4445b--7cftj-eth0" Feb 13 19:31:01.035520 containerd[1502]: 2025-02-13 19:31:00.836 [INFO][5027] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7c5c4fa9444c2bc1bfdbdd0bbc73ce86d02274f4d59ff297b58e7ab65a5fb05f" HandleID="k8s-pod-network.7c5c4fa9444c2bc1bfdbdd0bbc73ce86d02274f4d59ff297b58e7ab65a5fb05f" Workload="localhost-k8s-calico--apiserver--6794f4445b--7cftj-eth0" Feb 13 19:31:01.035520 containerd[1502]: 2025-02-13 19:31:00.852 [INFO][5027] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7c5c4fa9444c2bc1bfdbdd0bbc73ce86d02274f4d59ff297b58e7ab65a5fb05f" HandleID="k8s-pod-network.7c5c4fa9444c2bc1bfdbdd0bbc73ce86d02274f4d59ff297b58e7ab65a5fb05f" Workload="localhost-k8s-calico--apiserver--6794f4445b--7cftj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00051dd10), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6794f4445b-7cftj", "timestamp":"2025-02-13 19:31:00.836547584 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:31:01.035520 containerd[1502]: 2025-02-13 19:31:00.852 [INFO][5027] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:31:01.035520 containerd[1502]: 2025-02-13 19:31:00.909 [INFO][5027] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
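Note how the IPAM requests are serialized: request [5026] (the coredns pod) held the host-wide lock from 00.852 until 00.908, and request [5027] (the first apiserver pod), which had been waiting since 00.852, only acquires it here at 00.909. Every CNI ADD on the host funnels through that lock and draws from the same affine block, which is why the workloads end up with consecutive addresses (.129 above, .130 and onwards below). A stripped-down illustration of the pattern, with a plain mutex standing in for Calico's datastore lock; this is purely illustrative, not Calico's implementation:

package main

import (
    "fmt"
    "net/netip"
    "sync"
)

// blockAllocator hands out addresses from one affine block, one caller at a time.
type blockAllocator struct {
    mu    sync.Mutex
    block netip.Prefix
    next  netip.Addr
}

func newBlockAllocator(block netip.Prefix) *blockAllocator {
    // Skip the network address itself, so a .128/26 block starts handing out .129.
    return &blockAllocator{block: block, next: block.Addr().Next()}
}

func (a *blockAllocator) assign() (netip.Addr, bool) {
    a.mu.Lock() // plays the role of the "host-wide IPAM lock" in the log
    defer a.mu.Unlock()
    if !a.block.Contains(a.next) {
        return netip.Addr{}, false // block exhausted
    }
    addr := a.next
    a.next = a.next.Next()
    return addr, true
}

func main() {
    alloc := newBlockAllocator(netip.MustParsePrefix("192.168.88.128/26"))
    var wg sync.WaitGroup
    for _, pod := range []string{"coredns-gvd66", "apiserver-7cftj", "coredns-bxg8d"} {
        wg.Add(1)
        go func(pod string) {
            defer wg.Done()
            if ip, ok := alloc.assign(); ok {
                // Which pod gets which address depends on lock order, as in the log.
                fmt.Printf("%s -> %s\n", pod, ip)
            }
        }(pod)
    }
    wg.Wait()
}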
Feb 13 19:31:01.035520 containerd[1502]: 2025-02-13 19:31:00.909 [INFO][5027] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:31:01.035520 containerd[1502]: 2025-02-13 19:31:00.985 [INFO][5027] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7c5c4fa9444c2bc1bfdbdd0bbc73ce86d02274f4d59ff297b58e7ab65a5fb05f" host="localhost" Feb 13 19:31:01.035520 containerd[1502]: 2025-02-13 19:31:00.990 [INFO][5027] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:31:01.035520 containerd[1502]: 2025-02-13 19:31:00.994 [INFO][5027] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:31:01.035520 containerd[1502]: 2025-02-13 19:31:00.996 [INFO][5027] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:31:01.035520 containerd[1502]: 2025-02-13 19:31:00.998 [INFO][5027] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:31:01.035520 containerd[1502]: 2025-02-13 19:31:00.998 [INFO][5027] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7c5c4fa9444c2bc1bfdbdd0bbc73ce86d02274f4d59ff297b58e7ab65a5fb05f" host="localhost" Feb 13 19:31:01.035520 containerd[1502]: 2025-02-13 19:31:01.000 [INFO][5027] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7c5c4fa9444c2bc1bfdbdd0bbc73ce86d02274f4d59ff297b58e7ab65a5fb05f Feb 13 19:31:01.035520 containerd[1502]: 2025-02-13 19:31:01.004 [INFO][5027] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7c5c4fa9444c2bc1bfdbdd0bbc73ce86d02274f4d59ff297b58e7ab65a5fb05f" host="localhost" Feb 13 19:31:01.035520 containerd[1502]: 2025-02-13 19:31:01.010 [INFO][5027] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.7c5c4fa9444c2bc1bfdbdd0bbc73ce86d02274f4d59ff297b58e7ab65a5fb05f" host="localhost" Feb 13 19:31:01.035520 containerd[1502]: 2025-02-13 19:31:01.011 [INFO][5027] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.7c5c4fa9444c2bc1bfdbdd0bbc73ce86d02274f4d59ff297b58e7ab65a5fb05f" host="localhost" Feb 13 19:31:01.035520 containerd[1502]: 2025-02-13 19:31:01.011 [INFO][5027] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
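The endpoint=&v3.WorkloadEndpoint{...} dumps above and below are Go struct values printed verbatim by the CNI plugin, which makes them dense but mechanical to read: the Ports are shown in hexadecimal (0x35 is port 53 for dns and dns-tcp, 0x23c1 is 9153 for the coredns metrics port), and only a handful of fields usually matter when debugging pod networking. A reduced mirror of those fields, populated with the values from the coredns-6f6b679f8f-gvd66 dump; the struct is illustrative and is not the projectcalico.org/v3 API type:

package main

import "fmt"

// endpointSummary keeps just the fields of the v3.WorkloadEndpoint dumps
// that are usually interesting: where the pod runs and how it is wired up.
type endpointSummary struct {
    Namespace     string
    Pod           string
    Node          string
    InterfaceName string   // host-side veth, e.g. calia14dd751e1d
    MAC           string   // filled in once the veth exists
    IPNetworks    []string // always a /32 (or /128) per workload
    Ports         map[string]uint16
}

func main() {
    ep := endpointSummary{
        Namespace:     "kube-system",
        Pod:           "coredns-6f6b679f8f-gvd66",
        Node:          "localhost",
        InterfaceName: "calia14dd751e1d",
        MAC:           "56:59:fa:58:d9:ef",
        IPNetworks:    []string{"192.168.88.129/32"},
        // The dump prints these in hex: 0x35 = 53, 0x23c1 = 9153.
        Ports: map[string]uint16{"dns": 0x35, "dns-tcp": 0x35, "metrics": 0x23c1},
    }
    fmt.Printf("%s/%s on %s via %s (%s) ips=%v ports=%v\n",
        ep.Namespace, ep.Pod, ep.Node, ep.InterfaceName, ep.MAC, ep.IPNetworks, ep.Ports)
}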
Feb 13 19:31:01.035520 containerd[1502]: 2025-02-13 19:31:01.011 [INFO][5027] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="7c5c4fa9444c2bc1bfdbdd0bbc73ce86d02274f4d59ff297b58e7ab65a5fb05f" HandleID="k8s-pod-network.7c5c4fa9444c2bc1bfdbdd0bbc73ce86d02274f4d59ff297b58e7ab65a5fb05f" Workload="localhost-k8s-calico--apiserver--6794f4445b--7cftj-eth0" Feb 13 19:31:01.036198 containerd[1502]: 2025-02-13 19:31:01.014 [INFO][4960] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7c5c4fa9444c2bc1bfdbdd0bbc73ce86d02274f4d59ff297b58e7ab65a5fb05f" Namespace="calico-apiserver" Pod="calico-apiserver-6794f4445b-7cftj" WorkloadEndpoint="localhost-k8s-calico--apiserver--6794f4445b--7cftj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6794f4445b--7cftj-eth0", GenerateName:"calico-apiserver-6794f4445b-", Namespace:"calico-apiserver", SelfLink:"", UID:"9bf3d155-222d-46fc-9867-b55f1df961f7", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 30, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6794f4445b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6794f4445b-7cftj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9801aa0cd7c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:31:01.036198 containerd[1502]: 2025-02-13 19:31:01.014 [INFO][4960] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="7c5c4fa9444c2bc1bfdbdd0bbc73ce86d02274f4d59ff297b58e7ab65a5fb05f" Namespace="calico-apiserver" Pod="calico-apiserver-6794f4445b-7cftj" WorkloadEndpoint="localhost-k8s-calico--apiserver--6794f4445b--7cftj-eth0" Feb 13 19:31:01.036198 containerd[1502]: 2025-02-13 19:31:01.014 [INFO][4960] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9801aa0cd7c ContainerID="7c5c4fa9444c2bc1bfdbdd0bbc73ce86d02274f4d59ff297b58e7ab65a5fb05f" Namespace="calico-apiserver" Pod="calico-apiserver-6794f4445b-7cftj" WorkloadEndpoint="localhost-k8s-calico--apiserver--6794f4445b--7cftj-eth0" Feb 13 19:31:01.036198 containerd[1502]: 2025-02-13 19:31:01.018 [INFO][4960] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7c5c4fa9444c2bc1bfdbdd0bbc73ce86d02274f4d59ff297b58e7ab65a5fb05f" Namespace="calico-apiserver" Pod="calico-apiserver-6794f4445b-7cftj" WorkloadEndpoint="localhost-k8s-calico--apiserver--6794f4445b--7cftj-eth0" Feb 13 19:31:01.036198 containerd[1502]: 2025-02-13 19:31:01.018 [INFO][4960] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="7c5c4fa9444c2bc1bfdbdd0bbc73ce86d02274f4d59ff297b58e7ab65a5fb05f" Namespace="calico-apiserver" Pod="calico-apiserver-6794f4445b-7cftj" WorkloadEndpoint="localhost-k8s-calico--apiserver--6794f4445b--7cftj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6794f4445b--7cftj-eth0", GenerateName:"calico-apiserver-6794f4445b-", Namespace:"calico-apiserver", SelfLink:"", UID:"9bf3d155-222d-46fc-9867-b55f1df961f7", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 30, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6794f4445b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7c5c4fa9444c2bc1bfdbdd0bbc73ce86d02274f4d59ff297b58e7ab65a5fb05f", Pod:"calico-apiserver-6794f4445b-7cftj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9801aa0cd7c", MAC:"52:45:0e:d3:1a:c1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:31:01.036198 containerd[1502]: 2025-02-13 19:31:01.030 [INFO][4960] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7c5c4fa9444c2bc1bfdbdd0bbc73ce86d02274f4d59ff297b58e7ab65a5fb05f" Namespace="calico-apiserver" Pod="calico-apiserver-6794f4445b-7cftj" WorkloadEndpoint="localhost-k8s-calico--apiserver--6794f4445b--7cftj-eth0" Feb 13 19:31:01.055190 containerd[1502]: time="2025-02-13T19:31:01.054813052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-gvd66,Uid:f2d2c41c-272d-4c74-897d-79c94986b647,Namespace:kube-system,Attempt:6,} returns sandbox id \"f4d4de673b9f34c83ac2527ad2b735e74b4f78e6b54f17b92d5ec62ae7764fea\"" Feb 13 19:31:01.056761 kubelet[2600]: E0213 19:31:01.055653 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:31:01.061513 containerd[1502]: time="2025-02-13T19:31:01.061364642Z" level=info msg="CreateContainer within sandbox \"f4d4de673b9f34c83ac2527ad2b735e74b4f78e6b54f17b92d5ec62ae7764fea\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:31:01.062883 containerd[1502]: time="2025-02-13T19:31:01.062660956Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:31:01.062883 containerd[1502]: time="2025-02-13T19:31:01.062718644Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:31:01.062883 containerd[1502]: time="2025-02-13T19:31:01.062732180Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:31:01.062883 containerd[1502]: time="2025-02-13T19:31:01.062831556Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:31:01.081467 systemd[1]: Started cri-containerd-7c5c4fa9444c2bc1bfdbdd0bbc73ce86d02274f4d59ff297b58e7ab65a5fb05f.scope - libcontainer container 7c5c4fa9444c2bc1bfdbdd0bbc73ce86d02274f4d59ff297b58e7ab65a5fb05f. Feb 13 19:31:01.091648 containerd[1502]: time="2025-02-13T19:31:01.091452587Z" level=info msg="CreateContainer within sandbox \"f4d4de673b9f34c83ac2527ad2b735e74b4f78e6b54f17b92d5ec62ae7764fea\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6277e103e8161948ad02e84c77c2de706a37087a9bbdec317978e8f92b598600\"" Feb 13 19:31:01.093327 containerd[1502]: time="2025-02-13T19:31:01.093269317Z" level=info msg="StartContainer for \"6277e103e8161948ad02e84c77c2de706a37087a9bbdec317978e8f92b598600\"" Feb 13 19:31:01.101122 systemd-resolved[1338]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:31:01.125472 systemd[1]: Started cri-containerd-6277e103e8161948ad02e84c77c2de706a37087a9bbdec317978e8f92b598600.scope - libcontainer container 6277e103e8161948ad02e84c77c2de706a37087a9bbdec317978e8f92b598600. Feb 13 19:31:01.135072 systemd-networkd[1430]: cali67e170f85bd: Link UP Feb 13 19:31:01.135632 systemd-networkd[1430]: cali67e170f85bd: Gained carrier Feb 13 19:31:01.154526 containerd[1502]: time="2025-02-13T19:31:01.153144413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6794f4445b-7cftj,Uid:9bf3d155-222d-46fc-9867-b55f1df961f7,Namespace:calico-apiserver,Attempt:6,} returns sandbox id \"7c5c4fa9444c2bc1bfdbdd0bbc73ce86d02274f4d59ff297b58e7ab65a5fb05f\"" Feb 13 19:31:01.154526 containerd[1502]: time="2025-02-13T19:31:01.154432883Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 19:31:01.155002 containerd[1502]: 2025-02-13 19:31:00.720 [INFO][4955] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:31:01.155002 containerd[1502]: 2025-02-13 19:31:00.735 [INFO][4955] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--bxg8d-eth0 coredns-6f6b679f8f- kube-system 402f0fa8-6d39-4a67-b618-1d216e220aea 776 0 2025-02-13 19:30:21 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-bxg8d eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali67e170f85bd [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="6bcdbcbaef537d66d3695e3f70077514eb5e70193840699edd520c67d8692e47" Namespace="kube-system" Pod="coredns-6f6b679f8f-bxg8d" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--bxg8d-" Feb 13 19:31:01.155002 containerd[1502]: 2025-02-13 19:31:00.735 [INFO][4955] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6bcdbcbaef537d66d3695e3f70077514eb5e70193840699edd520c67d8692e47" Namespace="kube-system" Pod="coredns-6f6b679f8f-bxg8d" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--bxg8d-eth0" Feb 13 19:31:01.155002 containerd[1502]: 2025-02-13 19:31:00.843 [INFO][5020] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="6bcdbcbaef537d66d3695e3f70077514eb5e70193840699edd520c67d8692e47" HandleID="k8s-pod-network.6bcdbcbaef537d66d3695e3f70077514eb5e70193840699edd520c67d8692e47" Workload="localhost-k8s-coredns--6f6b679f8f--bxg8d-eth0" Feb 13 19:31:01.155002 containerd[1502]: 2025-02-13 19:31:00.853 [INFO][5020] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6bcdbcbaef537d66d3695e3f70077514eb5e70193840699edd520c67d8692e47" HandleID="k8s-pod-network.6bcdbcbaef537d66d3695e3f70077514eb5e70193840699edd520c67d8692e47" Workload="localhost-k8s-coredns--6f6b679f8f--bxg8d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000264890), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-bxg8d", "timestamp":"2025-02-13 19:31:00.843018802 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:31:01.155002 containerd[1502]: 2025-02-13 19:31:00.853 [INFO][5020] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:31:01.155002 containerd[1502]: 2025-02-13 19:31:01.011 [INFO][5020] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:31:01.155002 containerd[1502]: 2025-02-13 19:31:01.011 [INFO][5020] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:31:01.155002 containerd[1502]: 2025-02-13 19:31:01.086 [INFO][5020] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6bcdbcbaef537d66d3695e3f70077514eb5e70193840699edd520c67d8692e47" host="localhost" Feb 13 19:31:01.155002 containerd[1502]: 2025-02-13 19:31:01.096 [INFO][5020] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:31:01.155002 containerd[1502]: 2025-02-13 19:31:01.104 [INFO][5020] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:31:01.155002 containerd[1502]: 2025-02-13 19:31:01.107 [INFO][5020] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:31:01.155002 containerd[1502]: 2025-02-13 19:31:01.109 [INFO][5020] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:31:01.155002 containerd[1502]: 2025-02-13 19:31:01.109 [INFO][5020] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6bcdbcbaef537d66d3695e3f70077514eb5e70193840699edd520c67d8692e47" host="localhost" Feb 13 19:31:01.155002 containerd[1502]: 2025-02-13 19:31:01.111 [INFO][5020] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6bcdbcbaef537d66d3695e3f70077514eb5e70193840699edd520c67d8692e47 Feb 13 19:31:01.155002 containerd[1502]: 2025-02-13 19:31:01.116 [INFO][5020] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6bcdbcbaef537d66d3695e3f70077514eb5e70193840699edd520c67d8692e47" host="localhost" Feb 13 19:31:01.155002 containerd[1502]: 2025-02-13 19:31:01.122 [INFO][5020] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.6bcdbcbaef537d66d3695e3f70077514eb5e70193840699edd520c67d8692e47" host="localhost" Feb 13 19:31:01.155002 containerd[1502]: 2025-02-13 19:31:01.123 [INFO][5020] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] 
handle="k8s-pod-network.6bcdbcbaef537d66d3695e3f70077514eb5e70193840699edd520c67d8692e47" host="localhost" Feb 13 19:31:01.155002 containerd[1502]: 2025-02-13 19:31:01.123 [INFO][5020] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:31:01.155002 containerd[1502]: 2025-02-13 19:31:01.123 [INFO][5020] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="6bcdbcbaef537d66d3695e3f70077514eb5e70193840699edd520c67d8692e47" HandleID="k8s-pod-network.6bcdbcbaef537d66d3695e3f70077514eb5e70193840699edd520c67d8692e47" Workload="localhost-k8s-coredns--6f6b679f8f--bxg8d-eth0" Feb 13 19:31:01.155741 containerd[1502]: 2025-02-13 19:31:01.131 [INFO][4955] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6bcdbcbaef537d66d3695e3f70077514eb5e70193840699edd520c67d8692e47" Namespace="kube-system" Pod="coredns-6f6b679f8f-bxg8d" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--bxg8d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--bxg8d-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"402f0fa8-6d39-4a67-b618-1d216e220aea", ResourceVersion:"776", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 30, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-bxg8d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali67e170f85bd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:31:01.155741 containerd[1502]: 2025-02-13 19:31:01.131 [INFO][4955] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="6bcdbcbaef537d66d3695e3f70077514eb5e70193840699edd520c67d8692e47" Namespace="kube-system" Pod="coredns-6f6b679f8f-bxg8d" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--bxg8d-eth0" Feb 13 19:31:01.155741 containerd[1502]: 2025-02-13 19:31:01.131 [INFO][4955] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali67e170f85bd ContainerID="6bcdbcbaef537d66d3695e3f70077514eb5e70193840699edd520c67d8692e47" Namespace="kube-system" Pod="coredns-6f6b679f8f-bxg8d" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--bxg8d-eth0" Feb 13 19:31:01.155741 containerd[1502]: 2025-02-13 19:31:01.136 [INFO][4955] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="6bcdbcbaef537d66d3695e3f70077514eb5e70193840699edd520c67d8692e47" Namespace="kube-system" Pod="coredns-6f6b679f8f-bxg8d" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--bxg8d-eth0" Feb 13 19:31:01.155741 containerd[1502]: 2025-02-13 19:31:01.136 [INFO][4955] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6bcdbcbaef537d66d3695e3f70077514eb5e70193840699edd520c67d8692e47" Namespace="kube-system" Pod="coredns-6f6b679f8f-bxg8d" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--bxg8d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--bxg8d-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"402f0fa8-6d39-4a67-b618-1d216e220aea", ResourceVersion:"776", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 30, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6bcdbcbaef537d66d3695e3f70077514eb5e70193840699edd520c67d8692e47", Pod:"coredns-6f6b679f8f-bxg8d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali67e170f85bd", MAC:"aa:b9:f6:5e:30:9d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:31:01.155741 containerd[1502]: 2025-02-13 19:31:01.151 [INFO][4955] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6bcdbcbaef537d66d3695e3f70077514eb5e70193840699edd520c67d8692e47" Namespace="kube-system" Pod="coredns-6f6b679f8f-bxg8d" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--bxg8d-eth0" Feb 13 19:31:01.184062 containerd[1502]: time="2025-02-13T19:31:01.183419290Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:31:01.184062 containerd[1502]: time="2025-02-13T19:31:01.183606410Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:31:01.184062 containerd[1502]: time="2025-02-13T19:31:01.183647529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:31:01.184469 containerd[1502]: time="2025-02-13T19:31:01.184292990Z" level=info msg="StartContainer for \"6277e103e8161948ad02e84c77c2de706a37087a9bbdec317978e8f92b598600\" returns successfully" Feb 13 19:31:01.184868 containerd[1502]: time="2025-02-13T19:31:01.184779323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:31:01.210590 systemd[1]: Started cri-containerd-6bcdbcbaef537d66d3695e3f70077514eb5e70193840699edd520c67d8692e47.scope - libcontainer container 6bcdbcbaef537d66d3695e3f70077514eb5e70193840699edd520c67d8692e47. Feb 13 19:31:01.231262 systemd-resolved[1338]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:31:01.241375 systemd-networkd[1430]: cali0972cc708e3: Link UP Feb 13 19:31:01.242248 systemd-networkd[1430]: cali0972cc708e3: Gained carrier Feb 13 19:31:01.262647 containerd[1502]: 2025-02-13 19:31:00.691 [INFO][4941] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:31:01.262647 containerd[1502]: 2025-02-13 19:31:00.722 [INFO][4941] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--68b8f8cf95--qdbnh-eth0 calico-kube-controllers-68b8f8cf95- calico-system d38faaa4-494c-4c4e-87ff-0a00aa82bf88 784 0 2025-02-13 19:30:27 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:68b8f8cf95 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-68b8f8cf95-qdbnh eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali0972cc708e3 [] []}} ContainerID="c2ad3ce96aec3e39146ef9a12bf120b804b514fb82f9bd97d91f159bedd76a24" Namespace="calico-system" Pod="calico-kube-controllers-68b8f8cf95-qdbnh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--68b8f8cf95--qdbnh-" Feb 13 19:31:01.262647 containerd[1502]: 2025-02-13 19:31:00.722 [INFO][4941] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c2ad3ce96aec3e39146ef9a12bf120b804b514fb82f9bd97d91f159bedd76a24" Namespace="calico-system" Pod="calico-kube-controllers-68b8f8cf95-qdbnh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--68b8f8cf95--qdbnh-eth0" Feb 13 19:31:01.262647 containerd[1502]: 2025-02-13 19:31:00.840 [INFO][5011] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c2ad3ce96aec3e39146ef9a12bf120b804b514fb82f9bd97d91f159bedd76a24" HandleID="k8s-pod-network.c2ad3ce96aec3e39146ef9a12bf120b804b514fb82f9bd97d91f159bedd76a24" Workload="localhost-k8s-calico--kube--controllers--68b8f8cf95--qdbnh-eth0" Feb 13 19:31:01.262647 containerd[1502]: 2025-02-13 19:31:00.853 [INFO][5011] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c2ad3ce96aec3e39146ef9a12bf120b804b514fb82f9bd97d91f159bedd76a24" HandleID="k8s-pod-network.c2ad3ce96aec3e39146ef9a12bf120b804b514fb82f9bd97d91f159bedd76a24" Workload="localhost-k8s-calico--kube--controllers--68b8f8cf95--qdbnh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003604c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-68b8f8cf95-qdbnh", "timestamp":"2025-02-13 19:31:00.840214517 +0000 
UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:31:01.262647 containerd[1502]: 2025-02-13 19:31:00.854 [INFO][5011] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:31:01.262647 containerd[1502]: 2025-02-13 19:31:01.123 [INFO][5011] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:31:01.262647 containerd[1502]: 2025-02-13 19:31:01.123 [INFO][5011] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:31:01.262647 containerd[1502]: 2025-02-13 19:31:01.188 [INFO][5011] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c2ad3ce96aec3e39146ef9a12bf120b804b514fb82f9bd97d91f159bedd76a24" host="localhost" Feb 13 19:31:01.262647 containerd[1502]: 2025-02-13 19:31:01.196 [INFO][5011] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:31:01.262647 containerd[1502]: 2025-02-13 19:31:01.207 [INFO][5011] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:31:01.262647 containerd[1502]: 2025-02-13 19:31:01.210 [INFO][5011] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:31:01.262647 containerd[1502]: 2025-02-13 19:31:01.212 [INFO][5011] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:31:01.262647 containerd[1502]: 2025-02-13 19:31:01.212 [INFO][5011] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c2ad3ce96aec3e39146ef9a12bf120b804b514fb82f9bd97d91f159bedd76a24" host="localhost" Feb 13 19:31:01.262647 containerd[1502]: 2025-02-13 19:31:01.214 [INFO][5011] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c2ad3ce96aec3e39146ef9a12bf120b804b514fb82f9bd97d91f159bedd76a24 Feb 13 19:31:01.262647 containerd[1502]: 2025-02-13 19:31:01.219 [INFO][5011] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c2ad3ce96aec3e39146ef9a12bf120b804b514fb82f9bd97d91f159bedd76a24" host="localhost" Feb 13 19:31:01.262647 containerd[1502]: 2025-02-13 19:31:01.233 [INFO][5011] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.c2ad3ce96aec3e39146ef9a12bf120b804b514fb82f9bd97d91f159bedd76a24" host="localhost" Feb 13 19:31:01.262647 containerd[1502]: 2025-02-13 19:31:01.233 [INFO][5011] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.c2ad3ce96aec3e39146ef9a12bf120b804b514fb82f9bd97d91f159bedd76a24" host="localhost" Feb 13 19:31:01.262647 containerd[1502]: 2025-02-13 19:31:01.233 [INFO][5011] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 19:31:01.262647 containerd[1502]: 2025-02-13 19:31:01.233 [INFO][5011] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="c2ad3ce96aec3e39146ef9a12bf120b804b514fb82f9bd97d91f159bedd76a24" HandleID="k8s-pod-network.c2ad3ce96aec3e39146ef9a12bf120b804b514fb82f9bd97d91f159bedd76a24" Workload="localhost-k8s-calico--kube--controllers--68b8f8cf95--qdbnh-eth0" Feb 13 19:31:01.263257 containerd[1502]: 2025-02-13 19:31:01.236 [INFO][4941] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c2ad3ce96aec3e39146ef9a12bf120b804b514fb82f9bd97d91f159bedd76a24" Namespace="calico-system" Pod="calico-kube-controllers-68b8f8cf95-qdbnh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--68b8f8cf95--qdbnh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--68b8f8cf95--qdbnh-eth0", GenerateName:"calico-kube-controllers-68b8f8cf95-", Namespace:"calico-system", SelfLink:"", UID:"d38faaa4-494c-4c4e-87ff-0a00aa82bf88", ResourceVersion:"784", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 30, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68b8f8cf95", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-68b8f8cf95-qdbnh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0972cc708e3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:31:01.263257 containerd[1502]: 2025-02-13 19:31:01.237 [INFO][4941] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="c2ad3ce96aec3e39146ef9a12bf120b804b514fb82f9bd97d91f159bedd76a24" Namespace="calico-system" Pod="calico-kube-controllers-68b8f8cf95-qdbnh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--68b8f8cf95--qdbnh-eth0" Feb 13 19:31:01.263257 containerd[1502]: 2025-02-13 19:31:01.237 [INFO][4941] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0972cc708e3 ContainerID="c2ad3ce96aec3e39146ef9a12bf120b804b514fb82f9bd97d91f159bedd76a24" Namespace="calico-system" Pod="calico-kube-controllers-68b8f8cf95-qdbnh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--68b8f8cf95--qdbnh-eth0" Feb 13 19:31:01.263257 containerd[1502]: 2025-02-13 19:31:01.241 [INFO][4941] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c2ad3ce96aec3e39146ef9a12bf120b804b514fb82f9bd97d91f159bedd76a24" Namespace="calico-system" Pod="calico-kube-controllers-68b8f8cf95-qdbnh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--68b8f8cf95--qdbnh-eth0" Feb 13 19:31:01.263257 containerd[1502]: 2025-02-13 19:31:01.241 [INFO][4941] cni-plugin/k8s.go 414: Added Mac, interface name, and active container 
ID to endpoint ContainerID="c2ad3ce96aec3e39146ef9a12bf120b804b514fb82f9bd97d91f159bedd76a24" Namespace="calico-system" Pod="calico-kube-controllers-68b8f8cf95-qdbnh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--68b8f8cf95--qdbnh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--68b8f8cf95--qdbnh-eth0", GenerateName:"calico-kube-controllers-68b8f8cf95-", Namespace:"calico-system", SelfLink:"", UID:"d38faaa4-494c-4c4e-87ff-0a00aa82bf88", ResourceVersion:"784", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 30, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68b8f8cf95", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c2ad3ce96aec3e39146ef9a12bf120b804b514fb82f9bd97d91f159bedd76a24", Pod:"calico-kube-controllers-68b8f8cf95-qdbnh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0972cc708e3", MAC:"16:32:09:b7:59:a2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:31:01.263257 containerd[1502]: 2025-02-13 19:31:01.259 [INFO][4941] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c2ad3ce96aec3e39146ef9a12bf120b804b514fb82f9bd97d91f159bedd76a24" Namespace="calico-system" Pod="calico-kube-controllers-68b8f8cf95-qdbnh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--68b8f8cf95--qdbnh-eth0" Feb 13 19:31:01.267353 containerd[1502]: time="2025-02-13T19:31:01.267288971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bxg8d,Uid:402f0fa8-6d39-4a67-b618-1d216e220aea,Namespace:kube-system,Attempt:6,} returns sandbox id \"6bcdbcbaef537d66d3695e3f70077514eb5e70193840699edd520c67d8692e47\"" Feb 13 19:31:01.267990 kubelet[2600]: E0213 19:31:01.267967 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:31:01.270308 containerd[1502]: time="2025-02-13T19:31:01.270279385Z" level=info msg="CreateContainer within sandbox \"6bcdbcbaef537d66d3695e3f70077514eb5e70193840699edd520c67d8692e47\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:31:01.430214 containerd[1502]: time="2025-02-13T19:31:01.429767487Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:31:01.430214 containerd[1502]: time="2025-02-13T19:31:01.429874820Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:31:01.430214 containerd[1502]: time="2025-02-13T19:31:01.429896711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:31:01.430214 containerd[1502]: time="2025-02-13T19:31:01.430063654Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:31:01.450559 systemd[1]: run-containerd-runc-k8s.io-c2ad3ce96aec3e39146ef9a12bf120b804b514fb82f9bd97d91f159bedd76a24-runc.lYObwF.mount: Deactivated successfully. Feb 13 19:31:01.459466 systemd[1]: Started cri-containerd-c2ad3ce96aec3e39146ef9a12bf120b804b514fb82f9bd97d91f159bedd76a24.scope - libcontainer container c2ad3ce96aec3e39146ef9a12bf120b804b514fb82f9bd97d91f159bedd76a24. Feb 13 19:31:01.474946 systemd-resolved[1338]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:31:01.500891 containerd[1502]: time="2025-02-13T19:31:01.500213769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68b8f8cf95-qdbnh,Uid:d38faaa4-494c-4c4e-87ff-0a00aa82bf88,Namespace:calico-system,Attempt:6,} returns sandbox id \"c2ad3ce96aec3e39146ef9a12bf120b804b514fb82f9bd97d91f159bedd76a24\"" Feb 13 19:31:01.547179 kubelet[2600]: I0213 19:31:01.546795 2600 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c049dfb5c4e4f7b160d052644270f741c94db9967ffc69c3eaa01c0f5ec5566f" Feb 13 19:31:01.547297 containerd[1502]: time="2025-02-13T19:31:01.547246936Z" level=info msg="StopPodSandbox for \"c049dfb5c4e4f7b160d052644270f741c94db9967ffc69c3eaa01c0f5ec5566f\"" Feb 13 19:31:01.547479 containerd[1502]: time="2025-02-13T19:31:01.547459736Z" level=info msg="Ensure that sandbox c049dfb5c4e4f7b160d052644270f741c94db9967ffc69c3eaa01c0f5ec5566f in task-service has been cleanup successfully" Feb 13 19:31:01.547744 containerd[1502]: time="2025-02-13T19:31:01.547666674Z" level=info msg="TearDown network for sandbox \"c049dfb5c4e4f7b160d052644270f741c94db9967ffc69c3eaa01c0f5ec5566f\" successfully" Feb 13 19:31:01.547744 containerd[1502]: time="2025-02-13T19:31:01.547684938Z" level=info msg="StopPodSandbox for \"c049dfb5c4e4f7b160d052644270f741c94db9967ffc69c3eaa01c0f5ec5566f\" returns successfully" Feb 13 19:31:01.549240 containerd[1502]: time="2025-02-13T19:31:01.549222214Z" level=info msg="StopPodSandbox for \"ca70552ffab426979d1de2107ddb87e6551aa9adf1dd6f0eb5a66684f19cdda4\"" Feb 13 19:31:01.549681 containerd[1502]: time="2025-02-13T19:31:01.549502521Z" level=info msg="TearDown network for sandbox \"ca70552ffab426979d1de2107ddb87e6551aa9adf1dd6f0eb5a66684f19cdda4\" successfully" Feb 13 19:31:01.549681 containerd[1502]: time="2025-02-13T19:31:01.549547976Z" level=info msg="StopPodSandbox for \"ca70552ffab426979d1de2107ddb87e6551aa9adf1dd6f0eb5a66684f19cdda4\" returns successfully" Feb 13 19:31:01.549968 containerd[1502]: time="2025-02-13T19:31:01.549828512Z" level=info msg="StopPodSandbox for \"cf302ac4983e4b5752dc5e187cea36041dfa162432d3e8affbda52f81e45b173\"" Feb 13 19:31:01.549968 containerd[1502]: time="2025-02-13T19:31:01.549904546Z" level=info msg="TearDown network for sandbox \"cf302ac4983e4b5752dc5e187cea36041dfa162432d3e8affbda52f81e45b173\" successfully" Feb 13 19:31:01.549968 containerd[1502]: time="2025-02-13T19:31:01.549939221Z" level=info msg="StopPodSandbox for 
\"cf302ac4983e4b5752dc5e187cea36041dfa162432d3e8affbda52f81e45b173\" returns successfully" Feb 13 19:31:01.550496 containerd[1502]: time="2025-02-13T19:31:01.550384938Z" level=info msg="StopPodSandbox for \"8a68fb01407d9c834800a26cc46bd4437b62dc056b0e1350c4907a96b87c511d\"" Feb 13 19:31:01.550496 containerd[1502]: time="2025-02-13T19:31:01.550468795Z" level=info msg="TearDown network for sandbox \"8a68fb01407d9c834800a26cc46bd4437b62dc056b0e1350c4907a96b87c511d\" successfully" Feb 13 19:31:01.550496 containerd[1502]: time="2025-02-13T19:31:01.550477842Z" level=info msg="StopPodSandbox for \"8a68fb01407d9c834800a26cc46bd4437b62dc056b0e1350c4907a96b87c511d\" returns successfully" Feb 13 19:31:01.550784 systemd[1]: run-netns-cni\x2d51d5cd84\x2d0390\x2d5112\x2d2341\x2ded90168a15f3.mount: Deactivated successfully. Feb 13 19:31:01.550888 containerd[1502]: time="2025-02-13T19:31:01.550856273Z" level=info msg="StopPodSandbox for \"06b76013e050990e084073338277aecc1ce3e9a19060a462af91ada291c27c21\"" Feb 13 19:31:01.551042 containerd[1502]: time="2025-02-13T19:31:01.550969485Z" level=info msg="TearDown network for sandbox \"06b76013e050990e084073338277aecc1ce3e9a19060a462af91ada291c27c21\" successfully" Feb 13 19:31:01.551042 containerd[1502]: time="2025-02-13T19:31:01.550980335Z" level=info msg="StopPodSandbox for \"06b76013e050990e084073338277aecc1ce3e9a19060a462af91ada291c27c21\" returns successfully" Feb 13 19:31:01.551410 containerd[1502]: time="2025-02-13T19:31:01.551383102Z" level=info msg="StopPodSandbox for \"4d57794a877b0b6953752ccedec2bd85d68373c09318c76ac2c537449d27f46f\"" Feb 13 19:31:01.551801 containerd[1502]: time="2025-02-13T19:31:01.551603395Z" level=info msg="TearDown network for sandbox \"4d57794a877b0b6953752ccedec2bd85d68373c09318c76ac2c537449d27f46f\" successfully" Feb 13 19:31:01.551891 containerd[1502]: time="2025-02-13T19:31:01.551877259Z" level=info msg="StopPodSandbox for \"4d57794a877b0b6953752ccedec2bd85d68373c09318c76ac2c537449d27f46f\" returns successfully" Feb 13 19:31:01.552502 containerd[1502]: time="2025-02-13T19:31:01.552480141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cfdf6,Uid:61978848-76c7-4692-bab4-3c8c891d5468,Namespace:calico-system,Attempt:6,}" Feb 13 19:31:01.553404 kubelet[2600]: E0213 19:31:01.553379 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:31:01.558812 kubelet[2600]: E0213 19:31:01.558786 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:31:01.683770 systemd-networkd[1430]: cali76b969b1aae: Link UP Feb 13 19:31:01.683988 systemd-networkd[1430]: cali76b969b1aae: Gained carrier Feb 13 19:31:01.716179 containerd[1502]: 2025-02-13 19:31:00.724 [INFO][4976] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:31:01.716179 containerd[1502]: 2025-02-13 19:31:00.747 [INFO][4976] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6794f4445b--q9q8b-eth0 calico-apiserver-6794f4445b- calico-apiserver 9eba37d3-14ec-4521-9302-789cbdb496aa 781 0 2025-02-13 19:30:27 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6794f4445b projectcalico.org/namespace:calico-apiserver 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6794f4445b-q9q8b eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali76b969b1aae [] []}} ContainerID="1d8c2ab6c3440e2cbf869f466225d7cacaec94e9da57b05dfc4e32d643486a0c" Namespace="calico-apiserver" Pod="calico-apiserver-6794f4445b-q9q8b" WorkloadEndpoint="localhost-k8s-calico--apiserver--6794f4445b--q9q8b-" Feb 13 19:31:01.716179 containerd[1502]: 2025-02-13 19:31:00.747 [INFO][4976] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1d8c2ab6c3440e2cbf869f466225d7cacaec94e9da57b05dfc4e32d643486a0c" Namespace="calico-apiserver" Pod="calico-apiserver-6794f4445b-q9q8b" WorkloadEndpoint="localhost-k8s-calico--apiserver--6794f4445b--q9q8b-eth0" Feb 13 19:31:01.716179 containerd[1502]: 2025-02-13 19:31:00.834 [INFO][5016] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1d8c2ab6c3440e2cbf869f466225d7cacaec94e9da57b05dfc4e32d643486a0c" HandleID="k8s-pod-network.1d8c2ab6c3440e2cbf869f466225d7cacaec94e9da57b05dfc4e32d643486a0c" Workload="localhost-k8s-calico--apiserver--6794f4445b--q9q8b-eth0" Feb 13 19:31:01.716179 containerd[1502]: 2025-02-13 19:31:00.856 [INFO][5016] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1d8c2ab6c3440e2cbf869f466225d7cacaec94e9da57b05dfc4e32d643486a0c" HandleID="k8s-pod-network.1d8c2ab6c3440e2cbf869f466225d7cacaec94e9da57b05dfc4e32d643486a0c" Workload="localhost-k8s-calico--apiserver--6794f4445b--q9q8b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f4ea0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6794f4445b-q9q8b", "timestamp":"2025-02-13 19:31:00.834178496 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:31:01.716179 containerd[1502]: 2025-02-13 19:31:00.857 [INFO][5016] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:31:01.716179 containerd[1502]: 2025-02-13 19:31:01.233 [INFO][5016] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
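The mount unit named run-netns-cni\x2d51d5cd84\x2d0390\x2d5112\x2d2341\x2ded90168a15f3.mount a few entries above is not garbled: systemd escapes characters that are not valid in unit names, so every "-" inside the CNI network-namespace name becomes "\x2d" while "/" path separators become "-". That unit therefore corresponds to the mount at /run/netns/cni-51d5cd84-0390-5112-2341-ed90168a15f3. systemd-escape --unescape does the reverse mapping; below is a small Go sketch that handles names of this shape (not a full reimplementation of systemd's escaping rules):

package main

import (
    "fmt"
    "regexp"
    "strconv"
    "strings"
)

var hexEscape = regexp.MustCompile(`\\x([0-9a-fA-F]{2})`)

// unescapeMountUnit reverses systemd's unit-name escaping for a mount unit:
// "\xNN" sequences become the escaped byte, and the remaining "-" separators
// turn back into "/" path components.
func unescapeMountUnit(name string) string {
    name = strings.TrimSuffix(name, ".mount")
    parts := strings.Split(name, "-")
    for i, p := range parts {
        parts[i] = hexEscape.ReplaceAllStringFunc(p, func(m string) string {
            b, _ := strconv.ParseUint(m[2:], 16, 8)
            return string(rune(b))
        })
    }
    return "/" + strings.Join(parts, "/")
}

func main() {
    fmt.Println(unescapeMountUnit(`run-netns-cni\x2d51d5cd84\x2d0390\x2d5112\x2d2341\x2ded90168a15f3.mount`))
    // Output: /run/netns/cni-51d5cd84-0390-5112-2341-ed90168a15f3
}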
Feb 13 19:31:01.716179 containerd[1502]: 2025-02-13 19:31:01.233 [INFO][5016] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:31:01.716179 containerd[1502]: 2025-02-13 19:31:01.288 [INFO][5016] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1d8c2ab6c3440e2cbf869f466225d7cacaec94e9da57b05dfc4e32d643486a0c" host="localhost" Feb 13 19:31:01.716179 containerd[1502]: 2025-02-13 19:31:01.452 [INFO][5016] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:31:01.716179 containerd[1502]: 2025-02-13 19:31:01.456 [INFO][5016] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:31:01.716179 containerd[1502]: 2025-02-13 19:31:01.458 [INFO][5016] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:31:01.716179 containerd[1502]: 2025-02-13 19:31:01.460 [INFO][5016] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:31:01.716179 containerd[1502]: 2025-02-13 19:31:01.460 [INFO][5016] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1d8c2ab6c3440e2cbf869f466225d7cacaec94e9da57b05dfc4e32d643486a0c" host="localhost" Feb 13 19:31:01.716179 containerd[1502]: 2025-02-13 19:31:01.462 [INFO][5016] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1d8c2ab6c3440e2cbf869f466225d7cacaec94e9da57b05dfc4e32d643486a0c Feb 13 19:31:01.716179 containerd[1502]: 2025-02-13 19:31:01.480 [INFO][5016] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1d8c2ab6c3440e2cbf869f466225d7cacaec94e9da57b05dfc4e32d643486a0c" host="localhost" Feb 13 19:31:01.716179 containerd[1502]: 2025-02-13 19:31:01.678 [INFO][5016] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.1d8c2ab6c3440e2cbf869f466225d7cacaec94e9da57b05dfc4e32d643486a0c" host="localhost" Feb 13 19:31:01.716179 containerd[1502]: 2025-02-13 19:31:01.678 [INFO][5016] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.1d8c2ab6c3440e2cbf869f466225d7cacaec94e9da57b05dfc4e32d643486a0c" host="localhost" Feb 13 19:31:01.716179 containerd[1502]: 2025-02-13 19:31:01.678 [INFO][5016] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
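The IPAM lines above show Calico's allocation pattern for this pod: acquire the host-wide IPAM lock, confirm the host's affinity for the 192.168.88.128/26 block, claim the next free address (192.168.88.133 here), write the block back, and release the lock. A /26 holds 2^(32-26) = 64 addresses (.128 through .191), so successive pods on this node simply continue the sequence. Below is a minimal, standard-library-only Go sketch of that "next free address in an affine block" idea; it is an illustration, not Calico's actual IPAM code, and the pre-filled addresses are an assumption made to mirror the cluster state at this point in the log.

package main

import (
	"fmt"
	"net/netip"
)

// block is a toy stand-in for an affine IPAM block: a CIDR owned by this
// host plus the set of addresses already handed out.
type block struct {
	cidr      netip.Prefix
	allocated map[netip.Addr]bool
}

// assign returns the lowest free address in the block and marks it as used.
func (b *block) assign() (netip.Addr, bool) {
	for a := b.cidr.Addr(); b.cidr.Contains(a); a = a.Next() {
		if !b.allocated[a] {
			b.allocated[a] = true
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	b := &block{
		cidr:      netip.MustParsePrefix("192.168.88.128/26"),
		allocated: map[netip.Addr]bool{},
	}
	// Assumption: the five lowest addresses are already held by earlier
	// endpoints on this node, so the next handout matches the log.
	for i := 0; i < 5; i++ {
		b.assign()
	}
	next, _ := b.assign()
	fmt.Println(next) // 192.168.88.133, the address claimed for the apiserver pod
}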
Feb 13 19:31:01.716179 containerd[1502]: 2025-02-13 19:31:01.678 [INFO][5016] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="1d8c2ab6c3440e2cbf869f466225d7cacaec94e9da57b05dfc4e32d643486a0c" HandleID="k8s-pod-network.1d8c2ab6c3440e2cbf869f466225d7cacaec94e9da57b05dfc4e32d643486a0c" Workload="localhost-k8s-calico--apiserver--6794f4445b--q9q8b-eth0" Feb 13 19:31:01.716953 containerd[1502]: 2025-02-13 19:31:01.681 [INFO][4976] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1d8c2ab6c3440e2cbf869f466225d7cacaec94e9da57b05dfc4e32d643486a0c" Namespace="calico-apiserver" Pod="calico-apiserver-6794f4445b-q9q8b" WorkloadEndpoint="localhost-k8s-calico--apiserver--6794f4445b--q9q8b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6794f4445b--q9q8b-eth0", GenerateName:"calico-apiserver-6794f4445b-", Namespace:"calico-apiserver", SelfLink:"", UID:"9eba37d3-14ec-4521-9302-789cbdb496aa", ResourceVersion:"781", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 30, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6794f4445b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6794f4445b-q9q8b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali76b969b1aae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:31:01.716953 containerd[1502]: 2025-02-13 19:31:01.681 [INFO][4976] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="1d8c2ab6c3440e2cbf869f466225d7cacaec94e9da57b05dfc4e32d643486a0c" Namespace="calico-apiserver" Pod="calico-apiserver-6794f4445b-q9q8b" WorkloadEndpoint="localhost-k8s-calico--apiserver--6794f4445b--q9q8b-eth0" Feb 13 19:31:01.716953 containerd[1502]: 2025-02-13 19:31:01.682 [INFO][4976] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali76b969b1aae ContainerID="1d8c2ab6c3440e2cbf869f466225d7cacaec94e9da57b05dfc4e32d643486a0c" Namespace="calico-apiserver" Pod="calico-apiserver-6794f4445b-q9q8b" WorkloadEndpoint="localhost-k8s-calico--apiserver--6794f4445b--q9q8b-eth0" Feb 13 19:31:01.716953 containerd[1502]: 2025-02-13 19:31:01.683 [INFO][4976] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1d8c2ab6c3440e2cbf869f466225d7cacaec94e9da57b05dfc4e32d643486a0c" Namespace="calico-apiserver" Pod="calico-apiserver-6794f4445b-q9q8b" WorkloadEndpoint="localhost-k8s-calico--apiserver--6794f4445b--q9q8b-eth0" Feb 13 19:31:01.716953 containerd[1502]: 2025-02-13 19:31:01.684 [INFO][4976] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="1d8c2ab6c3440e2cbf869f466225d7cacaec94e9da57b05dfc4e32d643486a0c" Namespace="calico-apiserver" Pod="calico-apiserver-6794f4445b-q9q8b" WorkloadEndpoint="localhost-k8s-calico--apiserver--6794f4445b--q9q8b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6794f4445b--q9q8b-eth0", GenerateName:"calico-apiserver-6794f4445b-", Namespace:"calico-apiserver", SelfLink:"", UID:"9eba37d3-14ec-4521-9302-789cbdb496aa", ResourceVersion:"781", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 30, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6794f4445b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1d8c2ab6c3440e2cbf869f466225d7cacaec94e9da57b05dfc4e32d643486a0c", Pod:"calico-apiserver-6794f4445b-q9q8b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali76b969b1aae", MAC:"be:ba:15:29:58:37", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:31:01.716953 containerd[1502]: 2025-02-13 19:31:01.713 [INFO][4976] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1d8c2ab6c3440e2cbf869f466225d7cacaec94e9da57b05dfc4e32d643486a0c" Namespace="calico-apiserver" Pod="calico-apiserver-6794f4445b-q9q8b" WorkloadEndpoint="localhost-k8s-calico--apiserver--6794f4445b--q9q8b-eth0" Feb 13 19:31:01.886016 kubelet[2600]: I0213 19:31:01.885957 2600 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-twt5n" podStartSLOduration=4.483032582 podStartE2EDuration="34.885939516s" podCreationTimestamp="2025-02-13 19:30:27 +0000 UTC" firstStartedPulling="2025-02-13 19:30:28.652958303 +0000 UTC m=+12.930989619" lastFinishedPulling="2025-02-13 19:30:59.055865237 +0000 UTC m=+43.333896553" observedRunningTime="2025-02-13 19:31:01.854497418 +0000 UTC m=+46.132528734" watchObservedRunningTime="2025-02-13 19:31:01.885939516 +0000 UTC m=+46.163970832" Feb 13 19:31:01.896828 containerd[1502]: time="2025-02-13T19:31:01.896648380Z" level=info msg="CreateContainer within sandbox \"6bcdbcbaef537d66d3695e3f70077514eb5e70193840699edd520c67d8692e47\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d3c913a497755733ff569a6fec057491bd243e0930a2d5b648c4a7f991e443ec\"" Feb 13 19:31:01.899132 containerd[1502]: time="2025-02-13T19:31:01.897803548Z" level=info msg="StartContainer for \"d3c913a497755733ff569a6fec057491bd243e0930a2d5b648c4a7f991e443ec\"" Feb 13 19:31:01.900477 containerd[1502]: time="2025-02-13T19:31:01.900159672Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:31:01.903490 containerd[1502]: time="2025-02-13T19:31:01.903182166Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:31:01.903490 containerd[1502]: time="2025-02-13T19:31:01.903238111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:31:01.903490 containerd[1502]: time="2025-02-13T19:31:01.903394314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:31:01.949541 systemd[1]: Started cri-containerd-1d8c2ab6c3440e2cbf869f466225d7cacaec94e9da57b05dfc4e32d643486a0c.scope - libcontainer container 1d8c2ab6c3440e2cbf869f466225d7cacaec94e9da57b05dfc4e32d643486a0c. Feb 13 19:31:01.953515 systemd[1]: Started cri-containerd-d3c913a497755733ff569a6fec057491bd243e0930a2d5b648c4a7f991e443ec.scope - libcontainer container d3c913a497755733ff569a6fec057491bd243e0930a2d5b648c4a7f991e443ec. Feb 13 19:31:01.983022 systemd[1]: Started sshd@12-10.0.0.116:22-10.0.0.1:53226.service - OpenSSH per-connection server daemon (10.0.0.1:53226). Feb 13 19:31:02.003452 systemd-resolved[1338]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:31:02.072041 sshd[5498]: Accepted publickey for core from 10.0.0.1 port 53226 ssh2: RSA SHA256:ENn9hOvI2hLUXcV6iHA8gc9Z4CTEPvGDMkoVtxIuYbg Feb 13 19:31:02.075109 containerd[1502]: time="2025-02-13T19:31:02.073763125Z" level=info msg="StartContainer for \"d3c913a497755733ff569a6fec057491bd243e0930a2d5b648c4a7f991e443ec\" returns successfully" Feb 13 19:31:02.074949 sshd-session[5498]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:31:02.088697 systemd-logind[1485]: New session 13 of user core. Feb 13 19:31:02.092571 systemd[1]: Started session-13.scope - Session 13 of User core. 
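The pod_startup_latency_tracker entry above for calico-node-twt5n can be reproduced from its own timestamps: the E2E duration is the observed-running time minus the pod creation time, and the SLO duration additionally excludes the image-pull window (firstStartedPulling to lastFinishedPulling). The Go sketch below redoes that arithmetic with values copied from the log; it reconstructs the reported numbers, it is not kubelet's actual bookkeeping code.

package main

import (
	"fmt"
	"time"
)

// mustParse parses the timestamp format printed in the log, e.g.
// "2025-02-13 19:30:27 +0000 UTC" (fractional seconds are accepted too).
func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-02-13 19:30:27 +0000 UTC")
	firstPull := mustParse("2025-02-13 19:30:28.652958303 +0000 UTC")
	lastPull := mustParse("2025-02-13 19:30:59.055865237 +0000 UTC")
	observed := mustParse("2025-02-13 19:31:01.885939516 +0000 UTC")

	e2e := observed.Sub(created)         // 34.885939516s = podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // 4.483032582s  = podStartSLOduration
	fmt.Println(e2e, slo)
}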
Feb 13 19:31:02.112002 containerd[1502]: time="2025-02-13T19:31:02.111960149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6794f4445b-q9q8b,Uid:9eba37d3-14ec-4521-9302-789cbdb496aa,Namespace:calico-apiserver,Attempt:7,} returns sandbox id \"1d8c2ab6c3440e2cbf869f466225d7cacaec94e9da57b05dfc4e32d643486a0c\"" Feb 13 19:31:02.171500 systemd-networkd[1430]: cali04f8b77fe2d: Link UP Feb 13 19:31:02.171713 systemd-networkd[1430]: cali04f8b77fe2d: Gained carrier Feb 13 19:31:02.180307 kubelet[2600]: I0213 19:31:02.180244 2600 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-gvd66" podStartSLOduration=41.180224235 podStartE2EDuration="41.180224235s" podCreationTimestamp="2025-02-13 19:30:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:31:01.886360837 +0000 UTC m=+46.164392153" watchObservedRunningTime="2025-02-13 19:31:02.180224235 +0000 UTC m=+46.458255551" Feb 13 19:31:02.186356 containerd[1502]: 2025-02-13 19:31:01.952 [INFO][5391] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:31:02.186356 containerd[1502]: 2025-02-13 19:31:01.969 [INFO][5391] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--cfdf6-eth0 csi-node-driver- calico-system 61978848-76c7-4692-bab4-3c8c891d5468 639 0 2025-02-13 19:30:27 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-cfdf6 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali04f8b77fe2d [] []}} ContainerID="7eb003001418bf70a6fe9d98a67a0fd67c05cf544947504193fec55d79282bf7" Namespace="calico-system" Pod="csi-node-driver-cfdf6" WorkloadEndpoint="localhost-k8s-csi--node--driver--cfdf6-" Feb 13 19:31:02.186356 containerd[1502]: 2025-02-13 19:31:01.969 [INFO][5391] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7eb003001418bf70a6fe9d98a67a0fd67c05cf544947504193fec55d79282bf7" Namespace="calico-system" Pod="csi-node-driver-cfdf6" WorkloadEndpoint="localhost-k8s-csi--node--driver--cfdf6-eth0" Feb 13 19:31:02.186356 containerd[1502]: 2025-02-13 19:31:02.114 [INFO][5505] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7eb003001418bf70a6fe9d98a67a0fd67c05cf544947504193fec55d79282bf7" HandleID="k8s-pod-network.7eb003001418bf70a6fe9d98a67a0fd67c05cf544947504193fec55d79282bf7" Workload="localhost-k8s-csi--node--driver--cfdf6-eth0" Feb 13 19:31:02.186356 containerd[1502]: 2025-02-13 19:31:02.125 [INFO][5505] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7eb003001418bf70a6fe9d98a67a0fd67c05cf544947504193fec55d79282bf7" HandleID="k8s-pod-network.7eb003001418bf70a6fe9d98a67a0fd67c05cf544947504193fec55d79282bf7" Workload="localhost-k8s-csi--node--driver--cfdf6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000498c20), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-cfdf6", "timestamp":"2025-02-13 19:31:02.113628671 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:31:02.186356 containerd[1502]: 2025-02-13 19:31:02.125 [INFO][5505] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:31:02.186356 containerd[1502]: 2025-02-13 19:31:02.125 [INFO][5505] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:31:02.186356 containerd[1502]: 2025-02-13 19:31:02.125 [INFO][5505] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:31:02.186356 containerd[1502]: 2025-02-13 19:31:02.127 [INFO][5505] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7eb003001418bf70a6fe9d98a67a0fd67c05cf544947504193fec55d79282bf7" host="localhost" Feb 13 19:31:02.186356 containerd[1502]: 2025-02-13 19:31:02.131 [INFO][5505] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:31:02.186356 containerd[1502]: 2025-02-13 19:31:02.139 [INFO][5505] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:31:02.186356 containerd[1502]: 2025-02-13 19:31:02.141 [INFO][5505] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:31:02.186356 containerd[1502]: 2025-02-13 19:31:02.144 [INFO][5505] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:31:02.186356 containerd[1502]: 2025-02-13 19:31:02.144 [INFO][5505] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7eb003001418bf70a6fe9d98a67a0fd67c05cf544947504193fec55d79282bf7" host="localhost" Feb 13 19:31:02.186356 containerd[1502]: 2025-02-13 19:31:02.146 [INFO][5505] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7eb003001418bf70a6fe9d98a67a0fd67c05cf544947504193fec55d79282bf7 Feb 13 19:31:02.186356 containerd[1502]: 2025-02-13 19:31:02.154 [INFO][5505] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7eb003001418bf70a6fe9d98a67a0fd67c05cf544947504193fec55d79282bf7" host="localhost" Feb 13 19:31:02.186356 containerd[1502]: 2025-02-13 19:31:02.162 [INFO][5505] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.7eb003001418bf70a6fe9d98a67a0fd67c05cf544947504193fec55d79282bf7" host="localhost" Feb 13 19:31:02.186356 containerd[1502]: 2025-02-13 19:31:02.162 [INFO][5505] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.7eb003001418bf70a6fe9d98a67a0fd67c05cf544947504193fec55d79282bf7" host="localhost" Feb 13 19:31:02.186356 containerd[1502]: 2025-02-13 19:31:02.162 [INFO][5505] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 19:31:02.186356 containerd[1502]: 2025-02-13 19:31:02.162 [INFO][5505] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="7eb003001418bf70a6fe9d98a67a0fd67c05cf544947504193fec55d79282bf7" HandleID="k8s-pod-network.7eb003001418bf70a6fe9d98a67a0fd67c05cf544947504193fec55d79282bf7" Workload="localhost-k8s-csi--node--driver--cfdf6-eth0" Feb 13 19:31:02.187051 containerd[1502]: 2025-02-13 19:31:02.168 [INFO][5391] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7eb003001418bf70a6fe9d98a67a0fd67c05cf544947504193fec55d79282bf7" Namespace="calico-system" Pod="csi-node-driver-cfdf6" WorkloadEndpoint="localhost-k8s-csi--node--driver--cfdf6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--cfdf6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"61978848-76c7-4692-bab4-3c8c891d5468", ResourceVersion:"639", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 30, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-cfdf6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali04f8b77fe2d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:31:02.187051 containerd[1502]: 2025-02-13 19:31:02.168 [INFO][5391] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="7eb003001418bf70a6fe9d98a67a0fd67c05cf544947504193fec55d79282bf7" Namespace="calico-system" Pod="csi-node-driver-cfdf6" WorkloadEndpoint="localhost-k8s-csi--node--driver--cfdf6-eth0" Feb 13 19:31:02.187051 containerd[1502]: 2025-02-13 19:31:02.168 [INFO][5391] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali04f8b77fe2d ContainerID="7eb003001418bf70a6fe9d98a67a0fd67c05cf544947504193fec55d79282bf7" Namespace="calico-system" Pod="csi-node-driver-cfdf6" WorkloadEndpoint="localhost-k8s-csi--node--driver--cfdf6-eth0" Feb 13 19:31:02.187051 containerd[1502]: 2025-02-13 19:31:02.171 [INFO][5391] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7eb003001418bf70a6fe9d98a67a0fd67c05cf544947504193fec55d79282bf7" Namespace="calico-system" Pod="csi-node-driver-cfdf6" WorkloadEndpoint="localhost-k8s-csi--node--driver--cfdf6-eth0" Feb 13 19:31:02.187051 containerd[1502]: 2025-02-13 19:31:02.171 [INFO][5391] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7eb003001418bf70a6fe9d98a67a0fd67c05cf544947504193fec55d79282bf7" Namespace="calico-system" Pod="csi-node-driver-cfdf6" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--cfdf6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--cfdf6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"61978848-76c7-4692-bab4-3c8c891d5468", ResourceVersion:"639", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 30, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7eb003001418bf70a6fe9d98a67a0fd67c05cf544947504193fec55d79282bf7", Pod:"csi-node-driver-cfdf6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali04f8b77fe2d", MAC:"32:c8:86:1e:7e:8e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:31:02.187051 containerd[1502]: 2025-02-13 19:31:02.180 [INFO][5391] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7eb003001418bf70a6fe9d98a67a0fd67c05cf544947504193fec55d79282bf7" Namespace="calico-system" Pod="csi-node-driver-cfdf6" WorkloadEndpoint="localhost-k8s-csi--node--driver--cfdf6-eth0" Feb 13 19:31:02.188417 systemd-networkd[1430]: calia14dd751e1d: Gained IPv6LL Feb 13 19:31:02.219808 containerd[1502]: time="2025-02-13T19:31:02.219533717Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:31:02.219808 containerd[1502]: time="2025-02-13T19:31:02.219604841Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:31:02.219808 containerd[1502]: time="2025-02-13T19:31:02.219615731Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:31:02.219808 containerd[1502]: time="2025-02-13T19:31:02.219714536Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:31:02.243599 systemd[1]: Started cri-containerd-7eb003001418bf70a6fe9d98a67a0fd67c05cf544947504193fec55d79282bf7.scope - libcontainer container 7eb003001418bf70a6fe9d98a67a0fd67c05cf544947504193fec55d79282bf7. 
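The WorkloadEndpoint names in these entries follow a visible pattern: <node>-k8s-<pod name with each "-" doubled>-<interface>, which is how pod csi-node-driver-cfdf6 on node localhost becomes localhost-k8s-csi--node--driver--cfdf6-eth0. The Go snippet below reconstructs that naming purely from the strings seen in the log; it is inferred from those names, not taken from Calico's source.

package main

import (
	"fmt"
	"strings"
)

// endpointName rebuilds the workload endpoint name pattern observed in the
// log: dashes inside the pod name are doubled so the fixed separators stay
// unambiguous.
func endpointName(node, pod, iface string) string {
	return node + "-k8s-" + strings.ReplaceAll(pod, "-", "--") + "-" + iface
}

func main() {
	fmt.Println(endpointName("localhost", "csi-node-driver-cfdf6", "eth0"))
	// localhost-k8s-csi--node--driver--cfdf6-eth0
	fmt.Println(endpointName("localhost", "calico-apiserver-6794f4445b-q9q8b", "eth0"))
	// localhost-k8s-calico--apiserver--6794f4445b--q9q8b-eth0
}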
Feb 13 19:31:02.252667 systemd-networkd[1430]: cali67e170f85bd: Gained IPv6LL Feb 13 19:31:02.274364 sshd[5571]: Connection closed by 10.0.0.1 port 53226 Feb 13 19:31:02.274932 sshd-session[5498]: pam_unix(sshd:session): session closed for user core Feb 13 19:31:02.276059 systemd-resolved[1338]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:31:02.279075 systemd[1]: sshd@12-10.0.0.116:22-10.0.0.1:53226.service: Deactivated successfully. Feb 13 19:31:02.280420 kernel: bpftool[5646]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 19:31:02.283217 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 19:31:02.285402 systemd-logind[1485]: Session 13 logged out. Waiting for processes to exit. Feb 13 19:31:02.286766 systemd-logind[1485]: Removed session 13. Feb 13 19:31:02.289783 containerd[1502]: time="2025-02-13T19:31:02.289749277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cfdf6,Uid:61978848-76c7-4692-bab4-3c8c891d5468,Namespace:calico-system,Attempt:6,} returns sandbox id \"7eb003001418bf70a6fe9d98a67a0fd67c05cf544947504193fec55d79282bf7\"" Feb 13 19:31:02.380510 systemd-networkd[1430]: cali9801aa0cd7c: Gained IPv6LL Feb 13 19:31:02.444469 systemd-networkd[1430]: cali0972cc708e3: Gained IPv6LL Feb 13 19:31:02.521203 systemd-networkd[1430]: vxlan.calico: Link UP Feb 13 19:31:02.521214 systemd-networkd[1430]: vxlan.calico: Gained carrier Feb 13 19:31:02.568962 kubelet[2600]: E0213 19:31:02.568923 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:31:02.569475 kubelet[2600]: E0213 19:31:02.569421 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:31:02.569717 kubelet[2600]: E0213 19:31:02.569678 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:31:02.597935 kubelet[2600]: I0213 19:31:02.595438 2600 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-bxg8d" podStartSLOduration=41.595417792 podStartE2EDuration="41.595417792s" podCreationTimestamp="2025-02-13 19:30:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:31:02.583690597 +0000 UTC m=+46.861721913" watchObservedRunningTime="2025-02-13 19:31:02.595417792 +0000 UTC m=+46.873449108" Feb 13 19:31:02.599935 systemd[1]: run-containerd-runc-k8s.io-db0145061e61c003803620fc37e74fcc83081188e6f8256df598803f5685625f-runc.gnzshL.mount: Deactivated successfully. 
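The recurring kubelet dns.go:153 error is informational rather than fatal: the effective resolv.conf listed more nameservers than the resolver limit of three, so kubelet kept only the first three (1.1.1.1, 1.0.0.1, 8.8.8.8 here) and dropped the rest. The sketch below applies the same truncation to a node's /etc/resolv.conf; the path and the limit of three are stated assumptions about a typical setup, not output from this machine.

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// maxNameservers mirrors the classic resolver limit behind the
// "Nameserver limits exceeded" warning (assumed to be 3, matching the three
// servers kept in the log).
const maxNameservers = 3

func main() {
	f, err := os.Open("/etc/resolv.conf") // assumed location on the node
	if err != nil {
		panic(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("nameserver limit exceeded, keeping %v, dropping %v\n",
			servers[:maxNameservers], servers[maxNameservers:])
	} else {
		fmt.Println("nameserver count within limit:", servers)
	}
}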
Feb 13 19:31:03.020536 systemd-networkd[1430]: cali76b969b1aae: Gained IPv6LL Feb 13 19:31:03.525975 containerd[1502]: time="2025-02-13T19:31:03.525912416Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:31:03.526859 containerd[1502]: time="2025-02-13T19:31:03.526819659Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Feb 13 19:31:03.528483 containerd[1502]: time="2025-02-13T19:31:03.528454267Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:31:03.530889 containerd[1502]: time="2025-02-13T19:31:03.530853351Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:31:03.531534 containerd[1502]: time="2025-02-13T19:31:03.531491970Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 2.377032557s" Feb 13 19:31:03.531595 containerd[1502]: time="2025-02-13T19:31:03.531537044Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Feb 13 19:31:03.532812 containerd[1502]: time="2025-02-13T19:31:03.532772503Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Feb 13 19:31:03.535380 containerd[1502]: time="2025-02-13T19:31:03.535345904Z" level=info msg="CreateContainer within sandbox \"7c5c4fa9444c2bc1bfdbdd0bbc73ce86d02274f4d59ff297b58e7ab65a5fb05f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 19:31:03.553432 containerd[1502]: time="2025-02-13T19:31:03.553389343Z" level=info msg="CreateContainer within sandbox \"7c5c4fa9444c2bc1bfdbdd0bbc73ce86d02274f4d59ff297b58e7ab65a5fb05f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5301c2fe1066223555e5c0b1fa5584c2b029b8f2d7b4c9245ebef30352f8b33f\"" Feb 13 19:31:03.555174 containerd[1502]: time="2025-02-13T19:31:03.553931581Z" level=info msg="StartContainer for \"5301c2fe1066223555e5c0b1fa5584c2b029b8f2d7b4c9245ebef30352f8b33f\"" Feb 13 19:31:03.577669 kubelet[2600]: E0213 19:31:03.577626 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:31:03.578146 kubelet[2600]: E0213 19:31:03.577962 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:31:03.592452 systemd[1]: Started cri-containerd-5301c2fe1066223555e5c0b1fa5584c2b029b8f2d7b4c9245ebef30352f8b33f.scope - libcontainer container 5301c2fe1066223555e5c0b1fa5584c2b029b8f2d7b4c9245ebef30352f8b33f. 
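Each "Pulled image" entry above records the same image under three names: the local image id (sha256:4217...), the repo tag (ghcr.io/flatcar/calico/apiserver:v3.29.1), and the repo digest, which pins the exact content that was pulled. The small Go snippet below just splits the reference strings copied from the log to make those parts explicit; it is illustrative string handling, not containerd's reference parser.

package main

import (
	"fmt"
	"strings"
)

func main() {
	tagRef := "ghcr.io/flatcar/calico/apiserver:v3.29.1"
	digestRef := "ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486"

	repo, tag, _ := strings.Cut(tagRef, ":")    // repository + mutable tag
	_, digest, _ := strings.Cut(digestRef, "@") // content-addressed digest
	fmt.Println("repo:  ", repo)
	fmt.Println("tag:   ", tag)
	fmt.Println("digest:", digest)
}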
Feb 13 19:31:03.634010 containerd[1502]: time="2025-02-13T19:31:03.633389791Z" level=info msg="StartContainer for \"5301c2fe1066223555e5c0b1fa5584c2b029b8f2d7b4c9245ebef30352f8b33f\" returns successfully" Feb 13 19:31:04.108602 systemd-networkd[1430]: cali04f8b77fe2d: Gained IPv6LL Feb 13 19:31:04.492564 systemd-networkd[1430]: vxlan.calico: Gained IPv6LL Feb 13 19:31:04.582139 kubelet[2600]: E0213 19:31:04.581945 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:31:04.636177 kubelet[2600]: I0213 19:31:04.636112 2600 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6794f4445b-7cftj" podStartSLOduration=35.257613289 podStartE2EDuration="37.636094746s" podCreationTimestamp="2025-02-13 19:30:27 +0000 UTC" firstStartedPulling="2025-02-13 19:31:01.154152787 +0000 UTC m=+45.432184103" lastFinishedPulling="2025-02-13 19:31:03.532634244 +0000 UTC m=+47.810665560" observedRunningTime="2025-02-13 19:31:04.636011049 +0000 UTC m=+48.914042365" watchObservedRunningTime="2025-02-13 19:31:04.636094746 +0000 UTC m=+48.914126062" Feb 13 19:31:06.172082 containerd[1502]: time="2025-02-13T19:31:06.172019422Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:31:06.172829 containerd[1502]: time="2025-02-13T19:31:06.172784538Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Feb 13 19:31:06.175392 containerd[1502]: time="2025-02-13T19:31:06.174182582Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:31:06.178277 containerd[1502]: time="2025-02-13T19:31:06.177765456Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:31:06.178277 containerd[1502]: time="2025-02-13T19:31:06.178169715Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.64536397s" Feb 13 19:31:06.178277 containerd[1502]: time="2025-02-13T19:31:06.178191526Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Feb 13 19:31:06.179089 containerd[1502]: time="2025-02-13T19:31:06.179070786Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 19:31:06.198026 containerd[1502]: time="2025-02-13T19:31:06.197967992Z" level=info msg="CreateContainer within sandbox \"c2ad3ce96aec3e39146ef9a12bf120b804b514fb82f9bd97d91f159bedd76a24\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Feb 13 19:31:06.216722 containerd[1502]: time="2025-02-13T19:31:06.216546540Z" level=info msg="CreateContainer within sandbox \"c2ad3ce96aec3e39146ef9a12bf120b804b514fb82f9bd97d91f159bedd76a24\" 
for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"a28a179fb7180f66364afa3037fe530bd134291848f62d1a415542b0b204f858\"" Feb 13 19:31:06.220132 containerd[1502]: time="2025-02-13T19:31:06.218404697Z" level=info msg="StartContainer for \"a28a179fb7180f66364afa3037fe530bd134291848f62d1a415542b0b204f858\"" Feb 13 19:31:06.260439 systemd[1]: Started cri-containerd-a28a179fb7180f66364afa3037fe530bd134291848f62d1a415542b0b204f858.scope - libcontainer container a28a179fb7180f66364afa3037fe530bd134291848f62d1a415542b0b204f858. Feb 13 19:31:06.269185 kubelet[2600]: E0213 19:31:06.269155 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:31:06.306706 containerd[1502]: time="2025-02-13T19:31:06.306658521Z" level=info msg="StartContainer for \"a28a179fb7180f66364afa3037fe530bd134291848f62d1a415542b0b204f858\" returns successfully" Feb 13 19:31:06.597332 kubelet[2600]: I0213 19:31:06.597257 2600 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-68b8f8cf95-qdbnh" podStartSLOduration=34.920533993 podStartE2EDuration="39.597241945s" podCreationTimestamp="2025-02-13 19:30:27 +0000 UTC" firstStartedPulling="2025-02-13 19:31:01.502290438 +0000 UTC m=+45.780321754" lastFinishedPulling="2025-02-13 19:31:06.17899839 +0000 UTC m=+50.457029706" observedRunningTime="2025-02-13 19:31:06.596932154 +0000 UTC m=+50.874963470" watchObservedRunningTime="2025-02-13 19:31:06.597241945 +0000 UTC m=+50.875273261" Feb 13 19:31:06.739635 containerd[1502]: time="2025-02-13T19:31:06.739569655Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:31:06.740356 containerd[1502]: time="2025-02-13T19:31:06.740296098Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Feb 13 19:31:06.742166 containerd[1502]: time="2025-02-13T19:31:06.742141912Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 562.973603ms" Feb 13 19:31:06.742228 containerd[1502]: time="2025-02-13T19:31:06.742168612Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Feb 13 19:31:06.743087 containerd[1502]: time="2025-02-13T19:31:06.743052983Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 19:31:06.744141 containerd[1502]: time="2025-02-13T19:31:06.744097092Z" level=info msg="CreateContainer within sandbox \"1d8c2ab6c3440e2cbf869f466225d7cacaec94e9da57b05dfc4e32d643486a0c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 19:31:06.758455 containerd[1502]: time="2025-02-13T19:31:06.758421596Z" level=info msg="CreateContainer within sandbox \"1d8c2ab6c3440e2cbf869f466225d7cacaec94e9da57b05dfc4e32d643486a0c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"727d01131e21e5ab11a1ddb327471c2095cc9a9720750cbacb58d947c9a3656f\"" Feb 13 19:31:06.759995 containerd[1502]: 
time="2025-02-13T19:31:06.758829451Z" level=info msg="StartContainer for \"727d01131e21e5ab11a1ddb327471c2095cc9a9720750cbacb58d947c9a3656f\"" Feb 13 19:31:06.786459 systemd[1]: Started cri-containerd-727d01131e21e5ab11a1ddb327471c2095cc9a9720750cbacb58d947c9a3656f.scope - libcontainer container 727d01131e21e5ab11a1ddb327471c2095cc9a9720750cbacb58d947c9a3656f. Feb 13 19:31:06.825599 containerd[1502]: time="2025-02-13T19:31:06.825551014Z" level=info msg="StartContainer for \"727d01131e21e5ab11a1ddb327471c2095cc9a9720750cbacb58d947c9a3656f\" returns successfully" Feb 13 19:31:07.296703 systemd[1]: Started sshd@13-10.0.0.116:22-10.0.0.1:59374.service - OpenSSH per-connection server daemon (10.0.0.1:59374). Feb 13 19:31:07.342092 sshd[5923]: Accepted publickey for core from 10.0.0.1 port 59374 ssh2: RSA SHA256:ENn9hOvI2hLUXcV6iHA8gc9Z4CTEPvGDMkoVtxIuYbg Feb 13 19:31:07.343858 sshd-session[5923]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:31:07.348533 systemd-logind[1485]: New session 14 of user core. Feb 13 19:31:07.358442 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 19:31:07.486651 sshd[5925]: Connection closed by 10.0.0.1 port 59374 Feb 13 19:31:07.487073 sshd-session[5923]: pam_unix(sshd:session): session closed for user core Feb 13 19:31:07.496553 systemd[1]: sshd@13-10.0.0.116:22-10.0.0.1:59374.service: Deactivated successfully. Feb 13 19:31:07.498523 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 19:31:07.500287 systemd-logind[1485]: Session 14 logged out. Waiting for processes to exit. Feb 13 19:31:07.509647 systemd[1]: Started sshd@14-10.0.0.116:22-10.0.0.1:59390.service - OpenSSH per-connection server daemon (10.0.0.1:59390). Feb 13 19:31:07.510757 systemd-logind[1485]: Removed session 14. Feb 13 19:31:07.547959 sshd[5938]: Accepted publickey for core from 10.0.0.1 port 59390 ssh2: RSA SHA256:ENn9hOvI2hLUXcV6iHA8gc9Z4CTEPvGDMkoVtxIuYbg Feb 13 19:31:07.549735 sshd-session[5938]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:31:07.554026 systemd-logind[1485]: New session 15 of user core. Feb 13 19:31:07.560519 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 19:31:07.613795 kubelet[2600]: I0213 19:31:07.611415 2600 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6794f4445b-q9q8b" podStartSLOduration=35.982707536 podStartE2EDuration="40.611394072s" podCreationTimestamp="2025-02-13 19:30:27 +0000 UTC" firstStartedPulling="2025-02-13 19:31:02.114121045 +0000 UTC m=+46.392152361" lastFinishedPulling="2025-02-13 19:31:06.742807581 +0000 UTC m=+51.020838897" observedRunningTime="2025-02-13 19:31:07.60611833 +0000 UTC m=+51.884149656" watchObservedRunningTime="2025-02-13 19:31:07.611394072 +0000 UTC m=+51.889425388" Feb 13 19:31:07.620302 systemd[1]: run-containerd-runc-k8s.io-a28a179fb7180f66364afa3037fe530bd134291848f62d1a415542b0b204f858-runc.1mMDY3.mount: Deactivated successfully. Feb 13 19:31:07.720604 sshd[5940]: Connection closed by 10.0.0.1 port 59390 Feb 13 19:31:07.721085 sshd-session[5938]: pam_unix(sshd:session): session closed for user core Feb 13 19:31:07.741548 systemd[1]: sshd@14-10.0.0.116:22-10.0.0.1:59390.service: Deactivated successfully. Feb 13 19:31:07.743768 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 19:31:07.745756 systemd-logind[1485]: Session 15 logged out. Waiting for processes to exit. 
Feb 13 19:31:07.755346 systemd[1]: Started sshd@15-10.0.0.116:22-10.0.0.1:59392.service - OpenSSH per-connection server daemon (10.0.0.1:59392). Feb 13 19:31:07.757341 systemd-logind[1485]: Removed session 15. Feb 13 19:31:07.799099 sshd[5973]: Accepted publickey for core from 10.0.0.1 port 59392 ssh2: RSA SHA256:ENn9hOvI2hLUXcV6iHA8gc9Z4CTEPvGDMkoVtxIuYbg Feb 13 19:31:07.800747 sshd-session[5973]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:31:07.805066 systemd-logind[1485]: New session 16 of user core. Feb 13 19:31:07.816433 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 19:31:07.933746 sshd[5975]: Connection closed by 10.0.0.1 port 59392 Feb 13 19:31:07.934092 sshd-session[5973]: pam_unix(sshd:session): session closed for user core Feb 13 19:31:07.938295 systemd[1]: sshd@15-10.0.0.116:22-10.0.0.1:59392.service: Deactivated successfully. Feb 13 19:31:07.941047 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 19:31:07.941898 systemd-logind[1485]: Session 16 logged out. Waiting for processes to exit. Feb 13 19:31:07.942816 systemd-logind[1485]: Removed session 16. Feb 13 19:31:08.349959 containerd[1502]: time="2025-02-13T19:31:08.349900838Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:31:08.350668 containerd[1502]: time="2025-02-13T19:31:08.350610470Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Feb 13 19:31:08.351817 containerd[1502]: time="2025-02-13T19:31:08.351769876Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:31:08.353925 containerd[1502]: time="2025-02-13T19:31:08.353895514Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:31:08.354415 containerd[1502]: time="2025-02-13T19:31:08.354379613Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.611293137s" Feb 13 19:31:08.354448 containerd[1502]: time="2025-02-13T19:31:08.354418235Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Feb 13 19:31:08.356382 containerd[1502]: time="2025-02-13T19:31:08.356357154Z" level=info msg="CreateContainer within sandbox \"7eb003001418bf70a6fe9d98a67a0fd67c05cf544947504193fec55d79282bf7\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 19:31:08.382306 containerd[1502]: time="2025-02-13T19:31:08.382264682Z" level=info msg="CreateContainer within sandbox \"7eb003001418bf70a6fe9d98a67a0fd67c05cf544947504193fec55d79282bf7\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"f0f1c0af2f5ce63de5470067e04bcc9dbbe82ef11cef6d36f30d196f970a1de5\"" Feb 13 19:31:08.382872 containerd[1502]: time="2025-02-13T19:31:08.382852546Z" level=info msg="StartContainer for 
\"f0f1c0af2f5ce63de5470067e04bcc9dbbe82ef11cef6d36f30d196f970a1de5\"" Feb 13 19:31:08.417474 systemd[1]: Started cri-containerd-f0f1c0af2f5ce63de5470067e04bcc9dbbe82ef11cef6d36f30d196f970a1de5.scope - libcontainer container f0f1c0af2f5ce63de5470067e04bcc9dbbe82ef11cef6d36f30d196f970a1de5. Feb 13 19:31:08.487822 containerd[1502]: time="2025-02-13T19:31:08.487745690Z" level=info msg="StartContainer for \"f0f1c0af2f5ce63de5470067e04bcc9dbbe82ef11cef6d36f30d196f970a1de5\" returns successfully" Feb 13 19:31:08.493182 containerd[1502]: time="2025-02-13T19:31:08.492153422Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 13 19:31:08.599917 kubelet[2600]: I0213 19:31:08.599883 2600 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 19:31:10.251558 containerd[1502]: time="2025-02-13T19:31:10.251442376Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:31:10.252406 containerd[1502]: time="2025-02-13T19:31:10.252346903Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Feb 13 19:31:10.253728 containerd[1502]: time="2025-02-13T19:31:10.253699341Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:31:10.255866 containerd[1502]: time="2025-02-13T19:31:10.255836411Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:31:10.256526 containerd[1502]: time="2025-02-13T19:31:10.256492603Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.764302221s" Feb 13 19:31:10.256567 containerd[1502]: time="2025-02-13T19:31:10.256525414Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Feb 13 19:31:10.258557 containerd[1502]: time="2025-02-13T19:31:10.258533151Z" level=info msg="CreateContainer within sandbox \"7eb003001418bf70a6fe9d98a67a0fd67c05cf544947504193fec55d79282bf7\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 13 19:31:10.275341 containerd[1502]: time="2025-02-13T19:31:10.274611392Z" level=info msg="CreateContainer within sandbox \"7eb003001418bf70a6fe9d98a67a0fd67c05cf544947504193fec55d79282bf7\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"d88f221b7bc4369a013b9c523384c24d17c93567b0fb836b4aa3bc0e96c19f5b\"" Feb 13 19:31:10.277654 containerd[1502]: time="2025-02-13T19:31:10.277613765Z" level=info msg="StartContainer for \"d88f221b7bc4369a013b9c523384c24d17c93567b0fb836b4aa3bc0e96c19f5b\"" Feb 13 19:31:10.307077 systemd[1]: run-containerd-runc-k8s.io-d88f221b7bc4369a013b9c523384c24d17c93567b0fb836b4aa3bc0e96c19f5b-runc.J1pLQS.mount: Deactivated successfully. 
Feb 13 19:31:10.323644 systemd[1]: Started cri-containerd-d88f221b7bc4369a013b9c523384c24d17c93567b0fb836b4aa3bc0e96c19f5b.scope - libcontainer container d88f221b7bc4369a013b9c523384c24d17c93567b0fb836b4aa3bc0e96c19f5b. Feb 13 19:31:10.364540 containerd[1502]: time="2025-02-13T19:31:10.364470529Z" level=info msg="StartContainer for \"d88f221b7bc4369a013b9c523384c24d17c93567b0fb836b4aa3bc0e96c19f5b\" returns successfully" Feb 13 19:31:10.880783 kubelet[2600]: I0213 19:31:10.880737 2600 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 13 19:31:10.880783 kubelet[2600]: I0213 19:31:10.880778 2600 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 13 19:31:12.950702 systemd[1]: Started sshd@16-10.0.0.116:22-10.0.0.1:59408.service - OpenSSH per-connection server daemon (10.0.0.1:59408). Feb 13 19:31:13.004508 sshd[6088]: Accepted publickey for core from 10.0.0.1 port 59408 ssh2: RSA SHA256:ENn9hOvI2hLUXcV6iHA8gc9Z4CTEPvGDMkoVtxIuYbg Feb 13 19:31:13.006529 sshd-session[6088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:31:13.011857 systemd-logind[1485]: New session 17 of user core. Feb 13 19:31:13.020534 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 19:31:13.161646 sshd[6090]: Connection closed by 10.0.0.1 port 59408 Feb 13 19:31:13.162163 sshd-session[6088]: pam_unix(sshd:session): session closed for user core Feb 13 19:31:13.166095 systemd[1]: sshd@16-10.0.0.116:22-10.0.0.1:59408.service: Deactivated successfully. Feb 13 19:31:13.168475 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 19:31:13.170839 systemd-logind[1485]: Session 17 logged out. Waiting for processes to exit. Feb 13 19:31:13.171817 systemd-logind[1485]: Removed session 17. Feb 13 19:31:15.807430 containerd[1502]: time="2025-02-13T19:31:15.807388823Z" level=info msg="StopPodSandbox for \"4d57794a877b0b6953752ccedec2bd85d68373c09318c76ac2c537449d27f46f\"" Feb 13 19:31:15.807834 containerd[1502]: time="2025-02-13T19:31:15.807497035Z" level=info msg="TearDown network for sandbox \"4d57794a877b0b6953752ccedec2bd85d68373c09318c76ac2c537449d27f46f\" successfully" Feb 13 19:31:15.807834 containerd[1502]: time="2025-02-13T19:31:15.807506633Z" level=info msg="StopPodSandbox for \"4d57794a877b0b6953752ccedec2bd85d68373c09318c76ac2c537449d27f46f\" returns successfully" Feb 13 19:31:15.807834 containerd[1502]: time="2025-02-13T19:31:15.807814932Z" level=info msg="RemovePodSandbox for \"4d57794a877b0b6953752ccedec2bd85d68373c09318c76ac2c537449d27f46f\"" Feb 13 19:31:15.817250 containerd[1502]: time="2025-02-13T19:31:15.817225137Z" level=info msg="Forcibly stopping sandbox \"4d57794a877b0b6953752ccedec2bd85d68373c09318c76ac2c537449d27f46f\"" Feb 13 19:31:15.817353 containerd[1502]: time="2025-02-13T19:31:15.817297022Z" level=info msg="TearDown network for sandbox \"4d57794a877b0b6953752ccedec2bd85d68373c09318c76ac2c537449d27f46f\" successfully" Feb 13 19:31:15.982596 containerd[1502]: time="2025-02-13T19:31:15.982530576Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4d57794a877b0b6953752ccedec2bd85d68373c09318c76ac2c537449d27f46f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
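The csi_plugin lines above record kubelet validating and registering the csi.tigera.io driver over its unix socket. A quick way to confirm that socket is alive from the node is simply to dial it; the sketch below does only that (the path comes from the log, everything else is a generic connectivity check, not part of the CSI registration handshake).

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const sock = "/var/lib/kubelet/plugins/csi.tigera.io/csi.sock" // path from the log
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Println("CSI socket not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("CSI socket accepting connections at", sock)
}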
Feb 13 19:31:15.982755 containerd[1502]: time="2025-02-13T19:31:15.982646254Z" level=info msg="RemovePodSandbox \"4d57794a877b0b6953752ccedec2bd85d68373c09318c76ac2c537449d27f46f\" returns successfully" Feb 13 19:31:15.983258 containerd[1502]: time="2025-02-13T19:31:15.983228946Z" level=info msg="StopPodSandbox for \"06b76013e050990e084073338277aecc1ce3e9a19060a462af91ada291c27c21\"" Feb 13 19:31:15.983396 containerd[1502]: time="2025-02-13T19:31:15.983372746Z" level=info msg="TearDown network for sandbox \"06b76013e050990e084073338277aecc1ce3e9a19060a462af91ada291c27c21\" successfully" Feb 13 19:31:15.983396 containerd[1502]: time="2025-02-13T19:31:15.983388295Z" level=info msg="StopPodSandbox for \"06b76013e050990e084073338277aecc1ce3e9a19060a462af91ada291c27c21\" returns successfully" Feb 13 19:31:15.983776 containerd[1502]: time="2025-02-13T19:31:15.983734134Z" level=info msg="RemovePodSandbox for \"06b76013e050990e084073338277aecc1ce3e9a19060a462af91ada291c27c21\"" Feb 13 19:31:15.983776 containerd[1502]: time="2025-02-13T19:31:15.983781923Z" level=info msg="Forcibly stopping sandbox \"06b76013e050990e084073338277aecc1ce3e9a19060a462af91ada291c27c21\"" Feb 13 19:31:15.983935 containerd[1502]: time="2025-02-13T19:31:15.983885458Z" level=info msg="TearDown network for sandbox \"06b76013e050990e084073338277aecc1ce3e9a19060a462af91ada291c27c21\" successfully" Feb 13 19:31:15.987718 containerd[1502]: time="2025-02-13T19:31:15.987675900Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"06b76013e050990e084073338277aecc1ce3e9a19060a462af91ada291c27c21\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:31:15.987774 containerd[1502]: time="2025-02-13T19:31:15.987738658Z" level=info msg="RemovePodSandbox \"06b76013e050990e084073338277aecc1ce3e9a19060a462af91ada291c27c21\" returns successfully" Feb 13 19:31:15.988029 containerd[1502]: time="2025-02-13T19:31:15.988004666Z" level=info msg="StopPodSandbox for \"8a68fb01407d9c834800a26cc46bd4437b62dc056b0e1350c4907a96b87c511d\"" Feb 13 19:31:15.988129 containerd[1502]: time="2025-02-13T19:31:15.988093784Z" level=info msg="TearDown network for sandbox \"8a68fb01407d9c834800a26cc46bd4437b62dc056b0e1350c4907a96b87c511d\" successfully" Feb 13 19:31:15.988129 containerd[1502]: time="2025-02-13T19:31:15.988111186Z" level=info msg="StopPodSandbox for \"8a68fb01407d9c834800a26cc46bd4437b62dc056b0e1350c4907a96b87c511d\" returns successfully" Feb 13 19:31:15.988389 containerd[1502]: time="2025-02-13T19:31:15.988364461Z" level=info msg="RemovePodSandbox for \"8a68fb01407d9c834800a26cc46bd4437b62dc056b0e1350c4907a96b87c511d\"" Feb 13 19:31:15.988464 containerd[1502]: time="2025-02-13T19:31:15.988393436Z" level=info msg="Forcibly stopping sandbox \"8a68fb01407d9c834800a26cc46bd4437b62dc056b0e1350c4907a96b87c511d\"" Feb 13 19:31:15.988529 containerd[1502]: time="2025-02-13T19:31:15.988477954Z" level=info msg="TearDown network for sandbox \"8a68fb01407d9c834800a26cc46bd4437b62dc056b0e1350c4907a96b87c511d\" successfully" Feb 13 19:31:15.992501 containerd[1502]: time="2025-02-13T19:31:15.992466637Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8a68fb01407d9c834800a26cc46bd4437b62dc056b0e1350c4907a96b87c511d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:31:15.992720 containerd[1502]: time="2025-02-13T19:31:15.992532952Z" level=info msg="RemovePodSandbox \"8a68fb01407d9c834800a26cc46bd4437b62dc056b0e1350c4907a96b87c511d\" returns successfully" Feb 13 19:31:15.992880 containerd[1502]: time="2025-02-13T19:31:15.992831142Z" level=info msg="StopPodSandbox for \"cf302ac4983e4b5752dc5e187cea36041dfa162432d3e8affbda52f81e45b173\"" Feb 13 19:31:15.992950 containerd[1502]: time="2025-02-13T19:31:15.992933474Z" level=info msg="TearDown network for sandbox \"cf302ac4983e4b5752dc5e187cea36041dfa162432d3e8affbda52f81e45b173\" successfully" Feb 13 19:31:15.992950 containerd[1502]: time="2025-02-13T19:31:15.992947059Z" level=info msg="StopPodSandbox for \"cf302ac4983e4b5752dc5e187cea36041dfa162432d3e8affbda52f81e45b173\" returns successfully" Feb 13 19:31:15.993185 containerd[1502]: time="2025-02-13T19:31:15.993158496Z" level=info msg="RemovePodSandbox for \"cf302ac4983e4b5752dc5e187cea36041dfa162432d3e8affbda52f81e45b173\"" Feb 13 19:31:15.993185 containerd[1502]: time="2025-02-13T19:31:15.993180337Z" level=info msg="Forcibly stopping sandbox \"cf302ac4983e4b5752dc5e187cea36041dfa162432d3e8affbda52f81e45b173\"" Feb 13 19:31:15.993285 containerd[1502]: time="2025-02-13T19:31:15.993249106Z" level=info msg="TearDown network for sandbox \"cf302ac4983e4b5752dc5e187cea36041dfa162432d3e8affbda52f81e45b173\" successfully" Feb 13 19:31:15.996932 containerd[1502]: time="2025-02-13T19:31:15.996899745Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cf302ac4983e4b5752dc5e187cea36041dfa162432d3e8affbda52f81e45b173\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:31:15.996991 containerd[1502]: time="2025-02-13T19:31:15.996942806Z" level=info msg="RemovePodSandbox \"cf302ac4983e4b5752dc5e187cea36041dfa162432d3e8affbda52f81e45b173\" returns successfully" Feb 13 19:31:15.997190 containerd[1502]: time="2025-02-13T19:31:15.997164241Z" level=info msg="StopPodSandbox for \"ca70552ffab426979d1de2107ddb87e6551aa9adf1dd6f0eb5a66684f19cdda4\"" Feb 13 19:31:15.997298 containerd[1502]: time="2025-02-13T19:31:15.997276101Z" level=info msg="TearDown network for sandbox \"ca70552ffab426979d1de2107ddb87e6551aa9adf1dd6f0eb5a66684f19cdda4\" successfully" Feb 13 19:31:15.997298 containerd[1502]: time="2025-02-13T19:31:15.997289446Z" level=info msg="StopPodSandbox for \"ca70552ffab426979d1de2107ddb87e6551aa9adf1dd6f0eb5a66684f19cdda4\" returns successfully" Feb 13 19:31:15.997545 containerd[1502]: time="2025-02-13T19:31:15.997523845Z" level=info msg="RemovePodSandbox for \"ca70552ffab426979d1de2107ddb87e6551aa9adf1dd6f0eb5a66684f19cdda4\"" Feb 13 19:31:15.997603 containerd[1502]: time="2025-02-13T19:31:15.997549064Z" level=info msg="Forcibly stopping sandbox \"ca70552ffab426979d1de2107ddb87e6551aa9adf1dd6f0eb5a66684f19cdda4\"" Feb 13 19:31:15.997682 containerd[1502]: time="2025-02-13T19:31:15.997649412Z" level=info msg="TearDown network for sandbox \"ca70552ffab426979d1de2107ddb87e6551aa9adf1dd6f0eb5a66684f19cdda4\" successfully" Feb 13 19:31:16.001110 containerd[1502]: time="2025-02-13T19:31:16.001083825Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ca70552ffab426979d1de2107ddb87e6551aa9adf1dd6f0eb5a66684f19cdda4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:31:16.001168 containerd[1502]: time="2025-02-13T19:31:16.001123469Z" level=info msg="RemovePodSandbox \"ca70552ffab426979d1de2107ddb87e6551aa9adf1dd6f0eb5a66684f19cdda4\" returns successfully" Feb 13 19:31:16.001421 containerd[1502]: time="2025-02-13T19:31:16.001394959Z" level=info msg="StopPodSandbox for \"c049dfb5c4e4f7b160d052644270f741c94db9967ffc69c3eaa01c0f5ec5566f\"" Feb 13 19:31:16.001514 containerd[1502]: time="2025-02-13T19:31:16.001493333Z" level=info msg="TearDown network for sandbox \"c049dfb5c4e4f7b160d052644270f741c94db9967ffc69c3eaa01c0f5ec5566f\" successfully" Feb 13 19:31:16.001514 containerd[1502]: time="2025-02-13T19:31:16.001507560Z" level=info msg="StopPodSandbox for \"c049dfb5c4e4f7b160d052644270f741c94db9967ffc69c3eaa01c0f5ec5566f\" returns successfully" Feb 13 19:31:16.001748 containerd[1502]: time="2025-02-13T19:31:16.001714328Z" level=info msg="RemovePodSandbox for \"c049dfb5c4e4f7b160d052644270f741c94db9967ffc69c3eaa01c0f5ec5566f\"" Feb 13 19:31:16.001748 containerd[1502]: time="2025-02-13T19:31:16.001736550Z" level=info msg="Forcibly stopping sandbox \"c049dfb5c4e4f7b160d052644270f741c94db9967ffc69c3eaa01c0f5ec5566f\"" Feb 13 19:31:16.001862 containerd[1502]: time="2025-02-13T19:31:16.001822581Z" level=info msg="TearDown network for sandbox \"c049dfb5c4e4f7b160d052644270f741c94db9967ffc69c3eaa01c0f5ec5566f\" successfully" Feb 13 19:31:16.005301 containerd[1502]: time="2025-02-13T19:31:16.005273385Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c049dfb5c4e4f7b160d052644270f741c94db9967ffc69c3eaa01c0f5ec5566f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:31:16.005377 containerd[1502]: time="2025-02-13T19:31:16.005305175Z" level=info msg="RemovePodSandbox \"c049dfb5c4e4f7b160d052644270f741c94db9967ffc69c3eaa01c0f5ec5566f\" returns successfully" Feb 13 19:31:16.005677 containerd[1502]: time="2025-02-13T19:31:16.005657676Z" level=info msg="StopPodSandbox for \"259a6c2041cb9f5c709110e833b792e9227b4e6e9df73f046bd2d34be5e0fa0a\"" Feb 13 19:31:16.005752 containerd[1502]: time="2025-02-13T19:31:16.005739019Z" level=info msg="TearDown network for sandbox \"259a6c2041cb9f5c709110e833b792e9227b4e6e9df73f046bd2d34be5e0fa0a\" successfully" Feb 13 19:31:16.005787 containerd[1502]: time="2025-02-13T19:31:16.005750711Z" level=info msg="StopPodSandbox for \"259a6c2041cb9f5c709110e833b792e9227b4e6e9df73f046bd2d34be5e0fa0a\" returns successfully" Feb 13 19:31:16.006024 containerd[1502]: time="2025-02-13T19:31:16.005998565Z" level=info msg="RemovePodSandbox for \"259a6c2041cb9f5c709110e833b792e9227b4e6e9df73f046bd2d34be5e0fa0a\"" Feb 13 19:31:16.006073 containerd[1502]: time="2025-02-13T19:31:16.006028792Z" level=info msg="Forcibly stopping sandbox \"259a6c2041cb9f5c709110e833b792e9227b4e6e9df73f046bd2d34be5e0fa0a\"" Feb 13 19:31:16.006149 containerd[1502]: time="2025-02-13T19:31:16.006110906Z" level=info msg="TearDown network for sandbox \"259a6c2041cb9f5c709110e833b792e9227b4e6e9df73f046bd2d34be5e0fa0a\" successfully" Feb 13 19:31:16.009824 containerd[1502]: time="2025-02-13T19:31:16.009793916Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"259a6c2041cb9f5c709110e833b792e9227b4e6e9df73f046bd2d34be5e0fa0a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:31:16.009871 containerd[1502]: time="2025-02-13T19:31:16.009840574Z" level=info msg="RemovePodSandbox \"259a6c2041cb9f5c709110e833b792e9227b4e6e9df73f046bd2d34be5e0fa0a\" returns successfully" Feb 13 19:31:16.010088 containerd[1502]: time="2025-02-13T19:31:16.010056338Z" level=info msg="StopPodSandbox for \"029de7693a9f8fae3fc5f15fc92d9b4380854aa9a9856579c8e3ed5055510b90\"" Feb 13 19:31:16.010150 containerd[1502]: time="2025-02-13T19:31:16.010136919Z" level=info msg="TearDown network for sandbox \"029de7693a9f8fae3fc5f15fc92d9b4380854aa9a9856579c8e3ed5055510b90\" successfully" Feb 13 19:31:16.010190 containerd[1502]: time="2025-02-13T19:31:16.010149623Z" level=info msg="StopPodSandbox for \"029de7693a9f8fae3fc5f15fc92d9b4380854aa9a9856579c8e3ed5055510b90\" returns successfully" Feb 13 19:31:16.010456 containerd[1502]: time="2025-02-13T19:31:16.010435870Z" level=info msg="RemovePodSandbox for \"029de7693a9f8fae3fc5f15fc92d9b4380854aa9a9856579c8e3ed5055510b90\"" Feb 13 19:31:16.010506 containerd[1502]: time="2025-02-13T19:31:16.010458132Z" level=info msg="Forcibly stopping sandbox \"029de7693a9f8fae3fc5f15fc92d9b4380854aa9a9856579c8e3ed5055510b90\"" Feb 13 19:31:16.010570 containerd[1502]: time="2025-02-13T19:31:16.010542862Z" level=info msg="TearDown network for sandbox \"029de7693a9f8fae3fc5f15fc92d9b4380854aa9a9856579c8e3ed5055510b90\" successfully" Feb 13 19:31:16.014726 containerd[1502]: time="2025-02-13T19:31:16.014689341Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"029de7693a9f8fae3fc5f15fc92d9b4380854aa9a9856579c8e3ed5055510b90\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:31:16.014726 containerd[1502]: time="2025-02-13T19:31:16.014728785Z" level=info msg="RemovePodSandbox \"029de7693a9f8fae3fc5f15fc92d9b4380854aa9a9856579c8e3ed5055510b90\" returns successfully" Feb 13 19:31:16.014953 containerd[1502]: time="2025-02-13T19:31:16.014938458Z" level=info msg="StopPodSandbox for \"92aa5e65cd130c3a58ac37b4adf370bc37fd5b279afabfde37dbc8bf70e38b6a\"" Feb 13 19:31:16.015038 containerd[1502]: time="2025-02-13T19:31:16.015015382Z" level=info msg="TearDown network for sandbox \"92aa5e65cd130c3a58ac37b4adf370bc37fd5b279afabfde37dbc8bf70e38b6a\" successfully" Feb 13 19:31:16.015038 containerd[1502]: time="2025-02-13T19:31:16.015029619Z" level=info msg="StopPodSandbox for \"92aa5e65cd130c3a58ac37b4adf370bc37fd5b279afabfde37dbc8bf70e38b6a\" returns successfully" Feb 13 19:31:16.015248 containerd[1502]: time="2025-02-13T19:31:16.015223933Z" level=info msg="RemovePodSandbox for \"92aa5e65cd130c3a58ac37b4adf370bc37fd5b279afabfde37dbc8bf70e38b6a\"" Feb 13 19:31:16.015248 containerd[1502]: time="2025-02-13T19:31:16.015245383Z" level=info msg="Forcibly stopping sandbox \"92aa5e65cd130c3a58ac37b4adf370bc37fd5b279afabfde37dbc8bf70e38b6a\"" Feb 13 19:31:16.015376 containerd[1502]: time="2025-02-13T19:31:16.015337206Z" level=info msg="TearDown network for sandbox \"92aa5e65cd130c3a58ac37b4adf370bc37fd5b279afabfde37dbc8bf70e38b6a\" successfully" Feb 13 19:31:16.019032 containerd[1502]: time="2025-02-13T19:31:16.018993165Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"92aa5e65cd130c3a58ac37b4adf370bc37fd5b279afabfde37dbc8bf70e38b6a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:31:16.019093 containerd[1502]: time="2025-02-13T19:31:16.019057656Z" level=info msg="RemovePodSandbox \"92aa5e65cd130c3a58ac37b4adf370bc37fd5b279afabfde37dbc8bf70e38b6a\" returns successfully" Feb 13 19:31:16.019518 containerd[1502]: time="2025-02-13T19:31:16.019451775Z" level=info msg="StopPodSandbox for \"6d5d01f5764dfa313010e551a484fd652da553aef24f0843408d75f6c678d0dc\"" Feb 13 19:31:16.019622 containerd[1502]: time="2025-02-13T19:31:16.019602508Z" level=info msg="TearDown network for sandbox \"6d5d01f5764dfa313010e551a484fd652da553aef24f0843408d75f6c678d0dc\" successfully" Feb 13 19:31:16.019622 containerd[1502]: time="2025-02-13T19:31:16.019617947Z" level=info msg="StopPodSandbox for \"6d5d01f5764dfa313010e551a484fd652da553aef24f0843408d75f6c678d0dc\" returns successfully" Feb 13 19:31:16.020358 containerd[1502]: time="2025-02-13T19:31:16.019914233Z" level=info msg="RemovePodSandbox for \"6d5d01f5764dfa313010e551a484fd652da553aef24f0843408d75f6c678d0dc\"" Feb 13 19:31:16.020358 containerd[1502]: time="2025-02-13T19:31:16.019938629Z" level=info msg="Forcibly stopping sandbox \"6d5d01f5764dfa313010e551a484fd652da553aef24f0843408d75f6c678d0dc\"" Feb 13 19:31:16.020358 containerd[1502]: time="2025-02-13T19:31:16.020006707Z" level=info msg="TearDown network for sandbox \"6d5d01f5764dfa313010e551a484fd652da553aef24f0843408d75f6c678d0dc\" successfully" Feb 13 19:31:16.023724 containerd[1502]: time="2025-02-13T19:31:16.023694585Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6d5d01f5764dfa313010e551a484fd652da553aef24f0843408d75f6c678d0dc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:31:16.023794 containerd[1502]: time="2025-02-13T19:31:16.023753716Z" level=info msg="RemovePodSandbox \"6d5d01f5764dfa313010e551a484fd652da553aef24f0843408d75f6c678d0dc\" returns successfully" Feb 13 19:31:16.024076 containerd[1502]: time="2025-02-13T19:31:16.024027781Z" level=info msg="StopPodSandbox for \"da21058157a478777dd793b0c03fcbc23d8d5e245d11dd697c88feb605601cc5\"" Feb 13 19:31:16.024131 containerd[1502]: time="2025-02-13T19:31:16.024119172Z" level=info msg="TearDown network for sandbox \"da21058157a478777dd793b0c03fcbc23d8d5e245d11dd697c88feb605601cc5\" successfully" Feb 13 19:31:16.024162 containerd[1502]: time="2025-02-13T19:31:16.024129541Z" level=info msg="StopPodSandbox for \"da21058157a478777dd793b0c03fcbc23d8d5e245d11dd697c88feb605601cc5\" returns successfully" Feb 13 19:31:16.024507 containerd[1502]: time="2025-02-13T19:31:16.024477444Z" level=info msg="RemovePodSandbox for \"da21058157a478777dd793b0c03fcbc23d8d5e245d11dd697c88feb605601cc5\"" Feb 13 19:31:16.024546 containerd[1502]: time="2025-02-13T19:31:16.024511428Z" level=info msg="Forcibly stopping sandbox \"da21058157a478777dd793b0c03fcbc23d8d5e245d11dd697c88feb605601cc5\"" Feb 13 19:31:16.024651 containerd[1502]: time="2025-02-13T19:31:16.024603420Z" level=info msg="TearDown network for sandbox \"da21058157a478777dd793b0c03fcbc23d8d5e245d11dd697c88feb605601cc5\" successfully" Feb 13 19:31:16.028773 containerd[1502]: time="2025-02-13T19:31:16.028645453Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"da21058157a478777dd793b0c03fcbc23d8d5e245d11dd697c88feb605601cc5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:31:16.028773 containerd[1502]: time="2025-02-13T19:31:16.028703953Z" level=info msg="RemovePodSandbox \"da21058157a478777dd793b0c03fcbc23d8d5e245d11dd697c88feb605601cc5\" returns successfully" Feb 13 19:31:16.029009 containerd[1502]: time="2025-02-13T19:31:16.028986132Z" level=info msg="StopPodSandbox for \"c2dc30846fa58b1ecad4344a3d9824db5c7ab58d70e840480ea575cd61100f5f\"" Feb 13 19:31:16.029103 containerd[1502]: time="2025-02-13T19:31:16.029084437Z" level=info msg="TearDown network for sandbox \"c2dc30846fa58b1ecad4344a3d9824db5c7ab58d70e840480ea575cd61100f5f\" successfully" Feb 13 19:31:16.029131 containerd[1502]: time="2025-02-13T19:31:16.029101278Z" level=info msg="StopPodSandbox for \"c2dc30846fa58b1ecad4344a3d9824db5c7ab58d70e840480ea575cd61100f5f\" returns successfully" Feb 13 19:31:16.030421 containerd[1502]: time="2025-02-13T19:31:16.030390498Z" level=info msg="RemovePodSandbox for \"c2dc30846fa58b1ecad4344a3d9824db5c7ab58d70e840480ea575cd61100f5f\"" Feb 13 19:31:16.030464 containerd[1502]: time="2025-02-13T19:31:16.030421907Z" level=info msg="Forcibly stopping sandbox \"c2dc30846fa58b1ecad4344a3d9824db5c7ab58d70e840480ea575cd61100f5f\"" Feb 13 19:31:16.030546 containerd[1502]: time="2025-02-13T19:31:16.030502348Z" level=info msg="TearDown network for sandbox \"c2dc30846fa58b1ecad4344a3d9824db5c7ab58d70e840480ea575cd61100f5f\" successfully" Feb 13 19:31:16.034865 containerd[1502]: time="2025-02-13T19:31:16.034834986Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c2dc30846fa58b1ecad4344a3d9824db5c7ab58d70e840480ea575cd61100f5f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:31:16.034954 containerd[1502]: time="2025-02-13T19:31:16.034884008Z" level=info msg="RemovePodSandbox \"c2dc30846fa58b1ecad4344a3d9824db5c7ab58d70e840480ea575cd61100f5f\" returns successfully" Feb 13 19:31:16.035160 containerd[1502]: time="2025-02-13T19:31:16.035138776Z" level=info msg="StopPodSandbox for \"11c47a7f5cb5d79bd9c9a79d6da7f46398332e0fada4bee2607023589053bac7\"" Feb 13 19:31:16.035393 containerd[1502]: time="2025-02-13T19:31:16.035366803Z" level=info msg="TearDown network for sandbox \"11c47a7f5cb5d79bd9c9a79d6da7f46398332e0fada4bee2607023589053bac7\" successfully" Feb 13 19:31:16.035393 containerd[1502]: time="2025-02-13T19:31:16.035382242Z" level=info msg="StopPodSandbox for \"11c47a7f5cb5d79bd9c9a79d6da7f46398332e0fada4bee2607023589053bac7\" returns successfully" Feb 13 19:31:16.035754 containerd[1502]: time="2025-02-13T19:31:16.035724103Z" level=info msg="RemovePodSandbox for \"11c47a7f5cb5d79bd9c9a79d6da7f46398332e0fada4bee2607023589053bac7\"" Feb 13 19:31:16.035805 containerd[1502]: time="2025-02-13T19:31:16.035756945Z" level=info msg="Forcibly stopping sandbox \"11c47a7f5cb5d79bd9c9a79d6da7f46398332e0fada4bee2607023589053bac7\"" Feb 13 19:31:16.035910 containerd[1502]: time="2025-02-13T19:31:16.035869667Z" level=info msg="TearDown network for sandbox \"11c47a7f5cb5d79bd9c9a79d6da7f46398332e0fada4bee2607023589053bac7\" successfully" Feb 13 19:31:16.039609 containerd[1502]: time="2025-02-13T19:31:16.039570169Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"11c47a7f5cb5d79bd9c9a79d6da7f46398332e0fada4bee2607023589053bac7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:31:16.039678 containerd[1502]: time="2025-02-13T19:31:16.039624281Z" level=info msg="RemovePodSandbox \"11c47a7f5cb5d79bd9c9a79d6da7f46398332e0fada4bee2607023589053bac7\" returns successfully" Feb 13 19:31:16.039905 containerd[1502]: time="2025-02-13T19:31:16.039879119Z" level=info msg="StopPodSandbox for \"9a576abef7e3f4083c481695c9f493c39417d50066da6a1bc9040a8a2fdf6340\"" Feb 13 19:31:16.040010 containerd[1502]: time="2025-02-13T19:31:16.039984457Z" level=info msg="TearDown network for sandbox \"9a576abef7e3f4083c481695c9f493c39417d50066da6a1bc9040a8a2fdf6340\" successfully" Feb 13 19:31:16.040010 containerd[1502]: time="2025-02-13T19:31:16.040002801Z" level=info msg="StopPodSandbox for \"9a576abef7e3f4083c481695c9f493c39417d50066da6a1bc9040a8a2fdf6340\" returns successfully" Feb 13 19:31:16.040229 containerd[1502]: time="2025-02-13T19:31:16.040205582Z" level=info msg="RemovePodSandbox for \"9a576abef7e3f4083c481695c9f493c39417d50066da6a1bc9040a8a2fdf6340\"" Feb 13 19:31:16.040229 containerd[1502]: time="2025-02-13T19:31:16.040228334Z" level=info msg="Forcibly stopping sandbox \"9a576abef7e3f4083c481695c9f493c39417d50066da6a1bc9040a8a2fdf6340\"" Feb 13 19:31:16.040339 containerd[1502]: time="2025-02-13T19:31:16.040298917Z" level=info msg="TearDown network for sandbox \"9a576abef7e3f4083c481695c9f493c39417d50066da6a1bc9040a8a2fdf6340\" successfully" Feb 13 19:31:16.043839 containerd[1502]: time="2025-02-13T19:31:16.043809734Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9a576abef7e3f4083c481695c9f493c39417d50066da6a1bc9040a8a2fdf6340\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:31:16.043912 containerd[1502]: time="2025-02-13T19:31:16.043861200Z" level=info msg="RemovePodSandbox \"9a576abef7e3f4083c481695c9f493c39417d50066da6a1bc9040a8a2fdf6340\" returns successfully" Feb 13 19:31:16.044140 containerd[1502]: time="2025-02-13T19:31:16.044108504Z" level=info msg="StopPodSandbox for \"919def795fcabe325c259e12ac19e82f4306388eba272371855df017ef8e2d5d\"" Feb 13 19:31:16.044238 containerd[1502]: time="2025-02-13T19:31:16.044221837Z" level=info msg="TearDown network for sandbox \"919def795fcabe325c259e12ac19e82f4306388eba272371855df017ef8e2d5d\" successfully" Feb 13 19:31:16.044238 containerd[1502]: time="2025-02-13T19:31:16.044235172Z" level=info msg="StopPodSandbox for \"919def795fcabe325c259e12ac19e82f4306388eba272371855df017ef8e2d5d\" returns successfully" Feb 13 19:31:16.044496 containerd[1502]: time="2025-02-13T19:31:16.044468559Z" level=info msg="RemovePodSandbox for \"919def795fcabe325c259e12ac19e82f4306388eba272371855df017ef8e2d5d\"" Feb 13 19:31:16.044548 containerd[1502]: time="2025-02-13T19:31:16.044494778Z" level=info msg="Forcibly stopping sandbox \"919def795fcabe325c259e12ac19e82f4306388eba272371855df017ef8e2d5d\"" Feb 13 19:31:16.044665 containerd[1502]: time="2025-02-13T19:31:16.044628720Z" level=info msg="TearDown network for sandbox \"919def795fcabe325c259e12ac19e82f4306388eba272371855df017ef8e2d5d\" successfully" Feb 13 19:31:16.048120 containerd[1502]: time="2025-02-13T19:31:16.048097277Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"919def795fcabe325c259e12ac19e82f4306388eba272371855df017ef8e2d5d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:31:16.048191 containerd[1502]: time="2025-02-13T19:31:16.048129448Z" level=info msg="RemovePodSandbox \"919def795fcabe325c259e12ac19e82f4306388eba272371855df017ef8e2d5d\" returns successfully" Feb 13 19:31:16.048403 containerd[1502]: time="2025-02-13T19:31:16.048379186Z" level=info msg="StopPodSandbox for \"68b36cedfb71a4b13925ca9c776e9f716e294ad3c104d860f484f0a21e16f665\"" Feb 13 19:31:16.048493 containerd[1502]: time="2025-02-13T19:31:16.048468624Z" level=info msg="TearDown network for sandbox \"68b36cedfb71a4b13925ca9c776e9f716e294ad3c104d860f484f0a21e16f665\" successfully" Feb 13 19:31:16.048493 containerd[1502]: time="2025-02-13T19:31:16.048485355Z" level=info msg="StopPodSandbox for \"68b36cedfb71a4b13925ca9c776e9f716e294ad3c104d860f484f0a21e16f665\" returns successfully" Feb 13 19:31:16.048723 containerd[1502]: time="2025-02-13T19:31:16.048698695Z" level=info msg="RemovePodSandbox for \"68b36cedfb71a4b13925ca9c776e9f716e294ad3c104d860f484f0a21e16f665\"" Feb 13 19:31:16.048723 containerd[1502]: time="2025-02-13T19:31:16.048719304Z" level=info msg="Forcibly stopping sandbox \"68b36cedfb71a4b13925ca9c776e9f716e294ad3c104d860f484f0a21e16f665\"" Feb 13 19:31:16.048811 containerd[1502]: time="2025-02-13T19:31:16.048780068Z" level=info msg="TearDown network for sandbox \"68b36cedfb71a4b13925ca9c776e9f716e294ad3c104d860f484f0a21e16f665\" successfully" Feb 13 19:31:16.052171 containerd[1502]: time="2025-02-13T19:31:16.052147977Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"68b36cedfb71a4b13925ca9c776e9f716e294ad3c104d860f484f0a21e16f665\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:31:16.052228 containerd[1502]: time="2025-02-13T19:31:16.052178905Z" level=info msg="RemovePodSandbox \"68b36cedfb71a4b13925ca9c776e9f716e294ad3c104d860f484f0a21e16f665\" returns successfully" Feb 13 19:31:16.052472 containerd[1502]: time="2025-02-13T19:31:16.052455674Z" level=info msg="StopPodSandbox for \"cdeca7bc8c5d40767aa8b82f58ac84479cad41960dfa49f467a04bf46aa03b20\"" Feb 13 19:31:16.052590 containerd[1502]: time="2025-02-13T19:31:16.052532698Z" level=info msg="TearDown network for sandbox \"cdeca7bc8c5d40767aa8b82f58ac84479cad41960dfa49f467a04bf46aa03b20\" successfully" Feb 13 19:31:16.052590 containerd[1502]: time="2025-02-13T19:31:16.052544731Z" level=info msg="StopPodSandbox for \"cdeca7bc8c5d40767aa8b82f58ac84479cad41960dfa49f467a04bf46aa03b20\" returns successfully" Feb 13 19:31:16.052732 containerd[1502]: time="2025-02-13T19:31:16.052713027Z" level=info msg="RemovePodSandbox for \"cdeca7bc8c5d40767aa8b82f58ac84479cad41960dfa49f467a04bf46aa03b20\"" Feb 13 19:31:16.052732 containerd[1502]: time="2025-02-13T19:31:16.052730299Z" level=info msg="Forcibly stopping sandbox \"cdeca7bc8c5d40767aa8b82f58ac84479cad41960dfa49f467a04bf46aa03b20\"" Feb 13 19:31:16.052812 containerd[1502]: time="2025-02-13T19:31:16.052787576Z" level=info msg="TearDown network for sandbox \"cdeca7bc8c5d40767aa8b82f58ac84479cad41960dfa49f467a04bf46aa03b20\" successfully" Feb 13 19:31:16.056166 containerd[1502]: time="2025-02-13T19:31:16.056132833Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cdeca7bc8c5d40767aa8b82f58ac84479cad41960dfa49f467a04bf46aa03b20\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:31:16.056166 containerd[1502]: time="2025-02-13T19:31:16.056164583Z" level=info msg="RemovePodSandbox \"cdeca7bc8c5d40767aa8b82f58ac84479cad41960dfa49f467a04bf46aa03b20\" returns successfully" Feb 13 19:31:16.056422 containerd[1502]: time="2025-02-13T19:31:16.056390376Z" level=info msg="StopPodSandbox for \"06a7a23e17b4ca63f57567aca5f35e0cc59719129998a1820716ea386ece0d87\"" Feb 13 19:31:16.056532 containerd[1502]: time="2025-02-13T19:31:16.056504901Z" level=info msg="TearDown network for sandbox \"06a7a23e17b4ca63f57567aca5f35e0cc59719129998a1820716ea386ece0d87\" successfully" Feb 13 19:31:16.056532 containerd[1502]: time="2025-02-13T19:31:16.056520740Z" level=info msg="StopPodSandbox for \"06a7a23e17b4ca63f57567aca5f35e0cc59719129998a1820716ea386ece0d87\" returns successfully" Feb 13 19:31:16.056790 containerd[1502]: time="2025-02-13T19:31:16.056768164Z" level=info msg="RemovePodSandbox for \"06a7a23e17b4ca63f57567aca5f35e0cc59719129998a1820716ea386ece0d87\"" Feb 13 19:31:16.056790 containerd[1502]: time="2025-02-13T19:31:16.056785988Z" level=info msg="Forcibly stopping sandbox \"06a7a23e17b4ca63f57567aca5f35e0cc59719129998a1820716ea386ece0d87\"" Feb 13 19:31:16.056882 containerd[1502]: time="2025-02-13T19:31:16.056853414Z" level=info msg="TearDown network for sandbox \"06a7a23e17b4ca63f57567aca5f35e0cc59719129998a1820716ea386ece0d87\" successfully" Feb 13 19:31:16.060567 containerd[1502]: time="2025-02-13T19:31:16.060459801Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"06a7a23e17b4ca63f57567aca5f35e0cc59719129998a1820716ea386ece0d87\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:31:16.060567 containerd[1502]: time="2025-02-13T19:31:16.060504625Z" level=info msg="RemovePodSandbox \"06a7a23e17b4ca63f57567aca5f35e0cc59719129998a1820716ea386ece0d87\" returns successfully" Feb 13 19:31:16.060822 containerd[1502]: time="2025-02-13T19:31:16.060786694Z" level=info msg="StopPodSandbox for \"da5ce96c4e01c16bcc0104e0c1afe1e549752b36b8b8ccf8dd21f08debce350e\"" Feb 13 19:31:16.060924 containerd[1502]: time="2025-02-13T19:31:16.060890568Z" level=info msg="TearDown network for sandbox \"da5ce96c4e01c16bcc0104e0c1afe1e549752b36b8b8ccf8dd21f08debce350e\" successfully" Feb 13 19:31:16.060924 containerd[1502]: time="2025-02-13T19:31:16.060907320Z" level=info msg="StopPodSandbox for \"da5ce96c4e01c16bcc0104e0c1afe1e549752b36b8b8ccf8dd21f08debce350e\" returns successfully" Feb 13 19:31:16.061272 containerd[1502]: time="2025-02-13T19:31:16.061218895Z" level=info msg="RemovePodSandbox for \"da5ce96c4e01c16bcc0104e0c1afe1e549752b36b8b8ccf8dd21f08debce350e\"" Feb 13 19:31:16.061351 containerd[1502]: time="2025-02-13T19:31:16.061276543Z" level=info msg="Forcibly stopping sandbox \"da5ce96c4e01c16bcc0104e0c1afe1e549752b36b8b8ccf8dd21f08debce350e\"" Feb 13 19:31:16.061436 containerd[1502]: time="2025-02-13T19:31:16.061390196Z" level=info msg="TearDown network for sandbox \"da5ce96c4e01c16bcc0104e0c1afe1e549752b36b8b8ccf8dd21f08debce350e\" successfully" Feb 13 19:31:16.065846 containerd[1502]: time="2025-02-13T19:31:16.065814466Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"da5ce96c4e01c16bcc0104e0c1afe1e549752b36b8b8ccf8dd21f08debce350e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:31:16.065908 containerd[1502]: time="2025-02-13T19:31:16.065872135Z" level=info msg="RemovePodSandbox \"da5ce96c4e01c16bcc0104e0c1afe1e549752b36b8b8ccf8dd21f08debce350e\" returns successfully" Feb 13 19:31:16.066157 containerd[1502]: time="2025-02-13T19:31:16.066128045Z" level=info msg="StopPodSandbox for \"197b50055a8d04a83b5b696198cca280193728180dffd10631ed830366a0c893\"" Feb 13 19:31:16.066238 containerd[1502]: time="2025-02-13T19:31:16.066215929Z" level=info msg="TearDown network for sandbox \"197b50055a8d04a83b5b696198cca280193728180dffd10631ed830366a0c893\" successfully" Feb 13 19:31:16.066238 containerd[1502]: time="2025-02-13T19:31:16.066232861Z" level=info msg="StopPodSandbox for \"197b50055a8d04a83b5b696198cca280193728180dffd10631ed830366a0c893\" returns successfully" Feb 13 19:31:16.066482 containerd[1502]: time="2025-02-13T19:31:16.066458626Z" level=info msg="RemovePodSandbox for \"197b50055a8d04a83b5b696198cca280193728180dffd10631ed830366a0c893\"" Feb 13 19:31:16.066482 containerd[1502]: time="2025-02-13T19:31:16.066476459Z" level=info msg="Forcibly stopping sandbox \"197b50055a8d04a83b5b696198cca280193728180dffd10631ed830366a0c893\"" Feb 13 19:31:16.066573 containerd[1502]: time="2025-02-13T19:31:16.066542232Z" level=info msg="TearDown network for sandbox \"197b50055a8d04a83b5b696198cca280193728180dffd10631ed830366a0c893\" successfully" Feb 13 19:31:16.070498 containerd[1502]: time="2025-02-13T19:31:16.070458941Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"197b50055a8d04a83b5b696198cca280193728180dffd10631ed830366a0c893\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:31:16.070559 containerd[1502]: time="2025-02-13T19:31:16.070519334Z" level=info msg="RemovePodSandbox \"197b50055a8d04a83b5b696198cca280193728180dffd10631ed830366a0c893\" returns successfully" Feb 13 19:31:16.070849 containerd[1502]: time="2025-02-13T19:31:16.070824626Z" level=info msg="StopPodSandbox for \"1deac3e8f1315485d945ed4d7ff726c16660ae79ed6279b69ac45afcd92e7fed\"" Feb 13 19:31:16.070952 containerd[1502]: time="2025-02-13T19:31:16.070931437Z" level=info msg="TearDown network for sandbox \"1deac3e8f1315485d945ed4d7ff726c16660ae79ed6279b69ac45afcd92e7fed\" successfully" Feb 13 19:31:16.070952 containerd[1502]: time="2025-02-13T19:31:16.070946264Z" level=info msg="StopPodSandbox for \"1deac3e8f1315485d945ed4d7ff726c16660ae79ed6279b69ac45afcd92e7fed\" returns successfully" Feb 13 19:31:16.071193 containerd[1502]: time="2025-02-13T19:31:16.071167439Z" level=info msg="RemovePodSandbox for \"1deac3e8f1315485d945ed4d7ff726c16660ae79ed6279b69ac45afcd92e7fed\"" Feb 13 19:31:16.071230 containerd[1502]: time="2025-02-13T19:31:16.071193017Z" level=info msg="Forcibly stopping sandbox \"1deac3e8f1315485d945ed4d7ff726c16660ae79ed6279b69ac45afcd92e7fed\"" Feb 13 19:31:16.071303 containerd[1502]: time="2025-02-13T19:31:16.071269330Z" level=info msg="TearDown network for sandbox \"1deac3e8f1315485d945ed4d7ff726c16660ae79ed6279b69ac45afcd92e7fed\" successfully" Feb 13 19:31:16.074967 containerd[1502]: time="2025-02-13T19:31:16.074939005Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1deac3e8f1315485d945ed4d7ff726c16660ae79ed6279b69ac45afcd92e7fed\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:31:16.075023 containerd[1502]: time="2025-02-13T19:31:16.074985492Z" level=info msg="RemovePodSandbox \"1deac3e8f1315485d945ed4d7ff726c16660ae79ed6279b69ac45afcd92e7fed\" returns successfully" Feb 13 19:31:16.075203 containerd[1502]: time="2025-02-13T19:31:16.075177332Z" level=info msg="StopPodSandbox for \"a615d24e84e74129058bc9c7cb028777bb16a481796829c1a98ceaf03287d6c5\"" Feb 13 19:31:16.075275 containerd[1502]: time="2025-02-13T19:31:16.075258314Z" level=info msg="TearDown network for sandbox \"a615d24e84e74129058bc9c7cb028777bb16a481796829c1a98ceaf03287d6c5\" successfully" Feb 13 19:31:16.075275 containerd[1502]: time="2025-02-13T19:31:16.075270397Z" level=info msg="StopPodSandbox for \"a615d24e84e74129058bc9c7cb028777bb16a481796829c1a98ceaf03287d6c5\" returns successfully" Feb 13 19:31:16.075601 containerd[1502]: time="2025-02-13T19:31:16.075569118Z" level=info msg="RemovePodSandbox for \"a615d24e84e74129058bc9c7cb028777bb16a481796829c1a98ceaf03287d6c5\"" Feb 13 19:31:16.075601 containerd[1502]: time="2025-02-13T19:31:16.075598834Z" level=info msg="Forcibly stopping sandbox \"a615d24e84e74129058bc9c7cb028777bb16a481796829c1a98ceaf03287d6c5\"" Feb 13 19:31:16.075697 containerd[1502]: time="2025-02-13T19:31:16.075670658Z" level=info msg="TearDown network for sandbox \"a615d24e84e74129058bc9c7cb028777bb16a481796829c1a98ceaf03287d6c5\" successfully" Feb 13 19:31:16.080577 containerd[1502]: time="2025-02-13T19:31:16.080545734Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a615d24e84e74129058bc9c7cb028777bb16a481796829c1a98ceaf03287d6c5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:31:16.080669 containerd[1502]: time="2025-02-13T19:31:16.080601448Z" level=info msg="RemovePodSandbox \"a615d24e84e74129058bc9c7cb028777bb16a481796829c1a98ceaf03287d6c5\" returns successfully" Feb 13 19:31:16.080922 containerd[1502]: time="2025-02-13T19:31:16.080897605Z" level=info msg="StopPodSandbox for \"3ce1ab7c9e3b75163c498673b614ea401518c42e9012bdd7e9cd033bba480db0\"" Feb 13 19:31:16.081023 containerd[1502]: time="2025-02-13T19:31:16.080985269Z" level=info msg="TearDown network for sandbox \"3ce1ab7c9e3b75163c498673b614ea401518c42e9012bdd7e9cd033bba480db0\" successfully" Feb 13 19:31:16.081052 containerd[1502]: time="2025-02-13T19:31:16.081022258Z" level=info msg="StopPodSandbox for \"3ce1ab7c9e3b75163c498673b614ea401518c42e9012bdd7e9cd033bba480db0\" returns successfully" Feb 13 19:31:16.081276 containerd[1502]: time="2025-02-13T19:31:16.081249565Z" level=info msg="RemovePodSandbox for \"3ce1ab7c9e3b75163c498673b614ea401518c42e9012bdd7e9cd033bba480db0\"" Feb 13 19:31:16.081276 containerd[1502]: time="2025-02-13T19:31:16.081272267Z" level=info msg="Forcibly stopping sandbox \"3ce1ab7c9e3b75163c498673b614ea401518c42e9012bdd7e9cd033bba480db0\"" Feb 13 19:31:16.081393 containerd[1502]: time="2025-02-13T19:31:16.081354482Z" level=info msg="TearDown network for sandbox \"3ce1ab7c9e3b75163c498673b614ea401518c42e9012bdd7e9cd033bba480db0\" successfully" Feb 13 19:31:16.084846 containerd[1502]: time="2025-02-13T19:31:16.084823108Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3ce1ab7c9e3b75163c498673b614ea401518c42e9012bdd7e9cd033bba480db0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:31:16.084894 containerd[1502]: time="2025-02-13T19:31:16.084858185Z" level=info msg="RemovePodSandbox \"3ce1ab7c9e3b75163c498673b614ea401518c42e9012bdd7e9cd033bba480db0\" returns successfully" Feb 13 19:31:16.085135 containerd[1502]: time="2025-02-13T19:31:16.085116600Z" level=info msg="StopPodSandbox for \"29892c77e2068e69cefb658e7f5afd0cfb682cfc7981da5bee2ea2b80177b4bc\"" Feb 13 19:31:16.085207 containerd[1502]: time="2025-02-13T19:31:16.085193955Z" level=info msg="TearDown network for sandbox \"29892c77e2068e69cefb658e7f5afd0cfb682cfc7981da5bee2ea2b80177b4bc\" successfully" Feb 13 19:31:16.085231 containerd[1502]: time="2025-02-13T19:31:16.085205647Z" level=info msg="StopPodSandbox for \"29892c77e2068e69cefb658e7f5afd0cfb682cfc7981da5bee2ea2b80177b4bc\" returns successfully" Feb 13 19:31:16.085478 containerd[1502]: time="2025-02-13T19:31:16.085458130Z" level=info msg="RemovePodSandbox for \"29892c77e2068e69cefb658e7f5afd0cfb682cfc7981da5bee2ea2b80177b4bc\"" Feb 13 19:31:16.085513 containerd[1502]: time="2025-02-13T19:31:16.085482065Z" level=info msg="Forcibly stopping sandbox \"29892c77e2068e69cefb658e7f5afd0cfb682cfc7981da5bee2ea2b80177b4bc\"" Feb 13 19:31:16.085592 containerd[1502]: time="2025-02-13T19:31:16.085546656Z" level=info msg="TearDown network for sandbox \"29892c77e2068e69cefb658e7f5afd0cfb682cfc7981da5bee2ea2b80177b4bc\" successfully" Feb 13 19:31:16.089064 containerd[1502]: time="2025-02-13T19:31:16.089038297Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"29892c77e2068e69cefb658e7f5afd0cfb682cfc7981da5bee2ea2b80177b4bc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:31:16.089139 containerd[1502]: time="2025-02-13T19:31:16.089072492Z" level=info msg="RemovePodSandbox \"29892c77e2068e69cefb658e7f5afd0cfb682cfc7981da5bee2ea2b80177b4bc\" returns successfully" Feb 13 19:31:16.089300 containerd[1502]: time="2025-02-13T19:31:16.089279961Z" level=info msg="StopPodSandbox for \"70f85222cbcb79f1b19958776e833cf3199d71272c3f88786a10d32eeba936eb\"" Feb 13 19:31:16.089388 containerd[1502]: time="2025-02-13T19:31:16.089368667Z" level=info msg="TearDown network for sandbox \"70f85222cbcb79f1b19958776e833cf3199d71272c3f88786a10d32eeba936eb\" successfully" Feb 13 19:31:16.089388 containerd[1502]: time="2025-02-13T19:31:16.089377544Z" level=info msg="StopPodSandbox for \"70f85222cbcb79f1b19958776e833cf3199d71272c3f88786a10d32eeba936eb\" returns successfully" Feb 13 19:31:16.089601 containerd[1502]: time="2025-02-13T19:31:16.089570776Z" level=info msg="RemovePodSandbox for \"70f85222cbcb79f1b19958776e833cf3199d71272c3f88786a10d32eeba936eb\"" Feb 13 19:31:16.089630 containerd[1502]: time="2025-02-13T19:31:16.089598528Z" level=info msg="Forcibly stopping sandbox \"70f85222cbcb79f1b19958776e833cf3199d71272c3f88786a10d32eeba936eb\"" Feb 13 19:31:16.089692 containerd[1502]: time="2025-02-13T19:31:16.089666886Z" level=info msg="TearDown network for sandbox \"70f85222cbcb79f1b19958776e833cf3199d71272c3f88786a10d32eeba936eb\" successfully" Feb 13 19:31:16.093218 containerd[1502]: time="2025-02-13T19:31:16.093179757Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"70f85222cbcb79f1b19958776e833cf3199d71272c3f88786a10d32eeba936eb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:31:16.093218 containerd[1502]: time="2025-02-13T19:31:16.093214743Z" level=info msg="RemovePodSandbox \"70f85222cbcb79f1b19958776e833cf3199d71272c3f88786a10d32eeba936eb\" returns successfully" Feb 13 19:31:16.093483 containerd[1502]: time="2025-02-13T19:31:16.093453460Z" level=info msg="StopPodSandbox for \"fd20a9c9c675e34be18747d0be53656eb71ddfce7fadca259852046e986978fd\"" Feb 13 19:31:16.093572 containerd[1502]: time="2025-02-13T19:31:16.093530054Z" level=info msg="TearDown network for sandbox \"fd20a9c9c675e34be18747d0be53656eb71ddfce7fadca259852046e986978fd\" successfully" Feb 13 19:31:16.093572 containerd[1502]: time="2025-02-13T19:31:16.093544962Z" level=info msg="StopPodSandbox for \"fd20a9c9c675e34be18747d0be53656eb71ddfce7fadca259852046e986978fd\" returns successfully" Feb 13 19:31:16.093754 containerd[1502]: time="2025-02-13T19:31:16.093727995Z" level=info msg="RemovePodSandbox for \"fd20a9c9c675e34be18747d0be53656eb71ddfce7fadca259852046e986978fd\"" Feb 13 19:31:16.093754 containerd[1502]: time="2025-02-13T19:31:16.093748804Z" level=info msg="Forcibly stopping sandbox \"fd20a9c9c675e34be18747d0be53656eb71ddfce7fadca259852046e986978fd\"" Feb 13 19:31:16.093828 containerd[1502]: time="2025-02-13T19:31:16.093808226Z" level=info msg="TearDown network for sandbox \"fd20a9c9c675e34be18747d0be53656eb71ddfce7fadca259852046e986978fd\" successfully" Feb 13 19:31:16.097345 containerd[1502]: time="2025-02-13T19:31:16.097297653Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fd20a9c9c675e34be18747d0be53656eb71ddfce7fadca259852046e986978fd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:31:16.097394 containerd[1502]: time="2025-02-13T19:31:16.097347506Z" level=info msg="RemovePodSandbox \"fd20a9c9c675e34be18747d0be53656eb71ddfce7fadca259852046e986978fd\" returns successfully" Feb 13 19:31:16.097602 containerd[1502]: time="2025-02-13T19:31:16.097559624Z" level=info msg="StopPodSandbox for \"9a99dde00b5a0c5753dc22e79a23fa5cb756241ff3d7fe4eed74bd21fcf14d6f\"" Feb 13 19:31:16.097654 containerd[1502]: time="2025-02-13T19:31:16.097637139Z" level=info msg="TearDown network for sandbox \"9a99dde00b5a0c5753dc22e79a23fa5cb756241ff3d7fe4eed74bd21fcf14d6f\" successfully" Feb 13 19:31:16.097654 containerd[1502]: time="2025-02-13T19:31:16.097647569Z" level=info msg="StopPodSandbox for \"9a99dde00b5a0c5753dc22e79a23fa5cb756241ff3d7fe4eed74bd21fcf14d6f\" returns successfully" Feb 13 19:31:16.097926 containerd[1502]: time="2025-02-13T19:31:16.097905513Z" level=info msg="RemovePodSandbox for \"9a99dde00b5a0c5753dc22e79a23fa5cb756241ff3d7fe4eed74bd21fcf14d6f\"" Feb 13 19:31:16.097926 containerd[1502]: time="2025-02-13T19:31:16.097925670Z" level=info msg="Forcibly stopping sandbox \"9a99dde00b5a0c5753dc22e79a23fa5cb756241ff3d7fe4eed74bd21fcf14d6f\"" Feb 13 19:31:16.098015 containerd[1502]: time="2025-02-13T19:31:16.097993778Z" level=info msg="TearDown network for sandbox \"9a99dde00b5a0c5753dc22e79a23fa5cb756241ff3d7fe4eed74bd21fcf14d6f\" successfully" Feb 13 19:31:16.101368 containerd[1502]: time="2025-02-13T19:31:16.101343373Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9a99dde00b5a0c5753dc22e79a23fa5cb756241ff3d7fe4eed74bd21fcf14d6f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:31:16.101427 containerd[1502]: time="2025-02-13T19:31:16.101379921Z" level=info msg="RemovePodSandbox \"9a99dde00b5a0c5753dc22e79a23fa5cb756241ff3d7fe4eed74bd21fcf14d6f\" returns successfully" Feb 13 19:31:16.101619 containerd[1502]: time="2025-02-13T19:31:16.101598090Z" level=info msg="StopPodSandbox for \"238e6f92adeb80ea69f9c84d0bfdba6cf65400a9690652509bdfbb946b589a8f\"" Feb 13 19:31:16.101685 containerd[1502]: time="2025-02-13T19:31:16.101668633Z" level=info msg="TearDown network for sandbox \"238e6f92adeb80ea69f9c84d0bfdba6cf65400a9690652509bdfbb946b589a8f\" successfully" Feb 13 19:31:16.101685 containerd[1502]: time="2025-02-13T19:31:16.101680285Z" level=info msg="StopPodSandbox for \"238e6f92adeb80ea69f9c84d0bfdba6cf65400a9690652509bdfbb946b589a8f\" returns successfully" Feb 13 19:31:16.101876 containerd[1502]: time="2025-02-13T19:31:16.101857878Z" level=info msg="RemovePodSandbox for \"238e6f92adeb80ea69f9c84d0bfdba6cf65400a9690652509bdfbb946b589a8f\"" Feb 13 19:31:16.101908 containerd[1502]: time="2025-02-13T19:31:16.101875881Z" level=info msg="Forcibly stopping sandbox \"238e6f92adeb80ea69f9c84d0bfdba6cf65400a9690652509bdfbb946b589a8f\"" Feb 13 19:31:16.101965 containerd[1502]: time="2025-02-13T19:31:16.101937667Z" level=info msg="TearDown network for sandbox \"238e6f92adeb80ea69f9c84d0bfdba6cf65400a9690652509bdfbb946b589a8f\" successfully" Feb 13 19:31:16.105453 containerd[1502]: time="2025-02-13T19:31:16.105428797Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"238e6f92adeb80ea69f9c84d0bfdba6cf65400a9690652509bdfbb946b589a8f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:31:16.105500 containerd[1502]: time="2025-02-13T19:31:16.105461549Z" level=info msg="RemovePodSandbox \"238e6f92adeb80ea69f9c84d0bfdba6cf65400a9690652509bdfbb946b589a8f\" returns successfully" Feb 13 19:31:16.105757 containerd[1502]: time="2025-02-13T19:31:16.105731625Z" level=info msg="StopPodSandbox for \"737878d4cc9b2ec49100882d0cb04450e80ddc4b588a5fc065c8bb77559dc6ae\"" Feb 13 19:31:16.105853 containerd[1502]: time="2025-02-13T19:31:16.105809441Z" level=info msg="TearDown network for sandbox \"737878d4cc9b2ec49100882d0cb04450e80ddc4b588a5fc065c8bb77559dc6ae\" successfully" Feb 13 19:31:16.105853 containerd[1502]: time="2025-02-13T19:31:16.105820181Z" level=info msg="StopPodSandbox for \"737878d4cc9b2ec49100882d0cb04450e80ddc4b588a5fc065c8bb77559dc6ae\" returns successfully" Feb 13 19:31:16.106037 containerd[1502]: time="2025-02-13T19:31:16.106017461Z" level=info msg="RemovePodSandbox for \"737878d4cc9b2ec49100882d0cb04450e80ddc4b588a5fc065c8bb77559dc6ae\"" Feb 13 19:31:16.106037 containerd[1502]: time="2025-02-13T19:31:16.106039192Z" level=info msg="Forcibly stopping sandbox \"737878d4cc9b2ec49100882d0cb04450e80ddc4b588a5fc065c8bb77559dc6ae\"" Feb 13 19:31:16.106141 containerd[1502]: time="2025-02-13T19:31:16.106110427Z" level=info msg="TearDown network for sandbox \"737878d4cc9b2ec49100882d0cb04450e80ddc4b588a5fc065c8bb77559dc6ae\" successfully" Feb 13 19:31:16.109515 containerd[1502]: time="2025-02-13T19:31:16.109489937Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"737878d4cc9b2ec49100882d0cb04450e80ddc4b588a5fc065c8bb77559dc6ae\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:31:16.109558 containerd[1502]: time="2025-02-13T19:31:16.109520925Z" level=info msg="RemovePodSandbox \"737878d4cc9b2ec49100882d0cb04450e80ddc4b588a5fc065c8bb77559dc6ae\" returns successfully" Feb 13 19:31:16.109778 containerd[1502]: time="2025-02-13T19:31:16.109747069Z" level=info msg="StopPodSandbox for \"d5c2f498a17882b9810f07e4c11851a73e26bc1d64eec044bd1728166fa3fbcf\"" Feb 13 19:31:16.109846 containerd[1502]: time="2025-02-13T19:31:16.109827870Z" level=info msg="TearDown network for sandbox \"d5c2f498a17882b9810f07e4c11851a73e26bc1d64eec044bd1728166fa3fbcf\" successfully" Feb 13 19:31:16.109846 containerd[1502]: time="2025-02-13T19:31:16.109837408Z" level=info msg="StopPodSandbox for \"d5c2f498a17882b9810f07e4c11851a73e26bc1d64eec044bd1728166fa3fbcf\" returns successfully" Feb 13 19:31:16.110154 containerd[1502]: time="2025-02-13T19:31:16.110125710Z" level=info msg="RemovePodSandbox for \"d5c2f498a17882b9810f07e4c11851a73e26bc1d64eec044bd1728166fa3fbcf\"" Feb 13 19:31:16.110215 containerd[1502]: time="2025-02-13T19:31:16.110154734Z" level=info msg="Forcibly stopping sandbox \"d5c2f498a17882b9810f07e4c11851a73e26bc1d64eec044bd1728166fa3fbcf\"" Feb 13 19:31:16.110238 containerd[1502]: time="2025-02-13T19:31:16.110215047Z" level=info msg="TearDown network for sandbox \"d5c2f498a17882b9810f07e4c11851a73e26bc1d64eec044bd1728166fa3fbcf\" successfully" Feb 13 19:31:16.113617 containerd[1502]: time="2025-02-13T19:31:16.113591912Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d5c2f498a17882b9810f07e4c11851a73e26bc1d64eec044bd1728166fa3fbcf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:31:16.113668 containerd[1502]: time="2025-02-13T19:31:16.113622670Z" level=info msg="RemovePodSandbox \"d5c2f498a17882b9810f07e4c11851a73e26bc1d64eec044bd1728166fa3fbcf\" returns successfully" Feb 13 19:31:16.113862 containerd[1502]: time="2025-02-13T19:31:16.113840819Z" level=info msg="StopPodSandbox for \"8b5ece25a3742c4f88914be474455742b0444f124acba7c26fc8cc6a2d292b53\"" Feb 13 19:31:16.113947 containerd[1502]: time="2025-02-13T19:31:16.113918976Z" level=info msg="TearDown network for sandbox \"8b5ece25a3742c4f88914be474455742b0444f124acba7c26fc8cc6a2d292b53\" successfully" Feb 13 19:31:16.113947 containerd[1502]: time="2025-02-13T19:31:16.113928093Z" level=info msg="StopPodSandbox for \"8b5ece25a3742c4f88914be474455742b0444f124acba7c26fc8cc6a2d292b53\" returns successfully" Feb 13 19:31:16.114168 containerd[1502]: time="2025-02-13T19:31:16.114092731Z" level=info msg="RemovePodSandbox for \"8b5ece25a3742c4f88914be474455742b0444f124acba7c26fc8cc6a2d292b53\"" Feb 13 19:31:16.114168 containerd[1502]: time="2025-02-13T19:31:16.114111697Z" level=info msg="Forcibly stopping sandbox \"8b5ece25a3742c4f88914be474455742b0444f124acba7c26fc8cc6a2d292b53\"" Feb 13 19:31:16.114253 containerd[1502]: time="2025-02-13T19:31:16.114179004Z" level=info msg="TearDown network for sandbox \"8b5ece25a3742c4f88914be474455742b0444f124acba7c26fc8cc6a2d292b53\" successfully" Feb 13 19:31:16.117666 containerd[1502]: time="2025-02-13T19:31:16.117638023Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8b5ece25a3742c4f88914be474455742b0444f124acba7c26fc8cc6a2d292b53\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:31:16.117728 containerd[1502]: time="2025-02-13T19:31:16.117677106Z" level=info msg="RemovePodSandbox \"8b5ece25a3742c4f88914be474455742b0444f124acba7c26fc8cc6a2d292b53\" returns successfully" Feb 13 19:31:16.117955 containerd[1502]: time="2025-02-13T19:31:16.117905976Z" level=info msg="StopPodSandbox for \"b5ee6d231558bb9e6055e52293f90a21d1ac84e5b91aa9ef485efd32a23f0760\"" Feb 13 19:31:16.118024 containerd[1502]: time="2025-02-13T19:31:16.117985885Z" level=info msg="TearDown network for sandbox \"b5ee6d231558bb9e6055e52293f90a21d1ac84e5b91aa9ef485efd32a23f0760\" successfully" Feb 13 19:31:16.118024 containerd[1502]: time="2025-02-13T19:31:16.117998409Z" level=info msg="StopPodSandbox for \"b5ee6d231558bb9e6055e52293f90a21d1ac84e5b91aa9ef485efd32a23f0760\" returns successfully" Feb 13 19:31:16.118192 containerd[1502]: time="2025-02-13T19:31:16.118171855Z" level=info msg="RemovePodSandbox for \"b5ee6d231558bb9e6055e52293f90a21d1ac84e5b91aa9ef485efd32a23f0760\"" Feb 13 19:31:16.118238 containerd[1502]: time="2025-02-13T19:31:16.118192153Z" level=info msg="Forcibly stopping sandbox \"b5ee6d231558bb9e6055e52293f90a21d1ac84e5b91aa9ef485efd32a23f0760\"" Feb 13 19:31:16.118269 containerd[1502]: time="2025-02-13T19:31:16.118251565Z" level=info msg="TearDown network for sandbox \"b5ee6d231558bb9e6055e52293f90a21d1ac84e5b91aa9ef485efd32a23f0760\" successfully" Feb 13 19:31:16.121715 containerd[1502]: time="2025-02-13T19:31:16.121683904Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b5ee6d231558bb9e6055e52293f90a21d1ac84e5b91aa9ef485efd32a23f0760\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:31:16.121748 containerd[1502]: time="2025-02-13T19:31:16.121729870Z" level=info msg="RemovePodSandbox \"b5ee6d231558bb9e6055e52293f90a21d1ac84e5b91aa9ef485efd32a23f0760\" returns successfully" Feb 13 19:31:16.122115 containerd[1502]: time="2025-02-13T19:31:16.122083294Z" level=info msg="StopPodSandbox for \"d128712c7ee019c611fbc612ab74775c9fe61e0291a6f84232248274b76f860a\"" Feb 13 19:31:16.122234 containerd[1502]: time="2025-02-13T19:31:16.122218367Z" level=info msg="TearDown network for sandbox \"d128712c7ee019c611fbc612ab74775c9fe61e0291a6f84232248274b76f860a\" successfully" Feb 13 19:31:16.122258 containerd[1502]: time="2025-02-13T19:31:16.122234707Z" level=info msg="StopPodSandbox for \"d128712c7ee019c611fbc612ab74775c9fe61e0291a6f84232248274b76f860a\" returns successfully" Feb 13 19:31:16.122538 containerd[1502]: time="2025-02-13T19:31:16.122515484Z" level=info msg="RemovePodSandbox for \"d128712c7ee019c611fbc612ab74775c9fe61e0291a6f84232248274b76f860a\"" Feb 13 19:31:16.122538 containerd[1502]: time="2025-02-13T19:31:16.122538928Z" level=info msg="Forcibly stopping sandbox \"d128712c7ee019c611fbc612ab74775c9fe61e0291a6f84232248274b76f860a\"" Feb 13 19:31:16.122638 containerd[1502]: time="2025-02-13T19:31:16.122610142Z" level=info msg="TearDown network for sandbox \"d128712c7ee019c611fbc612ab74775c9fe61e0291a6f84232248274b76f860a\" successfully" Feb 13 19:31:16.126187 containerd[1502]: time="2025-02-13T19:31:16.126148691Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d128712c7ee019c611fbc612ab74775c9fe61e0291a6f84232248274b76f860a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:31:16.126235 containerd[1502]: time="2025-02-13T19:31:16.126184939Z" level=info msg="RemovePodSandbox \"d128712c7ee019c611fbc612ab74775c9fe61e0291a6f84232248274b76f860a\" returns successfully" Feb 13 19:31:16.126450 containerd[1502]: time="2025-02-13T19:31:16.126430419Z" level=info msg="StopPodSandbox for \"ddacbc97dad8b654492a3ede378ba89ed873bd2206e50dd63b006c03446e6466\"" Feb 13 19:31:16.126528 containerd[1502]: time="2025-02-13T19:31:16.126513785Z" level=info msg="TearDown network for sandbox \"ddacbc97dad8b654492a3ede378ba89ed873bd2206e50dd63b006c03446e6466\" successfully" Feb 13 19:31:16.126562 containerd[1502]: time="2025-02-13T19:31:16.126526880Z" level=info msg="StopPodSandbox for \"ddacbc97dad8b654492a3ede378ba89ed873bd2206e50dd63b006c03446e6466\" returns successfully" Feb 13 19:31:16.128852 containerd[1502]: time="2025-02-13T19:31:16.126725582Z" level=info msg="RemovePodSandbox for \"ddacbc97dad8b654492a3ede378ba89ed873bd2206e50dd63b006c03446e6466\"" Feb 13 19:31:16.128852 containerd[1502]: time="2025-02-13T19:31:16.126752884Z" level=info msg="Forcibly stopping sandbox \"ddacbc97dad8b654492a3ede378ba89ed873bd2206e50dd63b006c03446e6466\"" Feb 13 19:31:16.128852 containerd[1502]: time="2025-02-13T19:31:16.126833545Z" level=info msg="TearDown network for sandbox \"ddacbc97dad8b654492a3ede378ba89ed873bd2206e50dd63b006c03446e6466\" successfully" Feb 13 19:31:16.131662 containerd[1502]: time="2025-02-13T19:31:16.131625375Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ddacbc97dad8b654492a3ede378ba89ed873bd2206e50dd63b006c03446e6466\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:31:16.131750 containerd[1502]: time="2025-02-13T19:31:16.131669338Z" level=info msg="RemovePodSandbox \"ddacbc97dad8b654492a3ede378ba89ed873bd2206e50dd63b006c03446e6466\" returns successfully" Feb 13 19:31:16.131930 containerd[1502]: time="2025-02-13T19:31:16.131911251Z" level=info msg="StopPodSandbox for \"9b60fb3683d6c0de790a6b7336486ff63455b4499390069c146ad9dffdf344af\"" Feb 13 19:31:16.132001 containerd[1502]: time="2025-02-13T19:31:16.131985831Z" level=info msg="TearDown network for sandbox \"9b60fb3683d6c0de790a6b7336486ff63455b4499390069c146ad9dffdf344af\" successfully" Feb 13 19:31:16.132001 containerd[1502]: time="2025-02-13T19:31:16.131998144Z" level=info msg="StopPodSandbox for \"9b60fb3683d6c0de790a6b7336486ff63455b4499390069c146ad9dffdf344af\" returns successfully" Feb 13 19:31:16.133339 containerd[1502]: time="2025-02-13T19:31:16.132255968Z" level=info msg="RemovePodSandbox for \"9b60fb3683d6c0de790a6b7336486ff63455b4499390069c146ad9dffdf344af\"" Feb 13 19:31:16.133339 containerd[1502]: time="2025-02-13T19:31:16.132280504Z" level=info msg="Forcibly stopping sandbox \"9b60fb3683d6c0de790a6b7336486ff63455b4499390069c146ad9dffdf344af\"" Feb 13 19:31:16.133339 containerd[1502]: time="2025-02-13T19:31:16.132371965Z" level=info msg="TearDown network for sandbox \"9b60fb3683d6c0de790a6b7336486ff63455b4499390069c146ad9dffdf344af\" successfully" Feb 13 19:31:16.135920 containerd[1502]: time="2025-02-13T19:31:16.135889154Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9b60fb3683d6c0de790a6b7336486ff63455b4499390069c146ad9dffdf344af\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:31:16.135988 containerd[1502]: time="2025-02-13T19:31:16.135934179Z" level=info msg="RemovePodSandbox \"9b60fb3683d6c0de790a6b7336486ff63455b4499390069c146ad9dffdf344af\" returns successfully" Feb 13 19:31:16.136189 containerd[1502]: time="2025-02-13T19:31:16.136165483Z" level=info msg="StopPodSandbox for \"dff7a908d6d80b53c7af4ce2f86bc9e08a85c956a953efb2b1f0977164b5d965\"" Feb 13 19:31:16.136282 containerd[1502]: time="2025-02-13T19:31:16.136243639Z" level=info msg="TearDown network for sandbox \"dff7a908d6d80b53c7af4ce2f86bc9e08a85c956a953efb2b1f0977164b5d965\" successfully" Feb 13 19:31:16.136282 containerd[1502]: time="2025-02-13T19:31:16.136252816Z" level=info msg="StopPodSandbox for \"dff7a908d6d80b53c7af4ce2f86bc9e08a85c956a953efb2b1f0977164b5d965\" returns successfully" Feb 13 19:31:16.136529 containerd[1502]: time="2025-02-13T19:31:16.136504458Z" level=info msg="RemovePodSandbox for \"dff7a908d6d80b53c7af4ce2f86bc9e08a85c956a953efb2b1f0977164b5d965\"" Feb 13 19:31:16.136614 containerd[1502]: time="2025-02-13T19:31:16.136531188Z" level=info msg="Forcibly stopping sandbox \"dff7a908d6d80b53c7af4ce2f86bc9e08a85c956a953efb2b1f0977164b5d965\"" Feb 13 19:31:16.136664 containerd[1502]: time="2025-02-13T19:31:16.136624934Z" level=info msg="TearDown network for sandbox \"dff7a908d6d80b53c7af4ce2f86bc9e08a85c956a953efb2b1f0977164b5d965\" successfully" Feb 13 19:31:16.140149 containerd[1502]: time="2025-02-13T19:31:16.140123548Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dff7a908d6d80b53c7af4ce2f86bc9e08a85c956a953efb2b1f0977164b5d965\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:31:16.140205 containerd[1502]: time="2025-02-13T19:31:16.140157823Z" level=info msg="RemovePodSandbox \"dff7a908d6d80b53c7af4ce2f86bc9e08a85c956a953efb2b1f0977164b5d965\" returns successfully" Feb 13 19:31:16.140417 containerd[1502]: time="2025-02-13T19:31:16.140388205Z" level=info msg="StopPodSandbox for \"54b3ba93a73466d89b8ae0c6cf3e8e736b78e767106d426c2c1ec3a3288a83ac\"" Feb 13 19:31:16.140531 containerd[1502]: time="2025-02-13T19:31:16.140465570Z" level=info msg="TearDown network for sandbox \"54b3ba93a73466d89b8ae0c6cf3e8e736b78e767106d426c2c1ec3a3288a83ac\" successfully" Feb 13 19:31:16.140531 containerd[1502]: time="2025-02-13T19:31:16.140474747Z" level=info msg="StopPodSandbox for \"54b3ba93a73466d89b8ae0c6cf3e8e736b78e767106d426c2c1ec3a3288a83ac\" returns successfully" Feb 13 19:31:16.140700 containerd[1502]: time="2025-02-13T19:31:16.140682959Z" level=info msg="RemovePodSandbox for \"54b3ba93a73466d89b8ae0c6cf3e8e736b78e767106d426c2c1ec3a3288a83ac\"" Feb 13 19:31:16.140726 containerd[1502]: time="2025-02-13T19:31:16.140700511Z" level=info msg="Forcibly stopping sandbox \"54b3ba93a73466d89b8ae0c6cf3e8e736b78e767106d426c2c1ec3a3288a83ac\"" Feb 13 19:31:16.140776 containerd[1502]: time="2025-02-13T19:31:16.140761325Z" level=info msg="TearDown network for sandbox \"54b3ba93a73466d89b8ae0c6cf3e8e736b78e767106d426c2c1ec3a3288a83ac\" successfully" Feb 13 19:31:16.144123 containerd[1502]: time="2025-02-13T19:31:16.144087656Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"54b3ba93a73466d89b8ae0c6cf3e8e736b78e767106d426c2c1ec3a3288a83ac\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:31:16.144123 containerd[1502]: time="2025-02-13T19:31:16.144119605Z" level=info msg="RemovePodSandbox \"54b3ba93a73466d89b8ae0c6cf3e8e736b78e767106d426c2c1ec3a3288a83ac\" returns successfully" Feb 13 19:31:18.175819 systemd[1]: Started sshd@17-10.0.0.116:22-10.0.0.1:39558.service - OpenSSH per-connection server daemon (10.0.0.1:39558). Feb 13 19:31:18.224479 sshd[6126]: Accepted publickey for core from 10.0.0.1 port 39558 ssh2: RSA SHA256:ENn9hOvI2hLUXcV6iHA8gc9Z4CTEPvGDMkoVtxIuYbg Feb 13 19:31:18.226306 sshd-session[6126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:31:18.230749 systemd-logind[1485]: New session 18 of user core. Feb 13 19:31:18.242568 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 19:31:18.355133 sshd[6128]: Connection closed by 10.0.0.1 port 39558 Feb 13 19:31:18.355513 sshd-session[6126]: pam_unix(sshd:session): session closed for user core Feb 13 19:31:18.360931 systemd[1]: sshd@17-10.0.0.116:22-10.0.0.1:39558.service: Deactivated successfully. Feb 13 19:31:18.362971 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 19:31:18.363649 systemd-logind[1485]: Session 18 logged out. Waiting for processes to exit. Feb 13 19:31:18.364719 systemd-logind[1485]: Removed session 18. Feb 13 19:31:23.369332 systemd[1]: Started sshd@18-10.0.0.116:22-10.0.0.1:39570.service - OpenSSH per-connection server daemon (10.0.0.1:39570). Feb 13 19:31:23.427829 sshd[6150]: Accepted publickey for core from 10.0.0.1 port 39570 ssh2: RSA SHA256:ENn9hOvI2hLUXcV6iHA8gc9Z4CTEPvGDMkoVtxIuYbg Feb 13 19:31:23.431840 sshd-session[6150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:31:23.437578 systemd-logind[1485]: New session 19 of user core. Feb 13 19:31:23.443549 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 19:31:23.603703 sshd[6152]: Connection closed by 10.0.0.1 port 39570 Feb 13 19:31:23.606082 sshd-session[6150]: pam_unix(sshd:session): session closed for user core Feb 13 19:31:23.618951 systemd[1]: sshd@18-10.0.0.116:22-10.0.0.1:39570.service: Deactivated successfully. Feb 13 19:31:23.621725 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 19:31:23.622868 systemd-logind[1485]: Session 19 logged out. Waiting for processes to exit. Feb 13 19:31:23.626249 systemd-logind[1485]: Removed session 19. Feb 13 19:31:23.634496 systemd[1]: Started sshd@19-10.0.0.116:22-10.0.0.1:39578.service - OpenSSH per-connection server daemon (10.0.0.1:39578). Feb 13 19:31:23.684301 sshd[6164]: Accepted publickey for core from 10.0.0.1 port 39578 ssh2: RSA SHA256:ENn9hOvI2hLUXcV6iHA8gc9Z4CTEPvGDMkoVtxIuYbg Feb 13 19:31:23.686570 sshd-session[6164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:31:23.694439 systemd-logind[1485]: New session 20 of user core. Feb 13 19:31:23.700864 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 19:31:24.207301 sshd[6166]: Connection closed by 10.0.0.1 port 39578 Feb 13 19:31:24.210149 sshd-session[6164]: pam_unix(sshd:session): session closed for user core Feb 13 19:31:24.219608 systemd[1]: sshd@19-10.0.0.116:22-10.0.0.1:39578.service: Deactivated successfully. Feb 13 19:31:24.222613 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 19:31:24.223703 systemd-logind[1485]: Session 20 logged out. Waiting for processes to exit. 
Feb 13 19:31:24.232830 systemd[1]: Started sshd@20-10.0.0.116:22-10.0.0.1:39592.service - OpenSSH per-connection server daemon (10.0.0.1:39592).
Feb 13 19:31:24.233837 systemd-logind[1485]: Removed session 20.
Feb 13 19:31:24.280283 sshd[6176]: Accepted publickey for core from 10.0.0.1 port 39592 ssh2: RSA SHA256:ENn9hOvI2hLUXcV6iHA8gc9Z4CTEPvGDMkoVtxIuYbg
Feb 13 19:31:24.282715 sshd-session[6176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:31:24.288676 systemd-logind[1485]: New session 21 of user core.
Feb 13 19:31:24.295587 systemd[1]: Started session-21.scope - Session 21 of User core.
Feb 13 19:31:24.779778 kubelet[2600]: I0213 19:31:24.779679 2600 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 13 19:31:24.804509 kubelet[2600]: I0213 19:31:24.804424 2600 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-cfdf6" podStartSLOduration=49.838183877 podStartE2EDuration="57.804401274s" podCreationTimestamp="2025-02-13 19:30:27 +0000 UTC" firstStartedPulling="2025-02-13 19:31:02.291026905 +0000 UTC m=+46.569058221" lastFinishedPulling="2025-02-13 19:31:10.257244302 +0000 UTC m=+54.535275618" observedRunningTime="2025-02-13 19:31:10.701625107 +0000 UTC m=+54.979656433" watchObservedRunningTime="2025-02-13 19:31:24.804401274 +0000 UTC m=+69.082432610"
Feb 13 19:31:25.818713 kubelet[2600]: E0213 19:31:25.818664 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:31:26.283936 sshd[6178]: Connection closed by 10.0.0.1 port 39592
Feb 13 19:31:26.284754 sshd-session[6176]: pam_unix(sshd:session): session closed for user core
Feb 13 19:31:26.301544 systemd[1]: sshd@20-10.0.0.116:22-10.0.0.1:39592.service: Deactivated successfully.
Feb 13 19:31:26.307827 systemd[1]: session-21.scope: Deactivated successfully.
Feb 13 19:31:26.311750 systemd-logind[1485]: Session 21 logged out. Waiting for processes to exit.
Feb 13 19:31:26.323947 systemd[1]: Started sshd@21-10.0.0.116:22-10.0.0.1:49442.service - OpenSSH per-connection server daemon (10.0.0.1:49442).
Feb 13 19:31:26.325010 systemd-logind[1485]: Removed session 21.
Feb 13 19:31:26.370383 sshd[6198]: Accepted publickey for core from 10.0.0.1 port 49442 ssh2: RSA SHA256:ENn9hOvI2hLUXcV6iHA8gc9Z4CTEPvGDMkoVtxIuYbg
Feb 13 19:31:26.372243 sshd-session[6198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:31:26.380424 systemd-logind[1485]: New session 22 of user core.
Feb 13 19:31:26.385075 systemd[1]: Started session-22.scope - Session 22 of User core.
Feb 13 19:31:26.625109 sshd[6200]: Connection closed by 10.0.0.1 port 49442
Feb 13 19:31:26.625398 sshd-session[6198]: pam_unix(sshd:session): session closed for user core
Feb 13 19:31:26.636991 systemd[1]: sshd@21-10.0.0.116:22-10.0.0.1:49442.service: Deactivated successfully.
Feb 13 19:31:26.639290 systemd[1]: session-22.scope: Deactivated successfully.
Feb 13 19:31:26.641532 systemd-logind[1485]: Session 22 logged out. Waiting for processes to exit.
Feb 13 19:31:26.650831 systemd[1]: Started sshd@22-10.0.0.116:22-10.0.0.1:49450.service - OpenSSH per-connection server daemon (10.0.0.1:49450).
Feb 13 19:31:26.652874 systemd-logind[1485]: Removed session 22.
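The kubelet pod_startup_latency_tracker entry above for calico-system/csi-node-driver-cfdf6 reports two durations that are consistent with a simple relationship: the end-to-end duration is watchObservedRunningTime minus podCreationTimestamp, and the SLO duration is that figure minus the time spent pulling images (lastFinishedPulling minus firstStartedPulling). The sketch below only re-derives the logged numbers from the logged timestamps as a plausibility check; it is not kubelet's implementation.

```go
// Sketch: re-derive the "Observed pod startup duration" figures for
// calico-system/csi-node-driver-cfdf6 from the timestamps in the log entry.
// Expected output: e2e = 57.804401274s, slo = 49.838183877s (image-pull time excluded).
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	// Layout matches Go's default time.Time string format used in the log entry.
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-02-13 19:30:27 +0000 UTC")
	firstPull := mustParse("2025-02-13 19:31:02.291026905 +0000 UTC")
	lastPull := mustParse("2025-02-13 19:31:10.257244302 +0000 UTC")
	running := mustParse("2025-02-13 19:31:24.804401274 +0000 UTC")

	e2e := running.Sub(created)          // podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration: e2e minus image-pull time

	fmt.Println("e2e:", e2e) // 57.804401274s
	fmt.Println("slo:", slo) // 49.838183877s
}
```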
Feb 13 19:31:26.693670 sshd[6210]: Accepted publickey for core from 10.0.0.1 port 49450 ssh2: RSA SHA256:ENn9hOvI2hLUXcV6iHA8gc9Z4CTEPvGDMkoVtxIuYbg
Feb 13 19:31:26.695368 sshd-session[6210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:31:26.700756 systemd-logind[1485]: New session 23 of user core.
Feb 13 19:31:26.706590 systemd[1]: Started session-23.scope - Session 23 of User core.
Feb 13 19:31:26.832502 sshd[6212]: Connection closed by 10.0.0.1 port 49450
Feb 13 19:31:26.832993 sshd-session[6210]: pam_unix(sshd:session): session closed for user core
Feb 13 19:31:26.837177 systemd[1]: sshd@22-10.0.0.116:22-10.0.0.1:49450.service: Deactivated successfully.
Feb 13 19:31:26.839403 systemd[1]: session-23.scope: Deactivated successfully.
Feb 13 19:31:26.840117 systemd-logind[1485]: Session 23 logged out. Waiting for processes to exit.
Feb 13 19:31:26.841411 systemd-logind[1485]: Removed session 23.
Feb 13 19:31:30.818339 kubelet[2600]: E0213 19:31:30.818245 2600 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:31:31.855266 systemd[1]: Started sshd@23-10.0.0.116:22-10.0.0.1:49454.service - OpenSSH per-connection server daemon (10.0.0.1:49454).
Feb 13 19:31:31.934461 sshd[6225]: Accepted publickey for core from 10.0.0.1 port 49454 ssh2: RSA SHA256:ENn9hOvI2hLUXcV6iHA8gc9Z4CTEPvGDMkoVtxIuYbg
Feb 13 19:31:31.935953 sshd-session[6225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:31:31.940170 systemd-logind[1485]: New session 24 of user core.
Feb 13 19:31:31.947462 systemd[1]: Started session-24.scope - Session 24 of User core.
Feb 13 19:31:32.120280 sshd[6227]: Connection closed by 10.0.0.1 port 49454
Feb 13 19:31:32.120605 sshd-session[6225]: pam_unix(sshd:session): session closed for user core
Feb 13 19:31:32.125240 systemd[1]: sshd@23-10.0.0.116:22-10.0.0.1:49454.service: Deactivated successfully.
Feb 13 19:31:32.127343 systemd[1]: session-24.scope: Deactivated successfully.
Feb 13 19:31:32.128174 systemd-logind[1485]: Session 24 logged out. Waiting for processes to exit.
Feb 13 19:31:32.129163 systemd-logind[1485]: Removed session 24.
Feb 13 19:31:37.133862 systemd[1]: Started sshd@24-10.0.0.116:22-10.0.0.1:47164.service - OpenSSH per-connection server daemon (10.0.0.1:47164).
Feb 13 19:31:37.198361 sshd[6266]: Accepted publickey for core from 10.0.0.1 port 47164 ssh2: RSA SHA256:ENn9hOvI2hLUXcV6iHA8gc9Z4CTEPvGDMkoVtxIuYbg
Feb 13 19:31:37.200534 sshd-session[6266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:31:37.205602 systemd-logind[1485]: New session 25 of user core.
Feb 13 19:31:37.220551 systemd[1]: Started session-25.scope - Session 25 of User core.
Feb 13 19:31:37.338804 sshd[6268]: Connection closed by 10.0.0.1 port 47164
Feb 13 19:31:37.339267 sshd-session[6266]: pam_unix(sshd:session): session closed for user core
Feb 13 19:31:37.343851 systemd[1]: sshd@24-10.0.0.116:22-10.0.0.1:47164.service: Deactivated successfully.
Feb 13 19:31:37.346185 systemd[1]: session-25.scope: Deactivated successfully.
Feb 13 19:31:37.347079 systemd-logind[1485]: Session 25 logged out. Waiting for processes to exit.
Feb 13 19:31:37.348405 systemd-logind[1485]: Removed session 25.
Feb 13 19:31:42.351494 systemd[1]: Started sshd@25-10.0.0.116:22-10.0.0.1:47176.service - OpenSSH per-connection server daemon (10.0.0.1:47176).
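The repeated kubelet dns.go "Nameserver limits exceeded" errors above indicate that the node's resolver configuration lists more nameservers than the limit of three that the resolver supports, so only the first three (1.1.1.1, 1.0.0.1, 8.8.8.8) are applied and the rest are dropped. A quick check of a resolv.conf-style file for that condition might look like the sketch below; the path and the limit of three are conventional assumptions, and this is not kubelet's own validation code.

```go
// Sketch: count "nameserver" entries in a resolv.conf-style file and flag the
// condition behind the "Nameserver limits exceeded" messages above.
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

const maxNameservers = 3 // conventional resolver limit assumed here

func main() {
	f, err := os.Open("/etc/resolv.conf") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}

	if len(servers) > maxNameservers {
		fmt.Printf("nameserver limit exceeded: %d listed, only the first %d applied: %v\n",
			len(servers), maxNameservers, servers[:maxNameservers])
	} else {
		fmt.Printf("ok: %d nameserver(s): %v\n", len(servers), servers)
	}
}
```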
Feb 13 19:31:42.396141 sshd[6280]: Accepted publickey for core from 10.0.0.1 port 47176 ssh2: RSA SHA256:ENn9hOvI2hLUXcV6iHA8gc9Z4CTEPvGDMkoVtxIuYbg
Feb 13 19:31:42.397798 sshd-session[6280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:31:42.402176 systemd-logind[1485]: New session 26 of user core.
Feb 13 19:31:42.408496 systemd[1]: Started session-26.scope - Session 26 of User core.
Feb 13 19:31:42.524084 sshd[6282]: Connection closed by 10.0.0.1 port 47176
Feb 13 19:31:42.524481 sshd-session[6280]: pam_unix(sshd:session): session closed for user core
Feb 13 19:31:42.528661 systemd[1]: sshd@25-10.0.0.116:22-10.0.0.1:47176.service: Deactivated successfully.
Feb 13 19:31:42.530833 systemd[1]: session-26.scope: Deactivated successfully.
Feb 13 19:31:42.531431 systemd-logind[1485]: Session 26 logged out. Waiting for processes to exit.
Feb 13 19:31:42.532324 systemd-logind[1485]: Removed session 26.
Feb 13 19:31:47.536743 systemd[1]: Started sshd@26-10.0.0.116:22-10.0.0.1:50590.service - OpenSSH per-connection server daemon (10.0.0.1:50590).
Feb 13 19:31:47.579857 sshd[6321]: Accepted publickey for core from 10.0.0.1 port 50590 ssh2: RSA SHA256:ENn9hOvI2hLUXcV6iHA8gc9Z4CTEPvGDMkoVtxIuYbg
Feb 13 19:31:47.581606 sshd-session[6321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:31:47.585901 systemd-logind[1485]: New session 27 of user core.
Feb 13 19:31:47.595566 systemd[1]: Started session-27.scope - Session 27 of User core.
Feb 13 19:31:47.710599 sshd[6323]: Connection closed by 10.0.0.1 port 50590
Feb 13 19:31:47.711030 sshd-session[6321]: pam_unix(sshd:session): session closed for user core
Feb 13 19:31:47.715754 systemd[1]: sshd@26-10.0.0.116:22-10.0.0.1:50590.service: Deactivated successfully.
Feb 13 19:31:47.718542 systemd[1]: session-27.scope: Deactivated successfully.
Feb 13 19:31:47.719259 systemd-logind[1485]: Session 27 logged out. Waiting for processes to exit.
Feb 13 19:31:47.720198 systemd-logind[1485]: Removed session 27.
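The remaining entries repeat the same SSH session lifecycle: systemd starts a per-connection sshd@ service, sshd accepts a publickey login for user core, pam_unix opens and then closes the session, and systemd-logind registers and removes the numbered session. The client side that produces such short-lived sessions can be sketched with golang.org/x/crypto/ssh; the address, user, key path, and the host-key shortcut below are placeholders for illustration only.

```go
// Sketch: open a short-lived SSH session with publickey auth, roughly the
// client side of the "Accepted publickey for core ... / session closed" pairs.
// A real client should verify host keys instead of using InsecureIgnoreHostKey.
package main

import (
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile("/home/core/.ssh/id_rsa") // assumed key location
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		log.Fatal(err)
	}

	cfg := &ssh.ClientConfig{
		User:            "core",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // placeholder shortcut, not for real use
	}

	client, err := ssh.Dial("tcp", "10.0.0.116:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	// Running a trivial command and disconnecting is what yields the
	// "session closed for user core" / "Removed session N" entries on the server.
	if err := sess.Run("true"); err != nil {
		log.Fatal(err)
	}
}
```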